From d6971460df8a2efbacf1e93561bec49f1975ef65 Mon Sep 17 00:00:00 2001
From: qiqiwwang
Date: Sat, 14 Jan 2023 13:20:26 +0800
Subject: [PATCH] Conformance results for v1.26/TencentCloud

Signed-off-by: qiqiwwang
---
 v1.26/tencentcloud/CreateTKECluster.png |   Bin 0 -> 466560 bytes
 v1.26/tencentcloud/PRODUCT.yaml         |     9 +
 v1.26/tencentcloud/README.md            |    22 +
 v1.26/tencentcloud/e2e.log              | 37901 ++++++++++++++++++++++
 v1.26/tencentcloud/junit_01.xml         | 20499 ++++++++++++
 5 files changed, 58431 insertions(+)
 create mode 100644 v1.26/tencentcloud/CreateTKECluster.png
 create mode 100644 v1.26/tencentcloud/PRODUCT.yaml
 create mode 100644 v1.26/tencentcloud/README.md
 create mode 100644 v1.26/tencentcloud/e2e.log
 create mode 100644 v1.26/tencentcloud/junit_01.xml

diff --git a/v1.26/tencentcloud/CreateTKECluster.png b/v1.26/tencentcloud/CreateTKECluster.png
new file mode 100644
index 0000000000000000000000000000000000000000..0097a926e65fbcc5319e8a20b8f3f6cc44fac3cc
GIT binary patch
literal 466560
[466560 bytes of encoded binary PNG data omitted]
znz@F@2K=5G9?($-n*u`PDkR1 zCzY3<=MEbB=4>s%s9XN9ckKJ&GUK&h_B1|$ePPUt5+sXsU+9Y&K9QC)kqlBNFD$mv zX%mm_ds8=bQzxYdBeQ?ikj1h#*Zyh9sC2q2ujnhq4fv=QX#R+y_FEe+t%o{ug_0MV z)6;46`Qv1ARVfEJ&+J?TTgf@rq%~=7>jymk)?DMjt+aVz2uAoM6}| zdST0sHT2aF+Vxmpmq(WDSRM}qoA^g$hsFoJni7 z{E$EmMVp-pD}inMvqMuisyMDHFsCrm+Ks<=pZ4;c#VZaRK8twW#odLXT4~pG9Cmu5 z41u$3AlD|idrZ&Sr!`{@4l7&yG%rr=h0;n%*Ox<@m2dB1huxG|^9CRE$4HAuGIv`C zj}*^+g)mDp?YTvtc;7oTDR|EliXH^|?nsPQ=(*&)iOM|whOwIn+jYnxY_IiDvzjSC zKASdV*|wsQef{M05LPk`^+mdu25MBc5<{EDac=%)eeHU3VngP`zO%dn8=Hy)q2z;% zQ3McuZqjoMjz~4|-v~B6NRs8a3FY(A_)7SuLw0&;&&}#DIsCk%y-{QF5y)sLM>V>Ci+f`p5)uoM8 znTj_{v6stae~`;*uQ_t=NK=VNl4nc!sf4Z&0#|irf;;x^kXxK-N^bL3ob?@M{;vJb z7xw&`)3gUr9Jn}X`a`P6gzYxAk(a%Rn4ab2drfsiIOfYwl-e3IGi{=J&tuuUJEsz~ zS-D;wC_c6C#!a$Wt&mQhLfn@SW~YuSPC#CgJ;AVw{N(iRsN{Zp@o*2d+Iu}(MdF$f zl-ip4r+S%g(y-t5s^a;2ULncQF(dsi*VJp^t~&+|fdi)Z zMpk3(s$d(A?1~l|xm(xtl3jc>D>F8R9@hFh=?+4sVzvS--9D~vv_5f|2J#y*i!tk@$`M!`oasID7<9c6zTT*GT@O!-})L()d+u1 zkhvnv7dbX|-L;aKK&{M89bI2L#B_WoT=b#ZmVkBRatUwAqb5UBEu_quv$H?8QOC#b z9qSvmU6NRhPU0pGz2Ds=Rr20088GwegccKbgV#9sy+wH=sNtv9E+fEfv5~*>ynY=* z;Ys}AnR3`{-LJ|p`x6dquJg+PO(f(iHQWSM&E9K7XDmjjP&+YPqn&@ zS$G26;Z>hKB%a*qpx1u~=hXh`g_&sa$4lbgb|HtbHEwOPWltZT;eC3v-zHyG_dIKx zN|=G2#L#9@&xI-!;CS5W5rxO|k#ybF)RUE#vaB?cjCks7=7Z<_CNs~sF<}#Fr7ikj z@_eaaJp7^-cgL(HB>%FPlG`t71sm@9>ofWfK z^Q}4v71P`qE=R?S$U?ZqHkuaEQxmcuU%;dr z2y-abouiJOb`;jx*^gGgau|LUVUdbGV`^7qxM8pF6dp}< zzt6KN%j!VtB(ImMM8%T^coV9> z`Ek;T58EQ3_X-WQrK|6qSs1F!ogP@4{=f~uw9#EVM!9+yEl-!GjEJG{ray#IwobRi z2v+ll`d0f6%?`XXy=B{%@!<&Z+ER1!)@AQbL;iGzD;}jn17HLF5Eg|oB*Gx|ibHtV zsq#Ayg$FOnet=2nYO9b98!}R4jM89>{5ji2bFxlC@9TNsA2kOmfIQg3OJmlfBtA~t_@%LMG{ZU5^G$A7W#I9NYa;uetU8p~EZA=`t6lKSqEuba6mYnT_vE_MNEfUhU_7;MEX~3`eCaq%0N0 zK0_dHp8LrZA~r#Qbdfk-m8!GNpI$&<|q%|| z)ZIsPFy<8eN?Mkdf;!3FV++~n@R#U06d^2#^ zLjjI5cI^rePv34Raep1c0q&`d&_Jqi2Lya5*( zwLB}83X&_-6-ol9HGwkyw^=NPMsFrfBkJI-%;DS5ZC>C~8H?6yeKtM1MHA7MCF~z) zpkLMv(Xs-O+rq8h{oOZ6Z;VQfMUflT(z{HCzMvIp6W6ubB7^_C4{&GM68#e~N6@n>h;DE7gnv;b386e=kpRYW9j+!4;o8?XUX zi{Xm>5ufsBWsz^3WUi4!S2rSBwMcbOwtVZb;sdSBD^BT{P`pZw;B>EAznZN3mj@dn zkxnW*5(C%J7u*M|i)2r+MNG~4sug(rw1DTTrZP#=V4B@%Gk5;CgxZ)~h-#?$+3$@e&HE?ZO;sjgOvuH?0jsXjba3 zIwO%nH$w=G&arF zX);sWv8Ou6+IIy!*6Tf`z{HznZ5_H&|PF&I$ZLx&B|sF-?IH8{CC&jM7b%6TT4LEd?> ziiC^GI_=4QsGH~PN3bx<&;`5tCgH0Qf)etl>u|JgbVQUkEirpM!SW?AzKg}^02~vx ztM2mwaUTCp$7nS<80j-bsh(TV)^2K){yLuc?xTt`97~8eu6^W_hS?$OgbQGYsB&~7 zJ3syDsVDEjHGZMSz1Zw`mw9wvO;mS>c3_@$d0t^4iR&6aoib83Sh;+<{bDq(~@h*wslfYeTu0 zYxv-hA5mWUh*#RGr2DbQyFT3*K8nUqFQOsw_Lz4nmvY{m9eg+Ewtp;-XG>NMJYBA+ zm*8Whmu8d(#aru=%tt=8_@LoLINOb)rho1YKur1t;p0a|n3`Kb@06LG>lr2KQ8t}=5Fd--@7IMRT{ zhxcjO1Lq&@<2(@$tt~XWgEXmcB41O#y014p-ar53HL5OZPDwDcy}Hv*ghVvf53@p$!g_idW=jtDlm%0+i+2*-Nu=e|36FhbWxC*C8dNdf{_m6mgue{Qu~B>%S)a?*Cs!0Rd^0PC=x*Ltum= zAkxz1Xb>DV2GUbnNl5|ePRY^T4Fd)eqeqV#eD`|4uj})Aw|iipT0F$T?Dr z2~$~q!SbQ2DH>5X8^J!nuuij&G;xb$Zm!5ox^zZS_=H`^GL^eqnl|t1UzR+G13sSM zSTImsV8*0op@z)Pp7$MTyAa*`grs20xw1aL6xTzMH9O|CIMY}h(yTL01=W-)L~o$e z@txt5P0ncT;dQ>Ut7y5BW0M^&v5&y`q2I1I&<;pm$?tHrrnZ&se2`^PFv{^|XAQP` zJEgWrdKy?LTW)4rr1E4Xv{}%ti#szUBp7XAAEMWDGxOCOQK%_*`5a+;coqF6=&}&u zbDTyl$(NC?S;@zm3I|FokF%tC`4`_hAHS*QUvPz%{jjg>@)K3JweD)Q3HoUEf1i&3 zTy@9Ze@K49HS{H@OByh4rTp9O$V$l3_nU{6&}8pm1pNQ5L9ptBQr6Gg4%S z=4HLo-{&`De(1BPT*Zc)D#uX;nmjhYfb5*(4x>jofyVjq4A@Xk4U#*jw3LcbXIk0c zX(}2Y%EzsjudTvfod>ux_c zM*HG8VS)GOHc@lv1>oe6&(h@_|M9ANdfeh%4*%^_tmu(DuIx~}^|^kglS$>qmY&^B zetmkp)OE{rHdo$z3%YpT2Za`_Z4T}XNP;5hUVPi$8N1lY|FsrUt>tmN*Mc0-EQOD5 zVZM0mjMb#o*y&2`v?@_Da&LBuwp=%cz1Lv905X5pT#>5(9( zcd9nrGlBQk>Wb--w<1#Yc*GoT+}REl>Fxz~VCqFc2SsGdiPWkNB_W+M2k@YQ7x-YP 
z$EOjAd`CSZlNJZEsN3YhNbfs@I7jjtD;ELl;pf@$%GIqC*=&WKOGUR&=_;#1RI{8P z1fans$>p|7-#I4K+aEM+ly~3taWiQMg6c40y#*iQHE%9oszvlkfE>!Y9 z^j%2dX-#+KPK@N2(Ey!VjVeV4?sb*ZciijLC4Xt_gLRsfm_PKS-(lEU3tGy6yrmxO zQ8~AOsO+Qh70JBTzqfY>IT_l~I!DJEpHnk|R;>Ffv0!pr<%g|U)%NzR6&0zbIt8l50-F^j0rlNm6^rHOn=*n*Q#{rxh36w#u)T|Gl&i1XCZ6Y)3ui=63{IwMj)m&0& zDb>`clc(oR80@sxTX!-{%XLz(sOVh-9O(-Z7HLecJ>t@Rr~Z8 z2`6s}XO))x*i&GE6?~b?T0dNW-4GF4R*ags3_gDpcZZZ8(Yfy%$Egamu^de3J8A#a zS70QQQ<6*qc|-L53z!;}o=KGEoFTdeHg_}o4IPt|#k_0w%FznGW+7J4Rq%5Sw7?J8gaZ&BxkW1ZH$b=?r2b~-%3-QHdGLC>k^L z2h!Wh5z+B8@75LL^{6Z7f*_Sm<>$r#iQa2G-2QfEBYQ~98$ED7lOA?&grc9OjXg-VVVuox{?xqC2WO4E-L^1|l{T^B>@01|w2GCKF{3sv_T4rcCzJ z?OM^W68z=QP!Ms`M#j$ZnMsv9yRo+Z(a{UkVN@Z4-6e8p6A!~iu|ZPpt@|HB{r8we zU3>sTUZbK&HCNU3YOW2!H%mX2HzwdP@o+F|a{GA8l((5UeF}srbLLDkHh)U?RFh*yuMu!Af8O*>^8R;x{`kX*rit#KGQ}g4+FyGqZw9;t18Mm~c zN1CFuTZB}}X1}AmsGT7N1e&D3Fl>K9Re?T6jlFWkIEjRsxgV+Sb6G0g+<kMR5kgM1@PwQm$m zeJ^gpEnsN8AWgu8F1k|2`Z9ytiX+ z2AZo0YCRb(C7s86JURIkJk8{Bc<+w6F_fjlP8n2CG!EVZANsX{=sPVVEtyuOoX&O$+QFemGD)M-VBy{?wvR8FRGU&Dt@`e0kj^iWAu>Mzf z-9-T~yHi=ma(SPO<(QWbN{vBZ@BsKhIGn5<&pVlk{I@IX$d1T&p0URNdh$vdS5PNo z(*36$r&uG_(QdmQ8va2QVB)PJt{->WeRnx>O2?e#vXB*TQ4y8}`O55snThkh+1!TM zmfEexCNmCl>w4dl9%Ppj;D+4b=qy@8Tf?9LuWjZ>n%axyQd{?kFtHA+U<5aq$RQBO zNN>tQ^!dD)bYql2fQ2Z^*8jkV+nx6*qs2WRzUkXH?7h|fyAu6dUN083&ZXpLsI-A7 zsf37RJ1#d`Pl7Avzty;j>Uy~jnf1b(xu@-dhe7|mLyOmzG!S!qpe5-E~`Kgta-&BhYZ%(6-~c0cw-7vhEs!=w*syKw|n5WrDAS(N?@X5}YsQm|1wDCJE?gt=Mez_+t%WS&IvwKJ+EgXND{ zlH5tUq>a!rS(+`n(Pc|3Z#HNOaa#9iMCgU@O`y_d%F@`f|Lww~_dGWyZI~dhDPs^R zQFFa*4SSBWM(&Wp@>kM%im?0l7*Wq?MIH@$2L$Z2>Q9s)k%l5D+J7E&rc&L$aJ#-r zR{r1l`=WTB7^NB^0IO2dy{79h6(JS2SMw+ql>F?tUrc9R6>T4bwmN@iV}KVaf!Q8AtGNf@4!Yd{r;W{bPr4 z{L*BzwIMOsEJ-jO%lk~KQ_ENmpzay7A1_WhvS?{jRM|$}aZ8xpp*{bGwkDvQ_7jH&_-ADT03&ef8uZ{l2hmZYzbXAMm74di(9H zFqNVS_ZGaU!|mGa;3lfuD3ST3vddS0C|rhRC-^M$^W59Z|GNKlr07dbh%kV&j8t!g zSd7wk`5TzdzBoow-pz}QoO&|k|No|}*aUowT;ts?U+ty=Z7VtB5ytc)>cEGI0s7iY z;}tUbnN}(%D$5wXK9Si~=135>(Nb&mxmIkr+vG45!R1zk z3Pjowha+>GVYih+f4zx;I@AC?>ROp%L!>7ao3cX4)BGqR>0UNg6MrgH+X^LNaLxp~ zX>p1V5>_O2b+VB@DppKdh!-udI=MnWSFG-!z{dKueWswIhc^4TjoEj1nolfk-b^Uhh@I(}U zfSh{jnD;y!rd|L5T}<>j>0t!~bmA!N$L!`LGj8<-10q3_(`Rg&;hyFamnJ!EGo84% z5{R;BNsW(}-`izADp1V5GgD?4m|8>YjkS0o+WX8%-AMeAij&J3RGQD`sjWF$oGY7d zyCmK&P8*AUd$M3`Sbvh8^GQ^-=2r#FH?3?j9{S41=sKMnOEm0lH&Wcp#JFru-#1=4JuSt<~Wh0SH0JEo2T6sk_2| z+NA_HQ?PtvyAl@Py)J^MjYf3}E(bEf@@?M=G99aH!FTzzIAM+2qj0Br&F?#_3bC}) zod5h5h;t^+Aoco##K?cO;CZjMFTA2=?726q=U2c)t@4I+s4d_eXrvI~p2L8eEKV#G zH$xRbq4xV@zeqh`SUm~;8uo8JDY7{a?v?9z(!p$&;_P2;{$b-u$81Xeb;Hi$W(K0= z#fGuYfyviu8B9%lD^~ZoZUJ@dN+%$MvfjE7cEEhR z$CR2$ECpUVu62wLxVDy*8QCiTHB2tUVeL57X#TL#@lmejjvV_{tY1yx#WDRiSQ$&& zs@)8la-~>zwPXga)Q=N4S1FyWd|NQ6AF*iJPNF3l8QS-qP%4A;8TWlD} ze59ZB$|Vx9Eqw@X>mOW+ozPXiHLLMW{Ti~Fz|pc>Rrt(hlvr;FTJ1Ks2SJjk0A~J_ zML#bg%ifvh;)6O_IsZtcleg*gr*2UDIIRAaz;bf$iEFQr3JzMBwqm#wMLFWZB!hTO zusO=`_4W)aP^g_1DQ6F$68J+tkwUwwn}O?*^^(KR7GKCB0(9cmunTrpGM*L6D12xI zU-Z^|2c%va*Z+s(yO7JHQnmBAf+3PRsZ?=w4bb zAi*SmJ0OAP`{-jU5j13O84iU_SyGFcGQ=~1x5b5-b`e5R1o%pyUm5Ol*k-4}hXxMi zC7>jk)Z}!Mvv?`y)nc|GySSh?-X8<)@JSP_*01!U$tqjOU6(iQa75n6CMm4jKB_v{ zciVHx1-02L%&`@{8s%g8#L3Dq{1c|2zat8G*eXBudQr+jpRoeuWRC=bd}LM?ga(t$HV;k?n-+CZz#2U}CB(M;LcHDX1C?5(uKa z(3KPLe;QP@Ah{8v;723fs&NW^0IyPfxKzkiE-5r~b^Ie>KZ{VTL$?Gh?Nvtaj^&mX zo7ipVQNn_y!MF`C(tSK-<)1IRl|_g$<@&SvhSX+3;)2vr}?}2W-%swa1Y|(W}@~HWXDS-~hZ*I#WIRv%jhVeO@`w9J>&C z--Ur&^EVMH9AThYDeF=BEy1@cSEI$Z3ILbT_1#)jrCq;#73-oF1Lk@_jcI%{Zh712 z_xr0XF@B)5`&NaoL2+qS+;+{d!e=9XY1JG`qfTK4*7JXLon*%P<%LwI4QxjV69u$e z!6hJ~nw8oB*4;Gc4nv0AEVHQWkM;eQc=|jwgd}^U{POtD@{@hvp~RsDFtOual2ru; 
zzHv0Egfq&sSzh9V(0w%1=s7#<=y^roR7yTrt_9;FkFh~lrVnn_w_>@!O}P(QnvVxa}62;KMSwTv(d3&d7kp=UE5OamMyt_(-_~s zCET8}d0*w2(ysp?$#@4<__~<+E{cC+e&fZJNaA!EFn0X6^iybyC-wQ=$@ufD}gUMa{V)fL%fAUnRzS;PP?K7oRXW&!k_cko#yH=U>-;hi6dRZ!-CL*7%)&Fj(WH=jMl=HK%1+ z{U&L=#iA&YS*|52OLJAu<&=F%#vB@^&Q+a~B1F z@w-a6XEpJ-eer;Eg&z)o<6E@f6Y6^2*ZHmI-RR^3aJLT9qy50%QeVXCD(A-Qq{E`>AWRt+(GPIVi$_jTI4LC6)@wwdZKaLPb;vyU#FgErmPL^2_3S{fyJBeJ_UhJJ`q>P@tl|;WjZ^lZ$A9{&> zwHEv$F6=skrcgGg$|HS0KFmcO=le@Rp9k}X3}NwEBQrl>74w^WY53zabLV-DQu9j? z2pgMUCHp7nB(~ZyT#V}Pz6&n@A$ate+o`eFo)oEgz*PP} zlI`h_H7eYZ8BYsnq3|rN0)hyX7&cvO^wz=Q)P8P#-f{Tsk*aaH_2UEVnEbY@$oElZ-HaxgT<#`1tmj@VX;JUO6@9e5A$)F(ml%>T@~Eo%SzzR?7b= zRHn^_j7Te+Wi2?}e7_*YIt|0R$yok*=b$!VbRH|p9st$GPL6nz9c@c?H{WbJ3aVi= z^o_p+aroXut$JDQ{g5|56~4b~VYdIuZgBiMGqO%JZUXCw?x?TjhhcQ&Xho!1)hc_d zA&;8&G7`B~W!=6Ar$6(+W&?XtI_^YI((IE!w18V@tLyeZ*c*}0QI%j&?4 zvqvjGur2mfWo?8RZs+g=Raz5z1`rL)TB&J!o)e?sA1AUh<0y9pNvTT-WIE;ofR~>v z2Tb>qyU_YEEPJ&irM>#|J2#subl`EYf2WE5ipa`oa%@~qKuoH-D|IhU;O{TW_C z6XvR;FA{k>{m7Z?=lzYbr1~H^>0}DPh*-AO|0z-+7lljftTaBUtg&7M<|PH_-H&Bi zhb;>_x*?XI+xH?b_(;fD4`={&Wq$e!G?;6y>K5BIAv5LI15$vM96SD(oGFk0-zV$! z3DK&A4cg=B43CKqCA^jT87|M|Bknw7NI=EKiA|1-70l^nNU(k`JbAn^`h7=KF^>J8 zE2oY&J*P&GnX+F6-b^$rPpVy&xGmcbBtD(uoy+|SOTD6e$qfA0j&X&WS-@^$HRtKC zQOMQw8}`kvzsvm9iCXge@qNF%P?FRsUZYNk21H4ajRhsTD%0?mHe|e4wg@c=!@FJF94=zM{rF3BfCEsgLPprxIvb}^ zxjuVo%vJu0v;KN-k~6&0Y4%_atzQDMDjoIk9gwBtFIxP_fT4Y=bsD}uBZ+%*Epwn= z5(7-E7LGDa;sozSsQD!=-?sGQ?!Uz@jdFYDl_UFVBMET)8DNTxU`X?cz$YY&Va546 z1Mgl5l5%!AWtm(1e|kZ)jfmQVw)-ewpSQbNKX#AjCc&z!!eMw%RJUVxn&^ZmlHwQY(!-|tCXyrpc^%@vijB{xbaNt?F*N-PV65@~< zsi>#T3Jgg7pGaO<#| zH$b+HJ$!U(OLUYSm)N(5xHzvK&SQJPEGNtqS-D5W?r~^n4Rma;hFn<(fD0=Ww#Q=L z>8U!yY(o63t?dpDx{md=u13rFSBHK-t1WVL%ig^UizYcUY z0x+E7lyHP9-_?ppnQKRx;Cv)y<}<6*+jRWQY=~7)CuBwqdn4-QyvpJ;m?WC40A-+jl~Oc>%!XH{gJM3p`G4)g3g;RJ|fP@bH+iMc0)y zr(yd(&1DG#WEIox_C}35?AJnTrgXV!lM-U=Sh@>yAfhnPVgQp~IzPleiB$17t%;e+ z^8BPgS^u3YVKn6BJ)|evPn)Xl z1D()c{_QET+k3QX5x3P07NtLl7gt4y+Kx7~PS}Ez62@+~4nO_6mC62OOz7@94-u?I z8IFKF<;I2PPDkv0FO-tQTRE=M!Dt2%%)50{^Bm>Q4{16BP(R={Grgp{jWFW*yO>?O z=i4^PR0n6Xh&c-^cClpk*5^*4kpP|D9F)x*yxBTjo#da98 zPa#+By`CVOfO_V!yqGu1>!CkIiwDodP1t-+@+98QDZQFADAIjhSZvjGwkzJZTMlC@ zHYlx3t-d*}8uX?8#7f}HgMWohNTp%#llDH0HJL>Agh-k%Uz20w<|)J1FxP0{m5;dN zg5d~iY$Q$TeDB5JimPID-Q!G3ihl)*yuzi}`pp(HLgRh3;dJ_U?*^?Gw!;mtY(MdiV6Yf2^a`mf>k7(n-7tT3qqN zuD8CPlEw$KOef}3YVV=GIKAUOJ0c(8EMvv*%D$u(M{BusR<7C*Nb4|Y33s~ef}?Nb z981YkiQbXD#xnQ6olo3TwB!2(u~Ehxqz5gN0eLqf4wY@&mR-42AHgpSK7S|pebT=v zj~)nOzXP0mjq*(RT0e;fR{WTfWH>KZNk6I*{iN~|{f5bgVTX(;n%{Im?{w$^D-mF! 
zTKi#9QFWg;$dIpeeBWxnT2a*qkw0?bIps3Ar3Dg*u|0VfN}q1+jIM6Ym2xmLE{UFT zi*%>!dO(;IUZ!r;5_VJ)!uFugifTv-#oNThIw_k!5>}y#a^TxgN7kq6LSuP_T!BFq z;niie3a;jMD#>+&$vqTru`s<4K3}P*bbju-f%bPeGxLRUPV4&n`_wzHWz;`jP7E#@ zjr^d0!k5h}@l-=7EfY?vOh{ZCdYtJ%Ps-BOjwr>_-u6;eeh9p<6jOit1_=#?OkqV} z>_4sWpTX3b;}{r^(Cd|whwWY^j+e_w;dQJO9(6J+)A9lff76Ta5_}jX!%WZjNP4|W zZ`MnHNkT8flVsvnSND5)#D2uly(Z%;G?{WVXz_;_Oy@LZwHMniPju@X%-WnOTP4i9 z2CzLfmLVkG)i8Jys!pm3nnJ#~y;_>ARx>q#9Qk z!&_a%sTWN!7xASX;(e$&DrhwgyFWm!pYKja@n`^ofcJgk(8j&3j@u@!C3FsHGT|H` zXQ=7;Q0S(M{Eso-zWs3JF{!^~X`xj7hwIZ_Nqd5;9!-L;3YA(*w1x0$tYJWfzPR_T z35#((s(OB~VR%4gbMhw^*X8rtGDz)jP zZm^E=V&4mn`620Ch6$RLOXZk+>J%HgFpDP)y39m zQtfusOu-(t#699IlEsB7q8rN?rd%dkEwxNLxMK_|4cA8nZ(HYjV`Ex>!9D+fOSdGG zabiuL*(|S`xoqj0P4|BHw~$txEGJSk@L8r)?r0=mfC`qbTmI^~d!0;GZ9ZSyxaecr zSpXzsm7KqSuQ}Ovw%s6F`Jb&0whaym5Srt~4na?LDYX))T7CY?lIvG_545amQs*uP z*sYtuQNA1PVue?Rg!R}zBN?+%uc7HFy|JrF;?bm267E-CGnqEZl0Oc!khChZX@w~NV7H4 z%W+=Y!zOm=@;nf(pp_?%7aDE$(_Bvzt54zgaX)uRx6fZ`;O##T3EN6C`rk4gg-+)h z7m_&f7{S{x&v~=jcRe=LKxY%AvJ*UA#CXLK>BV_-AfOx&-1bGV;(U1EahXEZ(6_7D zOb~al=nWr^ab#)Blo^yPWT1N)wkH80}6sJRP8m;a8Yn)!rvSN@Xv;=A_L z%JR|?BqXu$F-NFl97hvd%b`6xow7eG&kL2fn=W|Yh3$h=jCVQd ztt>FG%%mL#@?Cj4k>05UTgSTBG~tK1@Ma!-F!xVI+0Rl4<~uenA_gkY*kh@T*~Vkq z`Fjue=dzfddb7v9FsE@X+i20c+i6&85K7rl=kNfR|0?l#n%Alt?!D+3BO^akQ;!M| z?DYBekrC~ezX?pajM`~N+1|Z*6P!BlEVz)RV5y*zufM_O9Ba@3uO&) zl)h)FcDU%;sB*D(uzY?jplQDlo%om#y5wUa$(HfetQS;9=&mJ&Mhjy#Q>Lg^wGyH0 zeOK=doEN4tgk!EWXw`xMo5@uZzI`*^iCd?~s_M1ymb`w>AH%aKOnOG+_*-S;x^#?X z7c1M1g5-w=TqCtn!95?zd^b1MyGLO{bFJYT)p~eQ$>h=4e{tKXnwj~Rtk?Ii-ku2o z;l>SqYRQx}{A3}e2&umyVoy^`s_G9O=c(T)W%SDS$*x+zI-bqEn0z=pJUE65SZ(S5 zNRjqww2ayToNj+q5@;>zv^Doz=sn)$7tvYh0MszCfJ&bQyo>u=*;nWN-Hs~Ka9g=^m62xViNaIG!~Tq z6i0uuHRJWs93>1TeU5J$ejB!t6_K%{Id&Nrn96T&7A4~ND~4U|2*I(aFeePL5#;t+ z*qz^eTzD#``zqfolbMvgr|RxWGS#+Dox1hCdgjrGuvvaRNmEldn4iuD(xY3i<-MWY z;U2ZoF9goGM2L&uCuxAM49zsEk^w^0h|9B zU-KUrFxGPGWVE0A?WvaR1A&Lwo>xRJAB#ryHc*SA#3557_n$KQRb=|VB8sXDwR5?< z-V$!u8QA@esuk~_HZbL{Us}-mC;|A2JxUWF&eXu)7AFLJhEKur?sEi%aK1=;^g`j} z<>URi0&SfBIJWiAKgWpQ0{-n&CH>L%oA&ppeJ}@uCm)CHGDG?(WGI*ZIgU`n{sQhM zC-wBXJPA>Z1XmmC98D82$Z*GCU)O1WJ&++f!9%7JP)nQ@e5$(HK&wsSqY!^G=sLJw z=%pCfnnt5nZX!{2(RCM<6m^#iy;{k=R5Gjx1Wb3>yEbQ+&Ev(fG5WgHIL+71^03K1 zG;y`leQc->I_IT~X13C|IfT8TRsP%B`eazU{Xzl6ELTA6>=HbAM-Cc!y^G3H2@Yz% z-*~Bo^P(lE^T=E>9QNKC^$nz8B2rvP9Y91n9N-8?f`LzVJ4GV=xY?T!&q4&Cs0tT)5F;C}lRr>>@``(fEcGJCH ze7ccXypw7KWxAHVEOvSlK||K=))wpYK!JiNs1TsyMci@wHp=rsYX11WXQ2(&G|1bF z!L>*?fqv5*bsL_j8Ylh*#4`xdMz;d(YX){` zLn&qg<{ktVh-a8Cvu*fztzkOf!Xr=J-l<3;FkH_Ix=+g9NSK%32<@amt$Q8Bv8}Ed z^7~dJ*1|Fmi8Hi)GJ$YIK2mtd+SzV)xk8cxlc4OT9<;kt)iNZrM>|S9G&`4buQ%xD zq=@3ck*vxNxwUzO;{?ER7WI6Kf0H+RhA2@|gSfSZIS)D&wFJ9BoRieM1;of=OD(s$O^>APP zq2+FUbk~~AqfoVK|H*@b+!p>KvTY{F;hB?<8iTi>8WiGbLGWTx-Ia#C!EAIXpe&8t zzf9FOkI=ajB6=zvu7Q81E|tg9<$(;`;9I>d+lT%2oiFe(+pgrp)@^;Z2;<_|SMeuV z7#DD)jh1v+df0v&ABbVxV!u|^oH8s>-wC4WDapLB#}GXh)W^nTAOA_(0{?1VHf122ZBP ztz8XTVWiiH19qZtwc{TXr#FG{ww2YPgc7;T0tK_1b4q(5zM1|wZLg@!HtX+D398K? zn(4YL$P2FAzg~`{U*=r_7cRKkgsql_@P^$l>9U*os%v1}tc#5p$i#`plWXghVpVf` zx=a5cM<{x?7m@a5coWjj9h5mweT%GLR3`X6mAY!3s8ZG8aYTWsLr2T8mG0xu(*3)z z|H}gsX{B-W_iR^(X13gLDmfz9{$=6{gS~Qj$ zX-A!P3JUf`*3O@ZCEp)(La^hTry9h*yb32-9Gly7FkB|H`dY9mP~!u&xc%jC-msfu zJ}h<9wAL8!Iuw^6HL1zw(n(4~gQ&4UhR&vh9>Q#99|M=aCbl=kq-&7`aF10(Jg!Y? 
zeRA&X@f@o$9@2J^vYz^B=M#b7_rzcM_2&N9>=&89J(_^NIn|QV2eRc`ex5P&Tszzg z4Z)@&WLKoWM|n@HAK&EFUtgx{7wIZIvS5iPZCMJqv##>d`Nl?e3~vM#-ypbM2b77wIK%@KNzYYf0%QQl>gQzbgwX`9yD>7M^J?NApkuIO#={5?Re8W7r@eR$t86J&@een0{9{1t6MG&A!-l-Ci+ zHaV#!tDMn*HfcCJG9RjR@pTHZ{CN_;r=50frmM4~&rdsFuD2!>=XhU`>eg*Zo5{lM zb4uF%agT%6U%z<%pqX@L;F$!HU6O-=RawOIoSqVhUX&e+m+W~fDB!1AS?Wgq4xlGK#R;7n}_zI&&o!ciF$u<>R?R5J$ZYsHp1MC)7HzxtQ3s9%@L zboB}%|MBZ(a6zAs)~XB&#$MSNP-b>btT9M)^o(9f#r(>4{c4_-?asLDYgx5t>g*K% z_0y;~j=8m=VV3+G#H#TOw5wgqqX{ApXz4xO7#7Tz&E*32AH2+CZ6mil;@r_kwwkAA z`<*!$P^{uT_=Lda@7^?;IQm!+G@BPHvK z%(++@D{cQ&=ulnyud{l{FJE_1_n)W`zbu@6mQ4+g$=+|S(9vs(kYE^n*K3I`dUsiM zFg=XZcSyWb(V1fhI4m(vvR7sb*PB2t+Eh#UcD&i1sta0* zcldU7mL-*;P!Iy%%}#}hqU*392J5e007T)s&PwOg9KG#2`ddqfJq7{1#Ejq-B2;tu z|LPquW1f`zximDknq0RDc#N8O_M-EKd5VbZxMzzhByQfSG)?cN#a__)bUBvaoCh`! zgn$$dP^VK-|AdG=Ffz^{E^R>-o?@LI`)P`OXIS6+TUhM|yE(pLxlaCv7hoCT!<)s= zvoXg<+@@(wOw*UwE)G1!j92qDbXa5Gx0O*$q>4QLetzxgDf>+o);H@UxincsMWC|K z4Wf-2dFe7zKGK9iY;j;R2+j(b1?pKn#7Wj8z-=np?o4=t(|UkizgA0A!hP4P+PpJ$ zm4Qe|Q%H_}eu7pkcv;0O)NX9skYU$Q;d-g^SRHk>xgf5@wER=}eG`Z_tnJCn6(`_c zWpDTCg1X~i?sfoSG&)R$x+n9xg#`2KJld{#1T!gZ4*RHG3E{%mJqq_kj;#i0q&VG* zn=sP4;SToig)iLfi(m_Xo|k6uzib#ILQ=>W6)+0zm$q>v&6HbyfTlKLmt8od1HGbx z8VhVq{H_mR(h~s~Um0IdokKcZPT_a;*X&Zm1$EOX^vh;PiwZKo;7;)2;V1Rq$*u5T zcMm$RlOYGI5S!q050|qexeU(CA+;oIBQJpmyQ$Rjg%z&SVhXf_kAUS=$Dx*R$YZ~V zQ{w|EDLaZ|l}vP;L2tkGmGk%|gbJa(S51SXxq7XS%@GnlquHz{p;~WBGmrjNOnDbz zj?v7J1`_*RuDuLH~DnZ&rfXJVp+)U)+W?(#5bZQ<(YYmg-KmA2kyZjn~&9svY%scx-F6HtTY< zU+f8z^9F06((=LsOrtu0sDO2&QNcOf#MCKnvW7cpG43R;@3IpYwJ!X5L>X&d2*3n3 zOiJLG_^y1*E>`5f2^>T(VufT3!YVf zE%_xc|GD~j0#uc)y4kc0_0zYkD86T(BaJ!mZ{Il1TYAF&NdFOg%Ilj)_M}n2*K{a> zJ3pS=fa6e9ZX;fY2=%vHC#`;sF_!LMLJ4#5<3DVh3R6h-uIg^u3I2SEyFFhtrH@2+ z(Ay^eom9wwgA9sJdToDLV%$U_4?87`}ltK_8GQjxaWceQD~uk)3H%ygsknG^v%cS-vvlG!88C_Zc|)XR}F zPjIt6tM+R1Ilr+`rr)(c3ERWky;2;Of#a=Vjg3vYUt6C&?`DPv2E<*>Io%{eR{i!{ zzMS}=3W+DOE$vem-@@%o=?;Cx6)<6vG^SH!Rn5m1bk)nFuws7fXG{0q#}Qn0)T$AW zV_MC{skV|!dl4;pm`?ZznF+N&iw82=m7>jc)=q;Vd!juuJ9u5BZZfZ@u$eKCuCt2S z$7uP1FS`%5_3r8$RTpqe!q;AuU={fZCFKvE@+H-0kI3FGL&|~|7n;iY@!)%By*D6P z$NoFnWjWT7u13SBHGbH-LgE4T7enE>x;pIqF|9-5-#ruj*47Zz5whZ7YCQsi>UdA7)%76<3#Z4ipaj|@w!KIBtdEAh&{&cexqBb`S%3z+M`|G`(X&M5U!A1We!lN}g`atsd`Eq1EiRIl|l z<3YPW>(lj&Oet__f1^?G7fd>#&4|#K((e8geFKT! z<~8u_Z8G%V5AZwtDBe)zc-9B8y)ODQgEsMqmKiVqUF{vmIpUu|qD>?5^xbSGWW{9C zs|G}vO~?K73n%jTwXyo_$@^@%{CuVzr*isk=n%)k`2s^4&%cX&UCDuc?D=wfICvg) z`3D_SOTXKAxnIn7hqZ~@EPE(K$GEe-K{%4si&_za%U9;Z(UW)J=Jc<1(Vv0q{a?Qd zzNoLcy7Od0izGga2S!t{wI3+48+np;bL@3MXLB3C?-}j)q8c^Phu2Z)dPY~| zk%a@i9TB71#l$y3RXqLnVf#@9^w)p?)R~92B9pF4x-hs(>588(|A{2|N9p;Wfxu1V z%fv-E>W_h4*LKVq9Ey>HnvUI7p!^Jb#76>My~J@eE$4sENx%F2HT zJXrp$mmu)RK$@PcI#Depw^XLi7+wLi&ntsFXkm}0x+{Ie>>jrM`}^nBFWoZPk7Z@d z+hR|zy6^IO^)6u&ja_4XX`@KRuosRleJ=WRx5>`RQ6K~LUy7lu<9Gik;EppTadEK7 z#IFjDpur~vDi6@{M`97&B zt&x7AsObSU{Nu8OdwKeU-{$satN?S69(OJ!J1YfpR6g(aH?DI^fZcv@I$x@OaTPA) z#dxr>TaIEIHWm>r#$F&~)0b#~ToX5JY?8h}%NZU_?{+}9Tz96n<8~VZ?)99l&GBoMdiQdI)3QOfVmQKmYydCBItA_m)v? 
z_?2kKkmnje|9!VxWtuHE5J08btnyV(4x~U^%GdEr` zq}6-lL6qNj*4BuGaV#M@;6OFLx8$WZP-r-$QW%lIxGTO5GAU*|dN%H8m&4to9#B*F zi?0+>GdNYsDwXZ=>p{aKVyUCO!&SDbV-!P>nk6ULEPTzf1oIA=RJM@!SDZoH9F^fj%nBL@8o;&O0e|hRRI@Nby6EA|<=h`x+&I;3GT7?b9d}+iJ6a6=x zmAUiGu*NTfa@sMlPu~BDPnl7@&T%@~O>?w9)+FLLn}L(pGMvuc)F40UAly!}zo`;+ zeoxFtYxHGk4e=$Ji?3<$S}kwgs7ehjyXYH-Sm0X~If|CDC{rXdqvkpnNSo_#yzIvL z|FQR-VNGURxFe!qLn2M-f|V*t@2G%EmtG?v(t9rf5fmj9N2MrLdJjl%2_OLlA@tBg zkuC&CAdnDpKhBghGxwD7%>8kH%)dM#?C;xquf5uS*SoY_!50ih!nP&GcUOli!PrbW zqLZv0&*>?DnqF=CgCdV_+yq~hf#`>v*0Iu8VS~0u*NQ(}#NAsnyZU2|yIeeMQfThp z!U-?L(b`}OlGC1TtkZzJ;oCE|w~xs-_5t`u(wX3qqxrm5Q9jbXy_c2%czm0P@0Z7u zAf3jEwd4#zqlTbDCrlYec6KA#*W!llXpwB%Cpm->iT^qNB%4HOb@F?Z5rR1jvr0JI z?hr6;$W{RcXRX_-*d>C<&YjCIxAixI!+iB=IRW^*;E~O2ydhwsM9-16eN#Uh(QQU; zdkQJ}*P>SKstCc|?dC!Xod-FYO_YOuZ|+F#isDxeR)RB$pOj=p=MPhD2}4~VHC!)n z-oj~C=wp!LPNU9nn>d|AmAZS+wvVDTZqI047>0h&lrw_WOE~8ttf}et@{KDAyDBq5%tHOH_iMel z^Q@FNZ=VWcsL;?fka>HDtD_0XAp$jIIx#UHv!ooyVr=-U@22)^iD0O?OT9d_ZZN1m z;YNk0qfzd|dgV$xIUN_8BDaVf4gljGQRt;PUxM(<^}zd&Z@o2-QAwdSVO|J+i^Lco zT?qQXyJr1z|?;^j46SGu9){FVuw_D{{RShvj+baHBLgo6>^WIY+3K&WArO z1@Wq!Yku(l<`Lep^JipMEBbjAtYo*o;#34!?$W0!Rn!m^{QBA1W=iD^F;@UG@%ZY= z;EO!$^%=gHyoruCf`X?Nf?!`em@sc~s)AT7~-n34NxfU@7h8c1&p{q zbjcl_z8DUz_rB8sQt;kKKU7K+4|w95$~?L21|X68kSw0;IzNE;Pbm6=Z=Qm$uY$MuT}u{BX1Rmv|&H=Eh;5m-YupY_|pmzy0F7*F9#A+mB^M z+13+)yRWZdFk7~hh>*ZIP%9@^-7nZU?J>sH!Qu(LC?sCs`~jZwf(r2Zt&$>4{};GY zk$4Pf`4jx?ODOd#Wp1N8Zys{UC7;f~)e0K4%4H?VL2?+qm6r;${fi8B016$AeiPi42k2tOdrs(NgvK75nd2GD;U zS=hu1U=5f<5AF}MZm5x!!G?y3%9t#z}ZmkGeeoG zhD;reaePVlv(;2~TqgHoE#vuqv_>`(CP5(FbOTGFohvdjj#a-bU9Zrm3iQpA@lymB zT#GXhf7iIyqjH3_J?0wueAi9B+?Ha$Oxs=V(rEHZAtd;9iACe+r4QUZC+oT*cehu^ zE}xjET%?y9D$!NzytUw4QW|w|qMGaNw?NQ_1>^gUrBm4BzC55HZ z-LTTr1{%Dp`obw)Uv4$guw~glSi2Hp6!9tOQ<4hRTcsa($8UW>SNI&W*UPq! 
zgCth~&~CEPWmGxPmq60;nBEmrMdE2w?!(W7MSEKUyb7#%gq#8PoM(&D{PXooAEX&^ zDW#Pc98>&Wx;hxIoHTm(km2kN`|R;ueBMLb&b>R_{l^+MrzqvVTy8%4gvP~$0A=%= zRC<75GTp{LXcC_|LPL;z@p>W1<8=b3%nJwyI$`F5kBij;5?zb(S*~<_()_rY-YG!r zCe`@?okQO5C&y$cawfLj>?!trQIAxJC11ac*sN9(_SbGKu)ifIQO5dD8EV*VOYZe{ zJbK~9{m0hW$!=5}oQu$Fm@g+#u0Pc0Fj_&G^>;GrEYwW!#U-2Q0iRxl?Fl`%()?9^ z^y+9OKY0SPe~Z|KAy#^v6>2e-7+xOAOIloNFa*581Gnj=*G_7p znKVqLbo=gKl-mTy#xGd&+%y3(M{VyDFgiDS55j5Wn2n&wW$4-({O*=#3vE-JB zPD^;6Bmtv3JYbYotGGya9^8y(uyyw4EC&WKLs|xbko3ZUXm#u9bPL$t1 z!F5-MNn~`%HrYV7JywgdvCvnb?+qg)#yak|r|pM{dN038)G#!MC>Jm-LcO6Z!h&r@ zC#8b-5)=#?=B>#1bNw@Sf9mm>qQZSMV1w%dTj2mpGrTT7ElB}Si-T<3YJ7*yqY3pn zbJpWt8h#5kkyz63HEN*osjsHOX6wpQA;$lG(Cz-qr?e!%3i-0GPkwEDr|3v6BuA<6 znw^{Nzl9RgicR2jt+xtt6jzug1}_A9hj6bF6oT!gt<@*E6lJD0`;Gz44%6>V4f!U= zaSB3?%8IN+<0UX;;|69B2p%_6!Ech7IKAVM;*A#q894$~DtQ8K~Zf-mwl>FA> zo!DgqF*`N2xOI8LWw1->x4~N6eLXMeDiObA9^MA@?FO$g=>O2pE)F!;(|UJ<6t`Jt z!k{CEU3M%HfDz{v_FxC&ZiWQo#+F`ZJNWdr#mHs)#c)ZCyO}QmC6g0E-}pWP4}S6- zOj^b;9jZ=5Y_wH#d8fNl;ADHt{F$k*pKWt38Uu3pOBw{^aClDHxypk0tPmX#!0?mtWPFg&Rv_(+Pq$Jd43N2%GO(yk~Esn zdo+tp=$v7)s|0LL$bTY9dT|BDp7sbs;n(mki}_W4fBWI`8^s9BA&$YF3=<2RPF(GM z@LXM0ayQAoP=rOu=#4yXNjS^$<2{NcQ25;UG`n;nQN%iO-E2;tNWiAa6LDMuTeZ^U z4K7GYgF~OOZG0}amIUAQ+G_oHnsNd1&6-Py857hm<9L(^-#tw+YBUz1wBUqu97~D| zTYOiXC9*pd!Fr5QFD)t(EDA{cZAf)qh3>O~0kxF1iNL;c)R5u1{P7DdFLY{!Z-xUk z6aoe*0n2$A9_C+)%ngK%WhK2&E0He;WNjzU2jouE-@x!u4vP z{!yAj(%YX}h;seABYm{oegk{1~$y|j^&e@1}T&JUdU3*2-W6AQwH~2UdJhS-O z%7@ujE;OlM0sg*|6LAxRPSa5_>imPvBRhK$eYEDg;BjFJ9ZOP+BTX z3$b00zM}eX6~z7YkB1?Sj?_etm)Je1kYq={!^k7gM_n$)?FQQcE5#bw$Ac6EILnr% zn#1}A4Z&|q>WG_OZRfVm*Cg(TY3=(E3lV!Z5h7L>bSvW_w%vVr{)WTxPO6_6p^?C{pXrz7`xd_!Ij?*1O1Hq+Xqp; zah_hu|DwfeO!ebym<{G?g@4=0Rk8Dkv}}(A37OEBpM4w>H@Imf%~>YISve{kJTrTW zSyd8N2JbDaB(t3ij=BM)T{H+7%7YC_GQPjhcAu2ncQyBc51&Z@K1 zLGJWfZG?se0%#;G>)-0B*D_XWY0WBP|61KDtiq!(=ul#bRUjG%%K%vYqe59CGas?m=AkOkjEbAPwlm`+a?P8O+ml$ED1#Ip`6wpgb9ZYF6)FrG<^lh)o^uZ|A7<-TMyrS)06j@;j zh9KH1Bl6b8gIj7cSjQ$C10xIE%Vx5TJ^Pw(dB@qmV_o0M?u;>N^!Mzau1k~Mv?Dgr zgp>@g#hbbaP(cn(zmy)KbykZ@>^mWb^rRkGbG&fJ7;;uEzQMvm^n;!hEPp#+VQ)h8;isvL&IAOW#eVk zp%SeST&3%qg)vW_D0D&rql!&Ka~KWFB-+f=#y^2#e0)y~onbW@dYI@u#TUo!I_KCy4IJ5t{8}c6n)5|$@EoE*!v&F*O zPq+YInI)#50vp;cUIjZwb=bHzR_4o+dSio2WhaJj>us$J78?3*L9;z$x7#`-Iy)x= z+gIn7mS3C>`FeOMZGYDkIhzo?v`}a$A;V?a!rlh_-nMS4<*ohRjyuw)(4bfoblveT z(B}Pa?O4zIThJ+Ee?vx5Yn!W~#<&f}Wd}1=&QdKd89TmYUjd}$l}7%S&jf zgYRAhV#zwSB-sA4a6*{JX*oSEAAVoEhdeQXtE1j?l?VzAVvsv8(J*L-7mIprKk~r_ z>6L8j1I6uRDi9kpX3T46r(b|6(OOw>2#K-1+P**MgO)d9=!*h=R<&a`DovZ^Vbp5K z=BG?S&iP3$dF?gQ3$4BTI(K0WQ|qnzhUOM}qfi`FJY>H`ln`gAwlLD@XXdr>G@CEf z+p^_j5~G$S!~4$hcma!dm*u{voB40ovFGWL0G}M|*yE6m=iFcyoov@Gi1&0paGcXv zSWtig*VND-b9AO04Xd`A=R$0yh(d_9R|^J|12L$^%M)HWnf}*W^5%Lirqw5%y-ug}>maa0bF&PXHP5~(=dr#y zLoje_l~sOAPQjoy~dm+;!;%^S9tTbVU$hs<_^Wxehfh(f>{<= zo9kTn$ml-nSh=^%fX~@2>lswHICG-!oNcnjS_p-#Af(0B*$U`_z9;uV32Bzf>-o;4d2a_?;K}2t;Nh!}0leFD0Ed7Rn#hNR%pssSM zTjJ1g0ZfpBIspUbT7-6E_97^8@|34qx&4}Kx=u2)4fa<#2L*51r%}e!41`??HD}sqv+jI`$ zS6wNd#GL~jN^B8m`)sZ6LK*iIt|)k7F;`WY6SDJ>ONx4`7s z`;|y7X0&XN@2gu5k*kf*Z-b-L7MG zg5>4P!8azh(g<((Fst@niFwr3K_ij)1n_PWKJ0!vtnMF)mFZmhq+q=Qm1@B<$aSkg;&%#)8kg_v$;A8g%r&0DtjK)$pZE zee#U$dFTQfEph=;H?m}62x%hYo|O$|T)2B7|8m;h5PU?MGyR(=`mYIyy|@aGIkwsr z5DG-K*M>&C<=lm-cCI+2sQPFB&yIm~qhj?2LoV0bjAiNwGBj<+@3rd2ge#=ph%0WV z;A3E=_!qKcv16=FZd$O(JGc2AQJ3*yQNNz6_N#SUOI;F%+35K$t+6U+*9{>O0h^G6 zN{oL-ckWd@%;+u8$tqv$B(rNsgNzS$QZ5Cuoh9lp@TmgQ9#i4h`^q2%yR%{4|K{pz zsN2-5zSRn)xdKcLhg3VZsH-SX1Ey&5t z;P`Qst((z`swciccAv(`QYzm@Dk@0IYNRCRzx2(X z_UE_!!UgcJ*m;Q9RWUw>xgQR8%5 znb;8Zk0}3*Y*FI2X@~XgCfJA3Nnga)>SkTya|eLHuot0Zd*k~yaErtdh 
z3WW!p(ESUGm_9*GO&pCLYWgcu8>3#+dw z|FZW@Llu&-H$_k80Z%rB}@+guO8vwqVb0?xU*@{C5WT*Cqe=A6CzE_nZkdf!}4b7mY$)CZv5R+nSFRJAwGgWIl`KXZIq# zpO*dA=!z)1CG^j^KR-=S8mg#Uo#^v=dqp7OvpfAADu@O3&nhBRh+CU*x`?+ED^}HT zBKg&=?{W?7Btl*CuYWWIYiCSv`iwP(9;3I@1A-hKR{md{kd#~L`&`=I?YdIJJ!q+t zA2WYhoIf)GuScQ8G+)lUjgO%I_TNs?1 zFHGvb40(7F(a`o?!E@7#D?P(+*n!@8KmOT`5Xv zMj$Ek4&9~mb7*$CMsf8Tn3_nJ{r;AwR5NOF3Bd0B#@GIDPG@#gPS@DIkAwOf& z$;$)i_>MN7p6kGj;-!{;31uZ+sJ!)eHrJMK25rPW$bFlcxod1qBOxC8_y@N>^7Q@6cm(?CO_0rzX zGWZEY1_h0V7V9V%{*UxOYyOJVOJ^MZ-fHx(53!*68+{ME49)#9H3EWhz) zem9dpzKr%&`llCXECy;WSJo(RtwRd72*n25;u?IIz93kJNq=;VHjk1LRxz_|rKv6puLgCL%QkZ8?1UyEdJLvY;(Z5RsAop7=XdlUI8@%*0*GAy?`e&CRI`jEj@!Q2v_(SLd#Q*)cWl)&Slj zWm8+`6ed4YH5(_CXe=w;r6fiF(?inrsVYAN#C1j@p53oa*Uwz}u7sOg-|;tVO2vg= z3@>Z1#1EKsYMKaN)=cL(B5m^&?wPA0sOgWLY|IBP8S+i9eqmwSVE&SXiD7IP$3OF4 z>4%~kwHY}kDjf-Br-s^0h5ej^N9%0VMr%C{*dqi z#kqcI0LBQQRP5I}J;&BMDyHwSNjRx*^!#E*71Ns)GBh9K+>d|P%d84daSm)tRMAig zGA??B!VmU>o%lYCroRuV*4HzAtAAjrzwc_;(QKtKnmE2LXJaxQw zzV1pA&mIx(*5prsU8#DjCyicO=(+=7jRj*jQf07Laxycf_xxemS9MVuuX8o;0`MHI zSlJ!7TCjR}uX2KUa$B;G$ut{IrV4wdn{1mM88%k!?AoIcWB+Z(-5&X2F1u_YR}EGp z8qLRHP1z^+AKWU(cO~^RC_oz4ol4)I`}zFm9oZu-*)0?DC}6Yfvek!it0Y>4av-S-tKO(KUERT~ z)Z`Z>;~5mNJ_+2tE~H2YcrA~?TuIo?tPg z3vBg51d}`3djZ_Y(G^(3c7fOSS5H;}DEXAoQ#u%YZS?*R8{hD=BrX9Mm*;3AzT4E5 z?`F{4)9e?LXv>cgxoj>$fuMKLrrDSGxkb`K){+(?=(r37SPcSVY-%?xZ4gN=h-{^^ z*M?5X+C11eYj}K(X~6;{$!>gHz0C+{DSPF}kzB^TT9(U&iXRzGY53KpNks$2j3!WV>|8CRWVBu*gbr71Q7J8GQF>(+C~}NWxh9ly z!^Q(t0PP4|L6^v6NI!6Y+_je|ZEYg;<=}2u)VG`8o?N$u6iJbrOXg9RsPhWC(XAld zc%3w7Rl(*Gpjw8>yI*5Boq?b4khnLv=aX}c;W8tDq3y@Ny%Bl*8B|T_W8SvJJib5D zA-K$?yJ!R{RRUnL9ywO!?f9yF<}Em)0>vd69@kP~`gqJ%^NzcdyC9ESjU~A)W7DE{ zV?CC3P5DImQ&^X9PUe}Zd}|e!_TVL^glrmYV7k#7&G(q0->xe)ZPow)OaKK#nVYL= zBj>8}7<=m1UqSEdJ(lt8YvuVAkJeY9d~K)10mU`O77bn-eO@DR3AtDMy9IAT`tOkv z10@BF*7V*PNror|%)iL(gic<0x@aI^`s1Wd0g+|3!?DD{TTzg2S$PX#=dS4s!7B8v znb$*i2eXfd!y1Vu0TfkvZR)A+)F2_8)OxZAe%m6C`jFzZ8 z?26oEs&(98*WWsNl3hF1rSwDG4G?z}vru#jY4ju0{H^cEC=>C~7g<>I#XPz-KzAc(+fT8{3v6-qD6Ec`l`Bya zUpNF+jyB(a;5zZH@+y^2nX|WPwF|%+SZNX^W%c!w)nJqEfqHf>94wyy@IK;jBmll{ zQ5GW!1UV~3s=5FQ`VLCLoJHsx8WF-80^kjTTA%k>`$c-X7N(m~4tX~>ZDo6+EZc)l z?)vYpbl#`Ct}=W*g3g)@pw#Z0y9{!}b{Ik?5@a?{o)-Q*##x~A z0;Yw5N4uiyef`ZH3sI|M^{@tKz!hPWgL&lkwdVEa=J<~9bzAkmzZKCpI_TI}03bXx zT%GT+4Z*bfAE|5g+ji%Tf6dGJ_}+9WF@TV-fLIRAxcR_E<0+>n*LONkzvLwj_Nb@k z0HCUo=5OI14thYEMC=6?oNUqO1gu-b`RJr;^2(CfD0l0AHhu$x63xu^C{0*b3i^R@ z>NydQha$Csgw%$-J=vh0^@AFN#s0Q8EYA6PMurx{t*r+q2PRhuG?i_1@b;J2GV>Zi zE+v0n$O_B$r0gU1u#AH=p7n&1z({UzYU~2LdXbrrX#X_*p-+|?$)x;?sBu2jMPb}` zd9J@dMuIGhlba>0{pGIt(vzQbye;duR)!>4=b?=VVn^XzSfYEdIc;e}x~PcHUd;du z^P@g7W7!-K_QClfn5kjyU6F}g*ECVfnQ-2CRll__cWs$F32&&}0KknxhL-M$p<;7W z3&=i_B|aB-8`ZbdpQFXE|9Ry#fZ9#q>5=r>>9N`>NvV$kkiE){8@8ICk|c=Z#irFN z=s6|;yL|N$4WLDEZHkPQF3w3-^mUSMg=GSxK(-Qm7o3RDZavxs?0rDRj1H zPV(!zgqxndM^B$io+O15;ugR``?koa8c0F?ueuY+gviCZlvj;8 zY)*5J95|PNc}n8&Tgtd--%;TS4PMYpdpFOY&b2)5T8g+l_v z_Wdsr&E327k+VI`VRP3^i!i0Z5R-}rmTRL03%Q(uJdF$8GWP*`2%sXXTYz(L)^()e zBC<@`^n>4BR^#IoY9raDNG`>sVnyp8a$ zTOAr2|2u@?Kve9W!l!vvvnC@7zX^swjG=TPljDY+WWXQL^q0P5;kUh8vF)bZK-+-i z9R-m4{h2@&Q6s+G7#Xw`OM6BLupJQEb+^F=+H-J|0#-lD1VL>0Yg2QbgCFQi=jsFA ze!${rQ5L|61Ou1}q;)6a-gGVP@kv4$*Suxl6MtNPgQ0~WA)5O^_^uTgx6l~BPf#G6 zM$SeucLu&0QXRt&=J zoPRvayYJTFzj|(=kB-zur%95v%NwQLBWN2S_*pLgnDx)9Tv}@PnDziUk5h?ae_Q8^ zjUOsDPT)y4D1@vpC?pfNSCV+tcNRm)6||qC1)iQH;a5!yI;zjFj@FhBJ!-FTT7TJW z-!s*0zY<@Z-5&4Z1`FunlncxP6n%CunZz_m;5nOUO1Qu1@?|METQIvLC|aNW3&fPj zKHW6U`z#k&_1rB-HVMZiKO%^`XX^lQbrY>{%vLcSdhE@I{cNASonZQA3T+>kq}#Ps z;iq^4b_^VpJ5uipeizgoF68rwF5&l1=YKv->AXjn1u!f=Sg` zexhi)ZwlJSnMzs$@s203`!LlH%g!4lHzd= 
zaMUfV>i^#ZFlyQggY-@sB3&F6#Fvs*ZxT4EOljxz$LCd*gT8y+{<;5l#fpR>mvn`{ zFa&qc0Y+{pY2>?Z0RFIq6(&F~dGMk5|FnlcPp23_r#Uvh75txVh2mqyLf~e_MB~xn zpOOxLc*%b*+kbk^|9_Ta5okZ)^hc0s@*`HF_t@6j`0nuUQ7FG(M(MK0#~~>@Gm@geU*w)%Dn3DPJX2{1@V(cmLviNyZQdSl z{;TO%Y(J6LgWfK! zV0eR3@Q|@>Hm6(wdv80{!T5{ssFK8;?*(puv)^bweeU{&t<|w=B)Q(9edpU)gTLo8 z;8C)QIp`%wMFW4x^}_>KU(shpGCJ9-4?Bm10<0ui5}2SH(QFbz(d<$m$i9A5*z=P& zX5}(~;@NyKA~+vNEL~n7k^ZTI`}dpr>*@blvAz%l6sr{&Y^(aS6E!aHw2Ql$?Zl@a`P)R7Y+9LB$(Te`fGSGx6FqaL;B?WTZ1L%F!Hd z`SID@$IXA_1f03)v1OoTNBbms2vOt+k%#s*K?0~?k6)`YCvr%8T`4xNuR1Fu4+h8~ z%qO*iw!6|45Q*4(q3}}k4-fCMO1e$CpT>*L89x*2Jx{d}$O2o~$vg?_SXD3P1xRlF zrk_?=x5ucQ=4yuxT2nubBVHYfwes-}_B)~_^3 zfW^#BDNGSlmOT5pP7u`{IL8~#EVzH|W7$4k^_DRwmJsfjBF;1q#TC+hLw-PmLmJJ> zq~}TaCEXI2I)}l6jEn^&z}sXGS7;acQ}w)Ez!b$?vwqEo0}-ZxMMDkyJhZf}xO(!P zrqtYh->;r((^}ZtG^7{h)2X5QcSNi9u2--^EemvQT>fkQvK=3vB z_R~Lpp7dE8H@;LuTrTYF*U-5AgWK-roN~er9Y@?f-X50ero6`-ahlfSar}lk8?)^a zVsE_vAdCNUk5bgr124;PM*Ldk<3>gM4Y(SUBYi7JzCFnyEjr=e&Y=oKrvMpzWB)0M z@s|he3;5`J5S+$EC+$C)&Lro%-D_zAn8*g zn*}OhET!mY!b`Bwak@RZz(wr(6q+<9Hz~5j$kf#&vE&Sw(yDlE+sQ0=ph9mGc zBJbqcL3OLBb2lEnLA#DtTx|pv*M4hx@M;ceYsE19&#y1DFllAr7_?-6++LeV z5_g?6bE>!N&8TpnZJU{20y_D@Wi8ZiE5Es@7N1q=h-MzUedL{=`a9A5W~m4yK(B1> z#AfX@QCB%ERK*3*Hr6c{6tYH{N-@OiG+yf#D}c31wMLI^oE5dxW(7h_ok0K$5E_@n z)UU%~yT7-KbZ^`JHdZZs>g+Y{lwjhL{qjKmspQ$54MPk6i%7`?ivas~60Q~jz!8C} z>OFmdt6Oh5p$jM$UA%dXo29#%yUVR-C-2&(L44j_x4GvY7O)i>?i()l@oLQMfUcGx zK!3ts8XZqy;ls@*s#qT!OYm{GNDP9*|Sz5 z8?`L$h7&VqXKrL(ui@0M3fNlKZy>a}8&>7DFfFjAL2K&_w|4cnW0xeKNyfWHxc}&a-clPpJbl&>{$z-JMNU7z= zj>1{l8I#rpBOv&<4GKnzK|NnM_Ohae%*JX;+#x{jGVPLjqPE#Wo+u0E$c$%UKK6<` zWVU9;q)nv6xIAY#7^o^lip6+Fp*BqpRWCA-POV&f!ThMK3#k9?of2c3eWp_cMS~kO zXKMHfuVc?!oOa(UlQ-Gtd`U^XSynouYTEE@rt-|fnc_Zv%6^+KVy9bs!m8e{t97d& zsH+uwcaN^s$nuPJdu`i?otQJ2h#4{C6)}Ux*i3(|Pl&@CN{>Eb;wxduh8=hN z^$(YG6vxB41YKBTlctj#LESPdMdgt$aiK3v#*~eOU5a9oh>id?%_RbR;-^qJ?S*lm z`1#Z`=wn4$56RUDY_Kn=1*-Uf8sh7pEl=gj|;U7)M&vL~A%2#f}e<9^m0e9P}U4Udo zE6nr7>ddpv8e@6RYD{pT`{5u9PST5{nc19m+UxVo)^n=i!dwB1?;8sIVyq)osgIV_ zuhku0X_?t*=W;C-ctW$0DCTGofC`+{r9Q*zbq&yPp(@lfWYNiL-kMj3ZR5;mtZ$qgx%G&PW*EoUp#rQ0P>K@M~T0p!PLkU54UfrmVwr_)Pc5 zjmxZ*L1s>Tij3`VP}R~!iS1FSm|?^4ERPeY9kW6wcgPGVP1!w~=Q-v+YRF=2Z-P7K zYKfLvM0{0^yp$2sdveDL(9FTFH4*d@1bIv)B_0sevx#q)t+dbu4p; zQ>_`e74?+1`esIUWww=_08=4aza?rYGOAcq4?@VR#i>~#2dPm8(&4)AfMKQUDx_0 zzV2zGaxeF+sL_!vD6Pnz7O|>P{=0JtES7I+6x z&$YE=J+?mNyDAo80 zL!EiubUE(FuItd9Ri3MBdm;tMuR=6ou%p9!WA_a9Z6zN>`DI=$4(gtXT+ot7&kTNE!0y0A0Jv=E0`nSt}2mB#wtQ%H{$+iATD6v5%UDrw?%e$=XTTgoqmZw8f>l!@aq za8Uglx)Zo$7TbjCSm4^8i$uTdsNU=GbJ)~WcH;KagrI&8o2^W1uH8kI=aCn8{di6q z8gE=|VR6^>9;fA$7*$;tGz3K^?aZp|)>z8Ad_oBpWZ$+E9*r}!FxCtEyprK6%jq{2 z%Jah+iJA6OQ*nYWpJMrBHa8-J8Rj!4;uK8s{q>cbv2ynLj&O3^?G^)8^3uOJLXeYQbCk&%Ief|DOJ zc?{{^%ssZCZkcJ6qY?r6y&z1}&dxQNZjn41(CvYS6;PqNJf+HhS|jCQp_`H*zeqO^ zYD*6&`SvHN7-6getS~r+utz(pynZF3g4c|Zls9PN5e$ltMAfoCfuaiiLb?Q>SZ+M; z@eLAc%`79&IrBKhW3aqCdP_{$UHAPm1g%ZGtrcC`+-mHcl*Q><<>4zyDrW`dKk_Pa zDis8AcKLI5Rz{O=cMfvm+$-@?@(VaZ`n5<@*Pcl%?n@|q&!1zq%{FLmh5zJC<+lrzvhhR<+D*) z7pyLgM&l1c9)U>AUJ@OGIktCyLl-?PgWTKEd!1mzI4k*d%s%l*Cl zan}ab1;T2c_^??gWCl`4ym6r_`kH?DEbkAPSM)lbq(U?BxoeT!{x*YUHAA4Abzfhp zIdB@zN%|9wV9v>aXWwaE^;N1RQR+(qsomC-nm$t>F8)!<;+Br{HKJaXU)S6{=Z#(S z2@lC)5C8pchEARxu-v0;ch2%+@U7iFzS!)|v#~zX7PE1ogR+`?lk=t4bZhh8s<>pZ zg!^9HbhE4hpte-^;1Amy;6$mLLAAHa1DW0IhOQyNg!@)Esb3+4)FBO4^W=~cr#tt zex;~%zDO^Tun-?+?GNWo<^6%4>Di;MclvtxTGCoQH0j&y(N$W$+dJ>CzZ@DDnSW`b zSLKiIRdw`1#n}Vow0^BbllKk0(t}3y(~p;vHZx7LeJ2I@&drf;pXG58M;)rKc`L&O zUIUZO_nLaI6+$O(I^5o*87Z7PeW`&3UD$sgF^kZUU2K`77qo2 
z@i3-4`6pjhUzj>G`xRB-rxmbTiMG6mtZ5KEFsl$0D;P#b#ITPwUkBcL;T5Y@*yr4T#d`=tBQ8n>ZPM)G@@$DaP zSqV;n*9=$6uMf?kYCE_ehAVk;i^`YtrgNeBW8}IOXy$Okaj3F1lQfunv{k){L8W|o zxvpvBGY|Um;=>S7Ns%|)@(kv+N9BxUEG>n@xs!9Feqwmv%AD>Ka(C!MHz^kFMNP=i zPihTnpqVd-+?XgMZt3m@e6}`Fknl;a1=AU~1qC+R&O!Vy{l`StPXUOXMS4Fq@ewK@ zUvjkVrr)mKP?5=No|n~+wwC)cg%S^VMMML>Br-cIEFed&V)GJdvL?Q>ej-+qB#+H!%Pk z4zy|lptX@=mjD_XG3vh<{h+MeP$NYIoI5R$-t*am_;}+vxphC|J{bNUo)0m6Hq9(( zGN6Z^dA7GqH^>xNfR+%)3`fsIA2DcA=$0sGtgvxhd$!R4>u^N)ZOk;(^e*_F`kH<# zGo=W~(S$QRchm81%Fe!Q%%3>W=j@5_JKbejO*;n=fQrKkcFl~Ric1e1pBNlpzrtLF z-}CrE*r#`V!|+*v1IPVr)#4_mxzRQ63mN*W)>rep*(uA$(jWds2+CGlsDC0fxi`lqb+w}7M ztw_Mn`8+d^pQkzv(7&ni8x&m`=vJWYS@(@05I%#mKyoG5ueGaBr}-kMT7r<^HdAw` zQSd^FZ_d1nJODWiDbq;q2e|0^zLpKuz1_15M`pW+1t$CI@kU*< z)RTzB1@CRN7gjlpjeKLwv;aJiF1lb@vv+c|9Dt9pM9q`MjvWMfK^435bTZ$G<^{Pz zV=5;XlGxKZ4Cbw$ztr@M!@VO#orpD?C@w?wSIItQ-z#9hv8`a=mj&=}wemdX8r+)s zk^gInS?z}!3mr&x{tz>|D*OC)PE2O4H>l-H-1*t^Yx1z!(D{Y&#lqe-(nSc)xHzUgBAa>F66>rd`HdwV|YmF3wFiQ31WoYhv+HdEhu zJ0!-RSYA?6$5%L2cS$db|B<4$nem%z*B(FT=Jq$DR-!+4IEY-hA?i)qj`p;j?;a<} zYGF!Bq~j(F)|NwwTMF%2Rnyyr+Z%rdZ&vDd zspg>-2nrn!2N!-IulXYQ>#=ZBDSvTuotFu)GZCw%{4Y=H&s~xrSDU)jXxgNWze!8C zq+;`Ts-Ez<$Yjp2oAm>8n-I6Uy5I4L$3s-0x9RxE7WIF@8!u@ zHrn?SDZXLU4rk6T7q?Vik?&W%o_GEuwa>HB{h-O5x+%LSs>g~J-Wl;f-6~PAOb3WZ zOE-qyKAuYY!NfIf5=|}{G1hdQ;vY+NA4xE}9G=>KDaonN%upD+zUKd7@4e%i&eFE=5wS$X#yA$54RDa6B1AgsScb0j8Z3a+AV?=5 ziZY`V3ncV1NH3vx0)i9?O%Wml5)qToLV$#XB!v7kQknAlfEyF5wO4eMY~rEor#IN zI?%y98_KOFO1ET+v>?QK`)Vngi>BH5!{Z(iuwHiR>dqQT>eIO6UY9~ckjbpXZ|O2C zUlT?;gcVCQlf#cTS46SL4Ghbb>9ux6Oe3*5k5>o(xZM@Lk2Hc#R+m%_X!KWt-0tDa z9YL3BCVIjF0-*XzOn9N?MN1$iB*#yl!Kf8JyRX-yZi2z5!Xui(=Iw~`t1hz|+R(7U z6xCvk!tu##8F7VauaM$ZXb=Lp)br$SMwxeGk4Hv#zV~h*Azf-9a%7jK0E2?xB(pqC zQhTO^r9FI3WDkT>V^jw2rWtFmehhyV3#!a*7x&JHqDH>+ExXFzN%MjXry-bQBoIn_ z$#(5M zcevt+^U!<3^|C?^<9!{pDQSKjM_oT2(dm(BV^xf7n$B3f*FfjtQQJ^z%{Ig~{eqHA zhXXesZk6Ro6h$@?^kACJMCG)=4pHdNrmK(?W~%Z{JSSb@O%G>!cZX7m>b-)&s4(&IgSDK7cby|47zaDnH z9Xp~qa))dcmE0o=3nnFMKs!ZYzq*W9uO8@al=Ib9c=i|fon$W*x=?X7@R0zgI12zY z+3nZiM+S`M{K$Pj>ayRNq=Yw&vCqT@8wO@%ZykqHVE!W(Mvt=?MoUNLXT9c2>l_0= z0=;)E(6E$z@%&Oz-pA2JJNSUH-f8H~{Ko6HMbf#)uG@~H%5Qm}Fu{tBxu*MD)*b8r z!Oru24D4fs`!C9}suL)$p76I$ixX_(3%{Ln%Skg{eajP|G?>~+{*)Q9!JbV0bUXPu zY~jO`qWhl=8PVTG=Gq>~>?k~|`XL84adBoz=zOEvC;*Nj_pb^-VNre>U-vVA|cpTS|%{*>l;qmNhsSxtW zjj~@?t28$?g|rm%@PlK6hucja6l_I)NgnyCO=MI>$nV<^N*F%Rove9yAhPR5?nd&$(W7fH}Be3`mIUM6(=rnbo72x6#3^a<7i(7xo5RV3>T#oO=J-$WR#;|IRV_n=|64SPiV%PSk zZ{Dp{{+eU!ebCrBM{6I$7F70{HbwT_nZR%t@h!^{=S#inCTl~d{gCOw!*fROAQK~P zwq35A#U>iT+^RRQ0F9t}$Uw^|d9xFH9Pv3)s|B0hL`}qGkWSP;zdDX}WLA=cfpoIG z?)ud3RPjoWUwlR^tR%}#RKU;;3!mR?J`3~5?z}SUrGa&IeQhH0ysKIper+b`Wy}Q^ zu8CEN8K2LV2wQq1*1Uy&0C+UK>d5UcNly)e85*RZT%bhV_VN2`PD+`g$7=v6*a$I? 
z{|GObJ%w)e4ZI=Wou|F>BHmMIoXzq^W@g>1mBSSsCcACx_2s!mKWG$@nAStS*kdJh z4RVI`qHS=O=ECVN8SRhS+?Twf$ZW&H`GOWTLeTWfIp^aw)})jmP+5fHVkI(zsue3^ zf~q_2^N^`NP7g!Ja8XTuJoJSEPng17&5v}-xcIZE_hM_k$8oXib-zSCclU^{eL}bY z*j?*Qw9YVNL%GS{5tCgr#ung0b|X!WP_eaVH$}1W&XPH^nlC7(q73I2b%Oe$n>rap zXWn#ThHni0Zf5@KCun-~ni$G{89tvv`p&jaB4TM1rF1G-&{L*!1U8@&Ulj0v5Y~#wK!EJtuLkz6y`eUNW>pk(u z8JGjZ`I7Bao7D%X3OJV%h7YB+Y-22c3mebr{dU9N9SF!Of=!2|J^#+}VwfC%Enlp% z`$$TazC+8#*?REE)L!e&Z$d;P$G*t5F3S048+i92Kr;gch)DLgwA_azkfoS zr0L}O-CpZ~3dNQGEc;g~vq6KNe42!B&rxT9>PhcuF5Z8+l56KxW1+rK-qC15lmA)s z#E{fL?6^sAuUX?%Yt)C8p8gNpBz*Mz{QMoRAP8>x>Alm^{59|AM@LNElaxnp1x)(m zR-)vzvIF25V2GaT-BkDJiDtx1cC=b#^x=~w+ys*L;=Oo}i%~TZT=h4PDyl1(gOz@d zf@`zA?U>NWUp?)Lj_7d51{ZDbgvn3GxN7s5IW1`u3E@K*wy9!jc8OuQ`DhJ6sf5G-7F2dg8T1v8OZV zVXg3XvA2b#-q;`(Jki3^tyXucTsZuLduQc>pM3V{oV|Z~hqhco=DbsBo0Z~OlZ#Ug z-d(x1jg$Vmb8EsSE0S5YhaG!WHL+;i6Af*;+{c7@>&bX~C|Lfy;dU*oS6#N&uFAx@h!dc4-71o|ERg;x>u#xASi0p-Cg?5JccQ3KM3} z%FB=GQ;r1Df(7a6sWN#L7ei${yw$4?F4@z+Oc~5D+X%y!rtg_@Nl(p@vXJvHnn<(K z;S~-Bz{HQ}InC~?Uu%DHDp$cF)Dym4Av|&*w_#Yyv+R9?vD}@+9gTDi=$^qQxTBbD z6N|v@wY2kYTm$t6yXUDN#a?4KNZngk4q}aZs3R_i3N2si*$M^ZG@MbVzm9mg>Ck}D zNW+g!h)bcNk~xybm-7`=u)U(oB8|=O%K~4At4H5TNbM4PSO9(DiZR4i^cb;LHD|A3 z?S^I{$G&!Llqk)YE|UnI~lL@eb3>^G=l)Nz4S_Q#^Ad!#}OaK z%W5ljvyt^lT$gb14__OQ=O5E4>dGsFrv{Sf(OUXQsAjN0+rQ#|I+djlV(6nId!jsZjJ-{sQG1G5toe zlCJS{A3G`aIJOST`3~VRuj4y^#OkLQ^19?=(AU4OwGQ7a+ zSK?22qmlV8)2^<6-2vfL30a@2)b(F%wa#Wo3}OSH8Y>dneCCfk6Zr=+_z~KC{!L17 zrB?&>XgLF-WzO_EL&d+i+H&_yY4C=-8N}x|e7-Ino_;wl?NwhtlxJHsBN;MN)%Xg- z$QA2#)8#v}{I@!;wIA<&#Tsim6xefg+it*uIevBRRcZi%kBOj8*sJol_TSfCSfV84 z>#Y6=jBz$fvo)yrWYKsTK7nINLTI`x2FYvP0qS1*+NQGm)57IHrM3({Fk6 zN%-B0wzpPkc&7MZ^K`Q7Z@2@`9OK`6;({pR?qgz-WN~-xf3xR*K_qwnU>&e@^7aqr zPOBr*?8P#F=#pKG>mM}17Dm-O}YG4w%A5K5Sr6 zqw{|zyZkG1Kx8yf7m}(>D$nLFrs5#&vFiujAKPj9RxQ4Mh2iHc`q0`6^D_vNs|h?7 zbyxhkB`|6M3B>t7UYHc%`7hnvSj32kh7FGR*42jTIL&RbDu3d6_3WH-V%6kNmDS&B z0GWKRHfv%y<4)4QdaysBw6=sFti$zbyo=*(RP);YNv8=P!1#y=&wJhI2)6?XL}SkH zE;>4N8`8!ld9grs)ILX3>qA>}F#XEh*@CmM&?iwFhsTfA;*x)#_O}mDY^P2P<5|-yb$Qa3_HNH`%Cvk?-wEx-axhdk8xF zlkA(Vlc6;2NtmQ=1T$@=?a}U-dN2}zoxhhm^RJ%t->l}DiPMd(oIImUOP58h;=W`= zcyj3U#Fh{5Dw(=756)7WMS8cSlN2FTP>fL&_pZNCRY>HFvU>e-4y`9VYrXy>`KZ5p z8GxX1Q&><|$~XQ6PFB*aQ46$8=1ynUI>sYTziF#hfX2%l{S-J^gA3e@7}Q$JT1hsM z#szlojv!={9&BcM<)w#@A0ipRG_v8k$Nyj%{-5LVj|b+@f8w+(qR_Mkoyt~B3Z0JK z-7>S&DD_O~iLX!sGPo*6rZiWc2&R>N@??UH!Zu*y9`{01dDr1$=uM0>(kAcgJ?0o? 
z*W7L4KP>nk-}}G&iJ?^F&lcJ6#aKV=ggANE_NkGlhy#5(5r*!Mig}4LziGe%R>-M* zdjQ3n14TYOi}llSm6ROTN#L=KR1|qMz##B_CxdeNZ<>97 zmOl+z_0KdGip|ZJ7NJauoZ)=c;VV0sM)Qo+hA2z2N9OTv#G(j@Y1Q-klFqaoYUzl> zCu)V@d0=(`fmgPh8OKcdcT4~0b}yUU_Id|nwYbQtW#5X&HXfM}bo=Mat<_O$z)}g| zh$H5jSA&~JHs;nlv~|}$Kbm|R+_5T7t#0M>Xe32=_*M1wrAxgSu6AKbdHaq>E?~x- zE^=Cj1taC9Zor4C!;b@4{7vPAFE8)lA8g(K%R0a9g^JiYB^#*`H{LvMdHG%;qW(1T zE?L?jQ=Q^qdZmZ~MUoVsVv0*v$}4%h@QJE8>l}MTe+{V@#8@msDD5}v8vi{L$Z_7pf!$s#}rEBK6)e#DPDXCiQm*!TNili24N+XTX=a zyTe$qwSC2zZQNNvLfX|TG%ZvbMHUOJyy_74{*wcpqt@FZ3l&eh$umlcg_St`zGj2s z#MO}ect*&gs2|oST|T^p&a%T@!`e;^(;DY#qy=I|Q$4hU^Isg#G@)0rLxC&t5w^^T zxlPXFG3+-Of_z9zcJmYy3IK|#L1jP9o0##r^<<@ zscZjP=|aol(RopZC=`aYyM#D2`uCsv4{!Fi-#{309X-VPZ~lh)_euBnbFPH$!K|~D z<7$h;|HrPjo!N2{9u|5e|SOh&j%BJ+fy=tu=;%dEs1a6g@1URNlu`j5VE!AuR>A( z###UQV-1;re!|SB_x}8U|KThD?3#0W2zmi!4}Sde?K}7ni`LTxO9_|%rn=)lIi=vO z?3qb*GJWwk^okF)u|#_`8a{C+(D*Es(E^!~4L z{I7BRgDd!BOaA|597r1U!~gWtPVORo4PSc;MV)AZAlJ!WNDzhF)02PwV-NnhOFwNP zY(A4Je|0?ke}*&P?#bJ3?FVs8#fqK!|NRn@CP0I^(c<*&dEn7c+X$Nnq$^x)5BU3& z`{xh#N+1pUEpwBGzx(j%nl0ccjdv3X|GGZ@=QjVB?`=D_Z_kX~PeYb}YmfXhd-&6L^S1{#vvXn> zws6JjSX?Uq>q>dFc1glEX$4)*`@vpN8PUBbQg`ATh}Zw*=vjZ+BI}h!!)EfoM17G8 z{`gGxesGJ3rl8WXf*qtZI}KbW&FMM}<2yb6-@hcOUFenC5cF~UuMV(lLg=8ey&X#I zs_x7oL!uW-5?hwueXW!!U6?uhw?+S_|M;bC-+`G_HH2-l8Y0A|m7b>N)m1I1G^R|6 zxMUKwxR;zR2oI>!+Fh9Mqc}g@(I)?)BmQ@9{F~m5lRHG=dL$-2Aj86Cv2r-;xOq28 zeM;vI=E6@`Vy^2Iz+;>m|6dK}^lxpI$C*hdB%kb{uX;OPM9emws8$`dZYcp(?@Qiu zXR!)qYGZeDDnSqE&ar%A6ey~cd6vJ4shoNuo?u}aY~8BV5oZTDN4;~!y0<>*bQtx8 zg9`BwcLYxG^s5a7j2&zx);MnXY&HfIUJHrj!~f4mKk|%-!d{Tmi<65(VW*fzPxKK0 zP)1gcX>9Dj$;ktLvGrD`oArmXeVg>^PT%KAEh$5ZbThN82>_BqDbD5;;(flL~DCG zEiGk)E-F$oG@HU!Pr;NUxOj)jU|HYKz=lsRdR?}?#yi#)NP*)HbYvO4MuAo@Veqf&FV$@{~5LzE?7 z`AL*Duk`wMO9q}J;{g-p7~N4rIGXTmn{-&YJ>aUO(`H$yShexHhGn4X5kXH62i$RQ zjPC9h%b;QU19!knB|JYFusMQ29(Q$sr@&bl(${n!TRfS3HrD)ZfZ z1C;P7xUaNI5lPu^XVAL=x*alQ=i*nzCK< zEqbMkoX=X0^hUL@yZ1@{P&CmjFYsY3&z9V;2DVX+`3A|8Je!&?xBd(BdX%;huQZY zH>G}O)k-tJ{VDOfKftj?R_w>#&yi$kV*L`RjMPX7e)%3%p?<)d0j526VU! 
zvi6ODk)p`ZBmQQO`(Athg4#d1mH*6TIT;G^#GG#1zHO(b&4sm4ze?+VzY|++{snuR zpD$&a=P&EG10ITeR9||inJXKG5>&#S zyCv&r<~zsAXPY>4@h6*T6&~tzoU1n1}bi%U4BY($ic4atU-Sq_MfJ zttAS0h!UU{1USDkQgi2&R&>pga+TF9!xsHucO8|V>Rx)x_O73j-Ov4@b5r>)fs3-9 z6CSnq>(7&SoUGU;6_izY5Ji)BLpcQ4P?`q4NQVZ4y0WCV7tj?%2LV&uQS8e&f;E|H zI@M0duOF56{RB2fQ}=9zFWvgnd`;MY==7faki>|@5R5YY2wPfm8qsDo@bQ?nbKzJFvS8s;$%(4(j%IZxR;jhebfukVeiOh zF3m;ToVWOz#xDq?XcFQ5%=AkB1~=ijqmQ+UH=bJSZDd#f2<-QOHm%*)m~!SL<8hM( z()joI2k99J=n2KZ7j1nMaO@}s#+$QBmp-0kw?X4?pzQ%?G<^zp?kU7q|eG;}HwD<(|Ix6(kl%h4^$`{xJJRs`ysLQ@XpVmb=6Q!;@Tc26p31 zpq@ja1!t%$8LLCB4f7jaq+d-I_%wz9%D2Q_#;Tqj@r`z`BZPRUuYNR(PMKKmF)dq7^{Tb8iQv7_-NE1z`51c6H&2L#5M+|-A*wsOAlEBt4#hTwS~job6!G>l1Nz|yYAX|1sPy=4+u z$tDrrLrNPf2l!^YG}qPp*Zgrd`I?A9ZRs6f9m3wrY;F&$8BS9_;J+DB_`%8!S=VQ^6Dj1^%4FUPS~fL(f8qM2C8oer(qnKBlQmXH(*8Ux$=Di z&p>w_{80exI~dydm8v#%_<#QTD#DRD#`;wzyQ6huHGAoHI_7jn^p5$F)3DKGQ`mS} zlh*gwJg=<%HK>lUaRLs0X5FVvhZH47=4a_$M2M&ip`l**G%<-&#wnYMQ`9c zf6Pz2p#$ifF}Do#f3_H7feY7pP@A=Y=f8esERPQAJhj!xBhy6Pl>j_aAV0M;YXEWO zYf|{4ADtg+s5t~hdJW!!F)LA`I~X$GamI0&+O)8hcZgA~2y>w|8iE$uG&S_tUm<$p zJF#y|jI8enz0w(ivYs@g7ljTt9vRlKV37&u>$r{DLz)s3@7Tz}Jlhse`7O`5V+?Uz+0Huek5 z7!6W}@7n2Xreg{fH@yo}qacvXet3A*S2_nchH#>OXmG@c#~p_&I^44+wP$cl;X@`M zIU=k4#(YN!co+_HB+7ijOOFdoIij2|59?};TuvSg%d_`+vxTI)F|E5*SfoiM&M%8t z%Tg+0jVF3czql%8vy%J@Ie)ADX~QU)ljh#ZCcK*WAIOFy=I#1d?_HO`61-R;?+#p( z>5sb5J?yI}0QtLBii5x9%y*qUy=FXdM#<@{Khce1vsuk`Ao#$}pq6^r#J#J0eLvP< zQ)`pkrPbs>{v||Xy5{DF_o~qQ`CcOs8nwdJTWhk}@A$k%I*bmhv5Q2{4$&!lTx^0p)KP6d z58(ZWkTJ}HUz5!JgsZj7?@Ypdq1FjQ<__T;IX#O@=Et{i_VrvNMr~JWUTWWjM}X`H zskJX9gshUOekA6_+*5U_G5+|zu-@a5Pvx{{e>Uo=FfQ&6%uf$EZ0dh1vPeTQGMQld z6H|2;Br-PFxykH(AwuG8L@kt}FSSkbzA5hXu&>kxa9DR^=#8}{y77I(Xy>Lu&=Y?F zI!sBr6L#@2dlwNMJ96*BC}`n0gyKCN9<_r{nwu2A89IDXe4Z;qfiK`GS4PbS^P>nE z%Q3F!XxJDN@p<*k;C(Oq8~h?DL?fOMLLIH5&EF`{7vhKfp#>0$(NMZsD!drJKK(}5 z_a*DX!!jz8z0wn^xT&Au;}&;MNP~)G8A(BG7>n>GqPc{z-dro6+q!Kbgm9dlW^}p5 z+|a;ufutM}>}SjrAu{c{9&Fh~KX#!Q+B_xN!8oWj3xw_9Wva=Qfb)26(_3`-YVoUM zLnt3Rv}-NB6prv<_gY~-gprx489F&$kP&TWMZDxkZCZsO9V6AX^1SH5tQ=`xfjTi% zQ+BCtrS}vo&Ra(u=%=ma={v~grQED-s7>3g+J`co!eN*-vM25GDe5?yIjoPXzMo5A zQaQfcphN2W*g~+gVVQ3 z{7dJJqtEpm!L+b~_*}2X*}V3!+na{JK_t2SfxzDen&lscHGLrT8Q+C3#z%|?*10ZL znh}=`xINmMM={4?K|hIop07hUystxS4xv_k(BD0&7dT%^KlSCK8M5tsU>cg33zjRc|uyZMW`lL&aLW|K4X` z*%Ndng^QujqQ>~`PLp`t(x?C7#Zs#3j0vyzK z{UD`mwD7yLv?>sMZz8J9X;CQwNy=|pO5~1qMEUOX?p@iZ^$C#tQpffg7QM-gdQ`b# zz6HA$1*sCXiMXf-(K_LY^09unrJ+LC;`xnq*K@?o6>X&p4GPMN7i!JR3=%7ZODp5n zWX>%EZ`WFQ9<+hg>b}X2imNOusj54CqpI!b#)%*#GV4RN+i(iv*`*@9jP-CDIr~Bi& zm}1JuE8<=sqeT1&cQQEPMr~)_q{rYmB46q9;WUuGr4ABMfM4D3+8ausXZS8ubTc#E z_Qq*1zZFZZk-4@8+#W_Ed~M38K$+5U&EoulDHo*`3lie_cjOWqdW2`;R_m)xN7WbA zM+=Lrh${YsFe#9tNqacT{*){qz$*$R`|049$k5h?uxWAPU5rSY-+;Q&NPlC=Wp)3Y^*!+MG12zFYNXXB7W~?_|-YR*cHQpPddy+M9`;HTIu;U%md_m6l!c zd96C%QnV6LIz77?;RkQ7=u_CeWp&@8CMftPniHFxm~7{9hFk2#hv6-X!zgC$BomLc z!pu2Oinp7+_|oWsR<^PawxC>@FUCqMfr@J6+RSaj`Co1knz_Bc*wT+NFi>35Z2df` z`>}57iukS+=yAYM&LHQ_?99{+H4gs#W+U;+*_PVkvUM{_Pj4N5+cur?u_vV7jCexJ ziKO~5m(&37-q4@w20K8t&$kaxFefZ=ax|eP#BoX87n@b<>*kd-)t*Vp_pzR$B+oe2 zWNL3jZ9=#Fwb;WEd41NA8~vU8+V9H}8Ntg}0RIbL1XVufp$f zWupnFjfJ4D8Nn`Q=_exd1Dr|*tTM&giZ>O5T(KTP9&xj=6oQU*3Bnu*{lqm?uVIZd z&jx1OZU;$`@kw%anS{I@`Ofa46r@f@A;IU=$jI|E7OFBfWQt-^_jlY z(fw=tJ{uT(KDp@^;TKn$FfLsY>u&f_|6Gk5wzY*@Is3+oZ#KAU&)ZDuKfCvH(5agp zQEBA9GS80L&U??nWs2lSokW{A>9{7F?O5S202 z60d~a4)`0Ga)HCTTQzP!?~B=qV}f2s^3^TAEe8I#A~93dhT_IqTN%Fnhm5ck&!(c) zyX0?D?{j)1jD(es9oU%g)7;P5N0=|gpl~hI<94rt^DcrrV%O{wc3JDjla+zM=GMZV z4+PeP;RHHYpAa%%iZ>iDG}}vK42O@Q?LejC&CPfB4cojy^YUFXd$bONYuL-;VCeDZ 
z&O&F>+4EH?S2<;%P?x?#Wzg$lcyM~^az^u@7`6_>o@};1S^9)ixx`$$Xp{W-cZ@pn zN7?5-lh;Zu{pYEua>vpbtNh#>_TwN%(LT~=m7o9Afpa1u(LI>jW(`7ivGdmqw$uu* zgxIx5K^QU*UKQP+wZdik+z_SLPz=^i-Ap$>+yvxZ4ixOLdU(+dlCfIMNuULTVyA-{ zTXEiSCU>he9)E6hN_cSm!n?jqCtFds>PMccYe;EweW zh#w#6aND?z%kdQnf3r+hJokD^Y3RP|i-Z#4{^WGd7vsiq)q^qf{QTwt3h}i6dK&(n~B$?CmLGaektOF0iV8)$J*6Efgl`A~fIkouSj2 zO|Pq?Q1pgZNIIsD)C_#kkZX}DU^(v)<#G0-gD%&t+|+p{R^(t$mD6Sa95@1b^K~u% z;vA4<#hJ&3;_(EvBX0>*t;btef-{gFxFB0mABwi6E=|zW41LyH1I0DIu+<55TzQE~ z00l#l{!pcxZNkb2oUyKs@}hYya~9~iaHeRDs5)we{e1p9&rqTWy)1lXU(c%A)Tj05 zZ9a;K4_4noy|b_hcXHSIzzuSa2DB zS$Z=xp#7!lP#vgH?mRRDdN#Y61LMucGHsv|N`A8~aWITikKuu7EPv1Jk3y$gZ_st) zsm+dg4cD2!!x3ztG07A-NR&U$F5j<;MeeUwtPA`4Ku2Wy@eyLXX^r9UHQ>|LZF9wF zLR1YP>~~+dP^@9q?oLI{1=h$gM$q6Eb%Tt4^f77;KZN?u(zbnb>sqIB%2f8oiO<81 z5v9#Iyl!GQ=W%cdH`N(`vn9w6Zqq_>ceHX+ZR8Nii?bQQ@^G)DL)E=1jwtZiI_OS9b%u>2g z5w3W&q%YO0412BSwYf;in_`95C$;0)g6S{M_gJ=$($%)9h1CJAYCB)dNR*E(rSVBYfq2`{~=vxV?L)6J@`9dzh`aDPd70Ud=i%V;ZQirtS z$<}0vyP4b6(eA0*p#elaW9qBvOx6|nOw|A_I7_cRvxcNiEK5mVu37P*4%5`%loOm> zIHVTy;&VXjpu6CSno5f$C3Ma#b6nIv|2HtV3=+#@$=i+Y!TUsVvcqm9 zFQQu4omCp7C=x{TUL%7fc=gS?s0Oha1FY(6*lA4i`6s&SR{OR}myHy?rOprcC96cI z=4vY9IN4>aN#m{CV{RNFKK7*!s20Moj6V>ce}&n97kOE^{3l^f``K|*qWKPbr}LPz zbdho8gs%LhJ7HepUd=c2ecy`BV;9kQ*S8Ruc12%W!zC53E-*1_=u+lL;mM^IZ{p5o zQYl(`MV8E%ygfLXk~$keMBpl?<6Myda|vo&FRVqKhP6yBE>sOF3hGzFyZv6_0mx+* z@abNY@B7De_(k|&7|tPrhA|BKqwNA|7#Ss{_Q@&Sf>^P50q&m}uw6d8Alp{7CO6{Y zcRcU7K~c79ePX!k{c|7Bnt3VTj2(ZnBV=S&Y}x+O<(x5G3EHyFZpc5M13dnym7V+| z>5Toeo3?Qt5A7jv8FQ2#thDDGs9wn~B!(v=rw_0~7VuLw3h;s>#+pk=pLb)fnUm&_ zKHsbEtLJwI#&C^_>CSB4!%O}x>bmnAcP4@)@bq|R2Y>M^A&UrFEQR;d|7(!;^r69% zg&&irpYO4gcO}0-4t;eDrH*KmpUtPy5b-tf?|d-*#_Q82lg#E4E-r(hnMSwDLR;nX z=WXAeF~c7%9Or`m6A$@_Q5-KLlUbdwn%zWj%VhVt>Pua!F$w!3Mrs&Dc&n(tA(fn> zst*Pm)6;Yzd2$6QbEyIx0>#mT4baTniWT>hR@j+Y%!+PucPP2ep3kk}uf_8ZE)iwV zMi6w_X~q^4)#iO#U!Pw}nL4~S@GQHXVbqj)9!J$q<4&|GKroX?o1rx;yDc+3R5k75SGWIQ1mLE)=euXfcb;ahexAW6oRh!bEf~YnL zzmxc}Pz_pVX5;Kl36`&!5OyeLNc#1OV*^`5Z!~;)TC9!>0CD%_#?{R*G=pTXdf>9ofB|vAQishN3p_C z5*#$Tk4CmYTi7*5r)leS$~mYEjXk5S0qye|c8Kuy8K@ktj1_>hD(iXBmfEz?#RXq7 zI%2q?kgJ@|v#HZs{?%m0+e(AS>8nb?h_NaAl{If2JK$>bmpH!d;+_Hv@44UbQz+0; zUl%9EuP`Gl9ZuGYMw%3%$DP47#)lwHlc9@&=bhEAiCRmvQ$gM!VfVv*adb$_= zqllZ|fp@p!)yYYFarD}{mQr>z6}Nm_nrq$qe)1mMUKeC{P5bxYZ_ZXU!`D9`I9LG-!A>Mlg0i@(xAFN_sfP7ZT*hDo3FjNwt&C9B!0u-ib0>RQI zf#+nc#06q%JZr~f`@@FwN{J1>hVV9F&$o(=UD5Q zUWH>tt7+l5ZB03R-m7k;N!SWZ6Y7B;_rw9mePd+?w~xu4CRs63@BFoAqC7$5#EJLb z)S{aZp_*b^HFxAY>!ydMp&^n33_-P;etc^&3Lw;#aZ*cZd-U2vZ2~nvy~wiB%(8Ut z4d4y(*G=eWBjW|NdgH@%KeADM;i5_Fmg>!$y)wfQav}Dj{mu;)hM&voWOnW^t`68= zZC<4T1b|X}torsbdzmEYg_TVIb0zZy_8H~5+bSFl!_nq~uW2(iD-fNzBM+eqV5m=6#xK-ZE;FXP^{aZcZ zaeM(VNM26r*2)v8T9TlxY@iAa&wnVfm-kMy*n z5WHuV5j%oU<0m=*t*#zkIA=Ih=j0ObWr3*yXL3KQ#k7vqJ{Z#SkJE{*yb{;j|943G~Qb*)I<90PG#d%d4|9fD* z&HRA%vCW8A%UPoLh<6*{Nxf&pDx_W0mgfgzbBa-}TJ)?wl3ylo za0OY1!PxCp9+nDn+K2@qL?A4fqDipYg;=b@PD|BUnY}<0c7{^YBTg08vO4)3cQ#r% zT&*fmlU839To&Q`h;2t$k26+%_K>kYF|N<)M>|2>)7H!ytwo(Izfr z>{;(FO;23u7aVoyP>(1A#Ej_PUx{nRq18A4vOK39B44ktCq8ASG6sR}o$UD`yH#0e zKMdeBu{Sh@hvF5XIJqH|)B6EUnpQ!(!^15#jredb6(>BpkI?|qSM22j4(32KXMWSs z){^OmB1(6Kf(n;Lc5PX;<6oPuL=@gddiCkB7EOwhoDpu|cq|^T0GiUNURqy%Ii8|F6#U|W`9(*F?e7{{i~v0qZieYq`88*@8`bA0m0RCOxPLx_ z_1H!^>(LG?9Kzw&?0unLjHGNNl))<1* zM&fQGTE7O3K6L@3BTRn&lwZ5xchOp%d2b+)U%i|`Uj;cjkFdwZk)kDG;IZwVU!N5B zQ}?bUZC&e^dhQ$A2|n%{qKUyPn8A&8pcywWepdq!dQz5dab%r}VvV!Hx4vd=756-S zH_u$I|K33pY{kL;=KPEh_rk<4zQ#Kb2mxDfwI-}}L76<(y^90KYr-!~zaZNr4AU`- z2$L9RUgr8&xor+`l?Z|+Jg7sw7wEC1zx+HQs2lvQjRKJ>1+r&Gmj}Z$$1as>ChB?2 
zA2^R5?pjH6`h2%|supe2fTYm9UL*Y|2U^!I`z?Zm3U;utX;%1(;&YZ>S!aV|ELjEq zQDpS5G_pUV0sSp>1-x)Tsto*kUHLF>mttA62LdbbUm8$j2Yj#T#&LKZMTaqWtBb_MxL*?v-XQbz^~!mZRF({ppua zLmSvllbF?h8dB>62sp1$WEr}F+`jQ(m%4fWWSC!Iz4lTpJgn(!MX&tsQ}{rk5RZT& zvp+gf=alj%bvu9u4rR}@hF2;GdO%+d7~K>1KA&@FW1bp#-+N-fXdb#c@CD3Y`8{07 zT4wtX<+o3OZuSw;Uvl@^Tr{Z3?KQO(iiRkY z{6&wB=(G%i&<2qV^!KaK0w^(6yuw58h($*w8uEQ^r}MpP^P5jYg`x|s6MH=hqY~T9 zy3&DUT)&HRla-Bg;-=KKo~t^QeHq$K;%y}t8t)fPbsqluEC24L@2QC&_rQuI$VsC| z?Bm`5&IjH_d{~zl?>FI>6Iz?I6}#_B-#x=E-4a!R3)z(BKWa1i{AQ?wfC3@*MFYU( zsY8|nnQ6-E1CW^$5=nckTh*kx`&2Ug!I8b&6$`w%rcux-bRfuGWzVHPeFa$uV9?*@dr+H45Q9 zb%7&t89}$i4#J+d-JzXrku}{Shy^ufzGWDs(-{%NaXsR#0GltCnc}yUn-=2xZx_V2 zetCMzK&nPN1g+gm&*V>#Ny)7`#gMP0&Q%=IwCUU+pyK$j0Q$Qa6|r-|A4kNFwQAP| zGRo0{j_I*7X051Z$9cJM=9*ZSdj4PI{m6GGoc{g34bVy73hv(Kr!bX$yJwY1O14iVr#2$bzq#5>e1?KWYy_z zO3;b($)9f;uJkP5g~MvqFSHVn<(D3s)YNQ7Kuky6e@V{hJv;4YbG4|76eQdB!dbT2 zs!n;N0XW{)bHgU*r=kEuV;eMay|U$Z&Au#`i${V>z}%Vs`Kj9Do65{@^&X$^zkQA# z+YREB!`|HJRJ9D20bv^iE4#!Y@_EGmhQ0B^x8_3?y0e~CgTREcUO;geDAXEXRLxSK z*jP1A1(P+*j-JJ*I9-?Wm#l#o?eebZ{9)DEtp7}=NHAKiIr(a+=7%#=orI$J$t?#D zp6~&54?qjn1=At`gEH}Omc++R4=vMN!iY3lEl_Bi)Pw0% zi3v3?RAN*11k=K(xCZ2ieg@=;NlonrXC6Azv7%EHxp=U7=Cbmf)T1y2K;q zSCACiow(CRgW2sEbK&y}7CkdDwEhdZf^WwBF%@1hmemjsxO$pUocE^~leB6XgA>p zqVVev`>N+lKU-aLOeU1wPr&7PEn_Q+F7!uLRU&NQ7efcbnN>}Av8WMf?VU6OyNcK4C&cHuW&<7eRm@xpTR>ZXI# zW@C#!;b^>KcJ>0iNRBDRXJCV0usk_#$3;%`u_h=3{Jkj*9T2*v%BLJ3RIkv+@Tl;> z18aS*I#rqzn$HIt!Zz}AZ@Fn6vAmQ9Nl6D(s+Ot%m|ThFC0GB9#kW76HGLFA9damZ zT1Bi!?hea>H16(v@%bwj1hFo@Mb)EAFz(?r`BW-$aKqff5bN)8%vR?r&^oCLIIucr z)!KN1F@`3oECWw^JMe=*eo#{Fo0<>+g-!0WQhP2fDJ~>_BaEI2GOr2SQOD5Im*P&~ zX1(1DB$2?S;pZ>5@;i^hy;!@ssd?B$Gv{~aDeiCLKbIh{g+{K8P@bef6Bn+iz`grS z3gky(_I>DOB}e`EWXCbl8jJuLtLYUWC9-{q&VZ%QGfJ!l(;$L-#%clT^|D(a$5Wz< z5R*9Q&I1)fTQSCA-^dQbo^VN@woNj+<!U$p&vB7cv$jsZecC=-EnHpQ8QsyjTfX$t` zs@oWR2T$+u71MEWNp=otf};Hl0|>zs@{6ZVkfj&~dnoXN*_}H&!#Cbl);1GwOfBJB zH~iUv)@OAUdZ8ieB)3Ka#XQSdeOGV3sXzEE5=mb*fN^f97|1t{`plVPuWe@lW}mxk z{@lEWBR$3C{K(ji+_Q?HEuv-Kft0y1o7tFo8S*UeDSjKEujUN`{CU_A?#Ng_)W1Qd z=Q(Isy6%0&ab{{OChqmXaur=x$C)6Et#6(+g1eU$7mg7_$iK^)(|P5^mQf{0RmmW};GM1%J?iexCX2$VNiq``=Lvlba|Lj(HW)8wT$ zdO}mhCww1GfSf<&DovJ8E~&zT`+Sw~-tmvh#1Bn+LLlpJVg7=D@!VwGbklE!27B@e zYniwlw^OCnHw3CnSBCu~xnH^C)*UBBn*Trc-aH=awf`TFlA4z36m3FI=VVJ%mdGHI z${L2UO_IvK?}k)pMGHa<$zUvF9s4Lk*=91BtP^64v73b%!|!r__x-u=`<(kezx($+ zzdwF|eE#tmX7RqRd0(&Vb-iBCrR&tmqb-?Hr<%{wwP(^CQsQ*$6Py~|mhM+dF85dm z6;YHKF(R>Jf#jZ=*{^spaC6MXt2cKwJJc5)+54l-;e_0Iy#FQN6H7+%_*`@Xk>oi8 z+Pa>q;1m=!{Zgy@QrqWtu&+yQHD49+jwsl^(T%ORX;)))Bd_h=M@jXO?S^rRBSjlx zwuXL{HUWFpxWs6UZu3L!JLiH3A<25$QBEpAfo92OJ#=+Ue9i(|FgCSxm!$%q*!#q0 zul0R*r#`|%pM4ER`X^(Izno4kNqnC4z63_sD<54N(JA!GH0 zPqUs|aR8*d2rw)8s*-yj>9_gew=V9T)z=-^s2u~Gvtdgo$~=+>y8R{y<%nuEj|61D zQ+7NEu$AYK>YG&s`{f3FLH{TrHuT4_4i?#IMgeg@HJA9tIj(Mryv7W=td`)H0{idg zR>9z%W>cE zxjJ@y0dCOuCy&0Db=Bs}PL|SF#^09rTDg1~mbL~aR}s2L6{s&|Q_9;1{g-r$aJl7A z9uha0PZKZbybUzTmM_g#0^ZKKyM_lql2!0qfeA=dxi@&9x!rznFqiPge7pt}_F6Ym zd~fgT`ZgB7GHJ@s$F0I=Dy3YN&Lu6?2K+-*W^Us;MpZ~m2^GyFcG?AB^ zUpWftfbN$6O)UsWs)7B?^-HW975{jJZS*6^=Bdg)K7NNi2e|}Nto=y!KgiA32Km9V z2=*ym1&wM;WlHGML^H3c&an44S1&W`qws-^^2fxf@?ZD6&XKPdCtGL+z21oPnUsLoAFr@c$C5rg|{quhx=${oP z{`Wfm^-KKU`S}0sd?cVC=bXfUcY2Fh*uEjZu+VY$k%w7>fK z)xD#8Brli??eo$;^5Ob`#Hp{pG{5|0`HAxYT;gxn`G5L@ z__M$^M)~KV){7Dw!~H+~n7>UGKhn@GmMH$0$G2Y^j8AvgLK*VcrhJc9dhYf;lP=C7 z57++6(0+Sw*(E-Hq?~s}$<@C!O&gzOXn0QxVK#7#A^4Zarwak&dr=erT>CHHo_p>( z^KaoK_lsp{{^jw>nS${>9V~Da`fIn;(R*5E`;HRH-K=fA^M84KFTnV$ez-ek{*_hy z+k2Cf!BR%!^d_Nyd3;ZW!1#U+kQIsii_7mQv3m>9m;T>S^5Y_)TRQ-4H-q0}p zU9R7wKZZ{&(w;zYHgg3|(H0gKw*j#SSP+(E-a~n`T(zto+ 
z&Z3PDOD99EnG!g*E3dE+mj(;zzp5u1Z}|iGikxOE58waoEXNSOh zaYc9l>u(43cI*$^fK&VOt;$=09o{W9V}>2MzR(zA$Ips?zJGvVQ{oHSMun$07Jn7xJX3l$O@<4~Dfy0_{0aO*ZbD#r$1<-yTm__dv zZCJ?gqlaa1qmuKx!^>keIG=m-uR9<)jA99$sE77dVJmgn2it7gT-v>WMO1L5;1ML_ zt3R-%h=L!e;XUu8pvwv;LOuGce6s6SCQ>4Netdr6*GaZ%Muje3uIv;FcK=~z;d4;( z*!3o=p3kHT()DbFEBkqr_1N&QcdP?+Y)e&K3yLqFVLjX(>UT|6WUywIT~r4fvsI3u z`Q*)zgnL-=VFj+;FAPa@IVe{`FHa$~Q{H)9WQBDHGp%A4Kz(a*49Th8cbe}{BA}vv z#I*6KQd>s`=?`0q34$Mz6X<>*>d(9Eeh~> zJ=kF6?LMhR*+6We!Z^8p)8AfbA$pHhk3TorfV``%Rrg3YZrkvbZd~Lt=?D0ckS)#hBg(GVhUv}VMm3~9 zQIX5T^CSBcF26e72TZz=3%Qm83Bt<30535E#G^|=Y4E6$k%uDozKwG~T6;4RTf6w= zOMTpNl)ikOq5;O`AuEE_abIAE>-_hRktg2SWrLLv1B^vvAF_m&&8_Q#{iZb1OrI%_B=Kv36*Mg!@$(JlX;dcLZFB zeB|;-^hjyLtp{6oLFVA`Ms0+iTIUq>*OD_zk>iU>(DolGh2It+U9BG*L%mIpT8_XX zq3Un4WZstEA?*HGf0O*eBN%@2S-6a>EZtn!oFa6V;oqrBeOO@(W4_KsC~lcD(~4IU zuI@25bB7G)#A5vW4i*%X$p;5~i}Ev7kC5b;%UuE6>eKRVsupjk{zxA54=XF{Uz>T1 znJ2xjS5{9lKV_~N&Nf(^@-y6B^?A2+cL2Q-EmC;G+vm-ZKS>O)?D>v>T+wpa|NA_9 z_r-|E`y~4}7@}$tuO57jaJ!gZwLK7CQyGkwnV5WQan#swq#OWsJyC{cCbwH(OWRjA zIAAn-s(h$>+Rd``l;F-w_Yeo;Sai^n+^r+DrQPD{91!EG*l0XD`SkLZzeu9GMNN!R zV&SV^X2KffVzA(Epjzeu;%7lA16chvfXW(vkGr{_8vM(v>qtd`l?^$dXHQ=ivtDV3 z^sfQ(j?|Vj0PJ@#WNn4&#%0Y&v@3=#&p%sI>HYv5q9h)Q!JVE)ZQ{?`{x)bLDMK59 z(m-CZ);g1-A8!Z$%J_UN*7Y_lyJpv~#F_vPQc(KQy7iwTqO1++3LV}+p)}lp50J_r zW##TeYFh@q;gi<&VRddp)k^JoM3h*6Y>I|#KHABmz{)XN)c=~g8Z&ITHYj(Eqn`VM zyKo!qY!ku{%v0x607y&&**yxJ=MFUw0a+J&Hy~YaR42gk_CtI;5XlrueocD$tG;%Q z>5^1;c5Bqep7G{5*?TJFEz1mtgj47@gSM5KN4WA)6RPVZQK8=YLyEY(3w#3o#=z2U zFZDL=f#LTu`-0@5HO|B>0(=i zysv({D4Z`Orng5*jQ!nqv+3N0-boy`;HrYlQdWa!bN@Hb=t&B@EeOtk+0FlD0hq)&WGA*l4~IR z$=u8N)1#l1!TyZgpV4|6mxqgrZ zNK$8&QdqItdXU)MDQouZP$NtF1X>U`hI1mJzgOP)c>hL*7bKcFGe9=dPtt+eZM87e{9o?!o6%tHRmP)s}dewfTkJ>Q#TtuZG z@URXKgCyp;#3Yk7;E|$@0053h8}LK>x7yxT_|Woj-@&^ge(EUyekc1R8r{0uH@A*6 zm509Kp*F1MHzjocx|;iGflS~0yF*j2?g#WoW9%|L6AhSEacEu^dQJK@8a+lJ{3;p^ z(DBvf;*k6cnl9)fHNZu5=to#|JyL3C#W<(RgU z_~o0Z+RnTa-9I*hE^zA2IfL(p64aLroGolEyEaGXzW5>{x>;>lXj6BXxu)-+vM;IX zhatBuMZH|yPov;k@y}yHW2C8a4J;P8$SKZPrv`xjt7C0)FcQ0D-cr=5Dtb&U#t%vK zYlx*XgcH0w3=mP_y}^FdvX^p~EHbX7hN$hw1F&;Hu+!N1;3FdjR8!EUEdkeQN&Q zdIGO@I5p)Pw}k-RpT@NXh0@+!mkPu=WNP)==L?hvqmZnu!DGmz1v+A#L-sW8t@dl* zAAGV7fDzFy)%-({5hsm9^w4Dhix!nha)wm{qIa{%$l2~w=rI$x5qy_2i8(AvFpw$P zC8mCM79$)I+ua`dRD9x)pP-Ipq~nr!tl-rTBxXISm;5llOME~eRzn@gxN-z*+!3YV{jQJv0ws| zWWG<;l&HWHkRanVrT6m8d%}AdqWaV;gTA3on)I41{0`u8;;{!wAei@+NEG(PsxMxu zBskag;WL*5I$K+>Z~-M(%tL#az0O;74_O(5`Lj-qE3bX>=JL9SSk0@>DHWK0HF!eb z=ZJ!7)JYFFAcDq*{yd$3I%JtS#F`hWaieh#=UsVI^9xD5r=lGnbTRU|YBeil^r`rC zG1qQLRAwOP4xU$&YLRejtWdqY;pCC)?g6m+0W@L2;Xvv&`^0B5714GBZt_TTw=?@^ zi@&U2bCUn1%|tv_oY?9vQa{H|V;_JIV6%TAlY1RmH7P7&?9F|D*8&JL{>s74o#rElqPn6!`UkN`_5S?|GZG+AbxxPwGv^p8Lt#TY?W-j@)C+CyBL^&nm&*ibtn2 zJc|>u{JXE}MUrjm8F4&-j!lBRS}6;jyaw%}^WBB1=iiG}iM+s2sOGouGD9)e|)+E0euE1KRb0P-r*aQ86QUM-( zbJ@g#f~oV3#fw~RP48;3mgrlqm(?Z9BVii#Vpinkv%c@fv>Uq_!9Z9lTkHZ`u&W$q z41%)4f}@itWNh-Ik7zb!eU07NIIm#`4VL}>lZxma$l=Lrc;Itn3MVUeyubX)wezj& z!Vmj%im_r3$;Go0pVam_E~jsLZ*}CQdEn;?y^N>djyC?ReQ5$ng3alOk2)z~Iast7 z^+UwwM(%PV|2=2OmQP&8w2MH{1{=-t%Wt9+eIaU6jCW|07t!2?d%V>wm*>g%cR}9$ zaPelhMuqMEk#Z-zMRpm3X6B-Xg`&C@n0hUywXw+L<&8)Qc}Y9+^;DmU;m3Wp$Hve; z<}LLGzMZlcL@G=RlNMGNLQ>BGJv5V4AKNiMJe_k2*mQfweill9n)qs_u5)3-UA$8l zWmL%dTIB=Q_3S>IZl+!YrIMzWKmRNt+bfx1eTsd~mRderxpHU5$HOYIVb}_%Vr%AG zPmE=~Xi#0KvO%$r+fv53=vnb$zk+< zHL{HCD_LHoFd|P^Gt9$Gp*>}8NBt~k{^s0^WGrzho#@cdeYs}UzB0BnbtEnzv~&4v zopXH~S#F0a{)%eacs_HI34CJ;Na8cHGk1{k=8@_%gi2aS^=w;zBIw}jIGdjmvx@2! 
zS3yR1$5Qml)ltpSPQe{{b4Xg$R7K1&ru9c+Wi*(=1bNcC;u=6DXfY5W|_^VOYEel02Su@Z^HG~<0*jI4{j=9HaQU-~Ke zd+a@2>%nmkS(S8iyM(e}S(T(tJ*a?vv0ks0=sQ_@I>gV&@7si((}=p;SVuj@2~|)~ zBepV0eNo{*AYb3p+-acwYB)6Ii5EMFKBi>-#)I+Q!hAXpdLtp?KUOHU&in+bm33f0QYGgi2 z9n2UB&as5PY#9*_|87Y)B_1nmIH2iTctvp~Spu?e&I-Eh0BeloB<1zHHs=G9?Afg<(x>{ij>84C3~ z%VU9Ts%AeZAst4Oz144JYrSc(QPU7|zj{_{4qh-0kWA;_*r9~;N#5PQtDo%>`sXl^ zoTHDmg(9O$8tyNyE-IifA9G4x#25-}MjIsUop}8K2NzB=FSv@wY|09gxv&TTbH@y_ zXEt8=L+S%&>>@Nxc2f?1!?npDAVnivZGz{A8$*-rdc1xVjIEEq*UsqF6g}6^s;qWI zcYHmSTa;&IGcFgVg(pJb{lkn+DYfO|WgTjgq!gRgWs5A7t6_YJ&=I0n21b{bVFQ1# zbbN^eVbDT*7qa}pahWT#h)19Mkt#vhqPA72_fd*Qo3R9+twZ(_78V)! z;zBHqF2^htj1$_@<7fNvs6dl%{<5Do*;Es!Uv+z8Dg}dghG95pCiJkP)L@?Pv#De( znrzCCR3iuvGSOTV3tLFQ)MjhNb;{aC1HKcEC}@~B2F~G-FC5+l^1fYURqU89c|Z2F zbvw_9@3JTbawHe(jJc4rR3E;+Hc}cUeRqtC>%2Uq=U4C|y3jb;@;s~UWX*|oGYT8+ zLx8Ec=ElgnSs4LcJv$=bbCl8g9f z6L(0TOim&5E~OPt_8u}TDF>mZFJT{4o@KKb<9G!$()11R*{zlxwz>;z?6@EH4{U z4|z{JLJUxeq%-Qs?99T7>TH8lHDK^9WM^@Yj4f>noUAG7JZCGi8u+zbTlln}`nWU4 zLvX$D-I4|QP)FvNuBKWzGxhF`#X8dZ2R#s>{6gvK79^%>T^G7@-@0Ai+?gz80e1IE zjSThNmzG@PihYzpH{*_e2S^2vMU&wM?Y&}_a@xmOrai#Pv}dj#3xDVw2Fc~kwd5I` z{}3MXuC?%pt-V8ZqNM%DAGMboyp1K}S>0>nCH*oX7`p=%lEZxt5CP0C&Dwa;p^A8> z!45(bi5iRwEid*s9>0m~nrl0(lx_a)dQ170LegzB3Kpk#XL|PllS!E^QCTnOa#yb@ zPer^f-<8p31dYb~c62 zv#0?BF~6V!+lFeUSHdx6co=I~vh3~6!7=76f$M6Iw{C%dgLZnj^)=5C)RGST?mYj` zzb2!#X6mC9JMdp+|Bzta)#i6-iP>c@0()}~6lF~gNmDjg1#jxx{1av>J9?@|Xsg-1 z5pW`dLdWTRo=A(k+7TK`kssBG*@BJ2!3*frjE=r9yv^j2A+NWQCbo&F)8Pzx9du0}#Z)Nfgn>e1w%BBkrxI-2+{m%Ws~d_X z*THGE;Vf*mQ1G=9szH;MOEW}J)*ni=;b>wjEyJV<513vIha0Tvnxzia^#&`f6a1T6OVd~o} z%9-)0DY+ps;^_@LRl}TkRxH4?dJQR&wiz=NE*jnQhF_u{K#6tY&W{Mgw6HS^$SzRG zFIq>2b>u0GfrF`Vj8dxESW7H4z9sZ9{OA(KKUg8`bDRVVK!;Bs;X)oy>|jX$80!#k zCf^CaWH;v{ps%QbFne z*s>~iNuT^nIY^Ll1+weDmQx-WsLA^k29uDN$Am{sCQDwUUE(X+%hW;?v(E{9^%Vz?f&&p5Fl=}5 zc)VqlR_I_rr1Pyzm)G}m)$Z`ra%fXy!|%DzleM+g!pn@iJv^$LgBoY1=99zO)!>YVT7o~`u5g_~Z*7%SOI`q+9rsnd&! z+&`ZOOnMJw`)Hl)CxpqHdeWK>Aa=f5#sx>7@0(Iv%GTsbZ#f`*qkAE-AUG$+NmCh8 zrf5_S`tBZL1l^87NY?@!pCrWG?Hf-T3;yu1%>fH17jIB&d3S1l< zDT#i_?1i~KO~Fz-4+wI6uytFe=J@*OK9={&sz?@pADL;7mN}-`jRhBkbOk-kn!Zpko>#KY zv2&9;ZRegbyP%$hC+zQCQa~l-nV|YT(6a@v4{n{`@z0^HK`dwGF9a z(>hoUA&+!1$b=Qmm+53Lzm75wtab`eL)4O*Uy#jzcF^A?-9GeC1;7c`fv z)}06h`z4k?%OFTW<8r&>0v+X^lQvIsNxou1qz6o|rK0Rk7lFM#N|Dw*Li^`8Woyvi zSbQ)u6qclyu6UQX^ZarMVUAV_o#O0gEg_$^8-&!w}(5-9y=6Dl1@4eHKW!OB$?YV9RayLbzcG1bL$JZ(gV4MKAnaQw2 zrXlv2>H4r>LppEwk`oPDe_ce)JPRfOA*I}ZN={4O5@lRiBnqcF>{wS{nCWLqJqm0~ zJYN|&yKifG$wCWxO{J;rvhanZrhvE6xT@-)+0mL$g{|Uc6=EI9&^&i<+biD2z zPcwop%GBGGf=d8prmF~vx8)UQz#+ zDOfP3ps#LwYOaS+*%V({vI5GEzaI0jQ#D0p8>LngbZCcXKOUV4zy9UAy>=a04gL{Z zQgYfns(S(J)^Q+)ZC(B#MhLw3-6b;~HrA#e-g9b3!ZFgvul_6VJXxK$ z43k9SwKcfqCM3*Z1R7oRT*h=tow@XR^FXr3fNGYEE9bFwF}o&k*kF#AoHb+eE6&F7 zHmC5r+(5Zc_^cs*dbww_3Vtx~ZM`MJE^K9d{wi0|tf(tnZbAumFe^y-Qde>QoT8eA z&qjV;l`uZ`L^aX;F*$|vHY~r1;$ohmWJZGNMwr`HjPg(#1*pcuc9XD@?xmy1p~$ep zxut9(vzgQb))Fn`K_|4FG3S3cVTNnr5V(9QGb9sp>y9%m{CkE3w$G~m{wO8^!+o|k z(6W9o8c$tOg*kWU7}&}MUR!r}?DeOpqx{_J=YH-Kzv`};Vk6w(#5Sb3aa7U9vgaHx znZ7ug8y3*@-J-Wzxe+(lt{UZgX|1{O+sLo&Z|sxw$-P_m5HxV=4oV+($KPIGR5_%4 zwz0%}m({%POmOorJ-iaPrR$f)(~{vLtJWpx=B=zj5#JNdFiXwF=d#D!scnl_kZVbn ze@I>bfl|C1w1IaT)^L3K`(5X*2_Nhe=6j4-cW>joo&5=9)`AWhA{kU9i&R^m9yulu zA~b163Kj~Sca$b(3XU=NR0_VVElI)%OWT-VG7F>v>+>scR+~yJ+wddCwIgrp)7V&? 
zNw88@pc<^~h~Rc)h>~H3zt^P;J<%mfFvbzh3n;g0#z>MFCPsEw6Lck;kVg=^2hPm{ zc_x(3)N_rEU;FKNwLss@{;kY*j#t*Ehsm7Of~JTva5~M>T%Cn>=RuQLerSBt5-6pS z$_`IN%R4@!`ga%ai_706K zIdo2}*F0(SmX-bK3Z>dP&i#UQ!OT__{ow^jht%Da++6+_tFu7I$Fnd8<9KB9X`_nR zYHQgM@!o@EO9XihKcPk)^HL*>YWZf6+e3L~aW?SNun!3y9PO4CcBBpaM5K}*iiD%7 z9Td;3nw6ySq*!5z8sV$g$;avw@(?S9T(#pOoM9n0fx?BIv-|*;IF3XSm78$*XQqg! zeJV-4pptf)bh<0LiK<`drItAA^2Ac}DYdW2UT4aoFeD>%{BXGOrHi?IIatwlJIqdG@7mpjn~(?$jsRKbz}0Hg7jvBIsOEE8ejCZ zXhoo*-`>dq5vvo;b@&P$;pJ^JJry!-zi%-9qu-bwv|WgeRqb#Yn3Vei&Oq_rJ&eB% zP%UABzVSnh&^}3{GSFnUM=3Qy<76KluzcBnXY|$wVunuRjgD-sWX#XBjOzklK#3$6 z7RUg?XvtzSmhfjQOVsEUNWpMc90_m&_wheYz6u?|EG9L;AXKTHi=dLNdV&-#EtD>Z zfqnkoI7U7jM{OL74iUcBzW>7_F_~cb#I|;4fvmOJ+5Ff0J)RP<(FMZ(SC2{~9ABQj zmVwL?hrPjSp~d1XKAUjOxO(==a@J5|SuONP6{o(>RlC!!;cnL#7jg-waqE zdA($xWam)&HB;|S@-IpT(C&~pZ8{(`=V?FEYpJ+j@brNEZtBC^CZDNm+_?%7kJ?j8 z;b;}FgE}o(w&ZSEP}43=Zw(MfGOm78NecB_lO<-OqX&d1m(mOP_3^_7$Y;^CgQn@m z5=R9hg|;k?abBAlJ(es;WG^mg#Uk_;<*tDxBJ!gKJ#KB0F(#b+wcI#^$e=$Hgq7tl zA8Er)N8tgc#6!A~RH0;dF?4rxOBGsrB5a@hnK0}P%&eEQ+C_6!^Yd1z=P>VbA%+mk zgsxHEYuU+JKEnb8yMFob`;d2qH7i!q>>8UkbHr+LzJs)Nq%tBZf0Res!#<9%T%EG0 z>dp3oEaf^@7dExIOtYH?kL{pY=Tc44T5!&SNO%l)w8bB{kgSYX5eklFAoEssz@Un5 z(gk>e_}l{udDRDRNtxJ;R9AVe$6yuea! zcR!c0i`QSFmoyY^Z9VX23)L>w%*~@C50^-tE>^OBsk!CFs@${nxm=mNON&7i^}PA; zt+RcO9`gln*5)vnbAEZZPK|%5X8&@%Q4Xhxwb;|e0S6`X<&&yy_#OtcM0ufM85;LnrS3Q*b=ocE^VH3?*%ByZth6Jk!81X)PtWtFbiG| z0H^(h$$60P&&9?v+!yQ#;)U70^y8JW^WGZJde@nk480UaTV{Mn zgOc(Z9zmRGeEy=AYGS@SoomMPu0+}um{9ux81Z1t0nymL7+plzkpv=(b4Sz+k~mdK zS~)N$j#3G;#fJu6ON`2WVi#<7Lyv39wHnc3QgjkPt>F?%oqosGvK9f+5Arl>F$zU~ zuD-YTVqlZ#^~pDFJqW|hrE7+K>8^Z3d#hgr?AMGDMn6eGtSjadO-yy{5M%h6Ij#*^ zh61&Uj;UJ?m186)<%I+Cr-gIb{nCYUn`6GS@%^LpnnGL5=Y(8ZKV~8#Cs4!of$ub0 z1>Vu4WZv>67D$KedBW!EXRvD+j_B}SFSJY4eRN+TcM8>TcFHF-P4QEmbX_=Sn(Zp^ zvD8*_jC;{F#VP2yYW}y7TM|gw37aR0RQA`vYp%cQtUK&nP*j@$6)oh80_k~L0WsuO z?V2{>fx*BCy@P4h%GQ-5kh3)}#^tU6UE)fE#^lxo8;l;>tyRfjATJ~Z&!yF5;Fq#1 zz@{#5kg55}i`ipotz%b;9D?}E{=nSJUlY+KpC3x`TmN#vo#og~roU8D!a0UK&oxEF z8aXPDd|I4$=|o8V=-s* zXW2OhB$aLjDAQ?Wfg5Ed-lqGHt>yQRBGB@l-`B?9(FTY*e@MJOODNbkJnYuN1z*yv zlQ#Lz_s%u`nO*3DZU`u>H^z!RV@_lohCN#UC@B8$Bjz2|5ITF!F*!#GvR3}qRk=1S-G38;i>HZ9$?1xhpGjcXe@k9LuE{J0okR+@Z~OS6Ewq5_74 zas@|Vd(eV8J-(rn_KOU(U>px~SxZ zX}-Mz!sfZ1;?v|x1H=7lg0+Xg0WjH~&^2!TdI;)EdHdPQDdAdH(tH~qWH9jQ1H=?5 z=}sCgQ@gz0RroDKfd|k&3CYTK;h<&POt>zfnij^Pb?3N}vX5U189Q%2<07!VC$!J$ zRd1V~sL!yF6kO{U>lF1MZ_Lf~=o6)e)~H6|X2vEA!VI}WCu7+a0)m{uQi(!rZfAF+ zh`{~RsY`0jp_L8pYVyDJA>Q$qI#;97(JMk-N#_FG3;TRxNadYa<7x|T`N^~|4~h@> zW*egvuAJBPz?h~C|9ICLQN5Lxma|m*nrqAPh8avX@{*_RZ-~!gA|9S0+AZH+P4a1--u|wNo1IyqZA&~X&9|ey+qZsc?Be+u*Mip}x1_1( zIh;BVS|dp_F?ek`Kp=Go&Cvc9^c*_2jn!B0;uIP>*i33^6qyP<*>i06jsy!It5i_v zL^F(X!dkaG_#=hVPP_bzeWJDbP-RMr)ES+esidhnw7 z^#N2*$c}4d+Aw~hMlF-Ql1@zTpZ>K6ebjD{v9fTe98Mh_X@p{D8V6AD0V-k|aNrR` zZaZGx7%kbQq@3?T3(MY>S75H5xB80j{)0moL$F81 z?eQ1KZtMPGulXMwf?`la*LwW>FCK?Qf8Q3|6cHt4^1*3!c|q0E3*4kp3y>Wfz_#2c z)(|nNhZ~*S$OX&?;pkZN)W*EMK~1%lkY|F^>g_Lc7j1DdP4ve)fx&Th5vf31G3z$- z!)xt}VhNZdAP~Sm-Vcgvv)8XkQ48YR5`L?|!DcB)}8YuTg7dI)emE`)F_3goQAv z@#Lfe2eG%^b+Wtj>7g6;f@ai*ov)U!b>M0lBUDs)*unQfesYf=yoc-rMiyiCwS`DA zwVst(Dyrc`$pZZ`|H%brFQLVBHo&S_@Fm0U{Cwrhd>P(~(si~UHX+CDYL-Ij-%<(r_IajOg_J_ot z@D*r!t6!CnOQ#gKh>0Bk>Xku%RnrzY)t#GrrG~TnM~fEPF~K|b#JD=K+DDX>#6jC< zrDvyajc36uKG>!GIc1r$>eqIEI8*?oD&XrS05jQ}uE0g;wG=WHayF-t>W zTFg*Ja9E>U`PsMJlJ<)`2L}ksh)&!s+eMhKqv18nS|rVO_2Y`L#oG8Zh1+Np!m5LP zL00)=R@(Ar7EyCEl=W(74^&rTbWko(^TrB#2MJ0ISF|~te}HDP*z}E*J;`z0F-z$X5R1k`jxrH~K7BVpLG|qs)P86iuR|f>Iv)D_Zzj8#|q(2*TG8@I1A)Wf3 z(N^rx67?Z^ka44iBk2QUOn*EGPh3P|h)EMym#>Zqzcst@K|F`GCcBL~a(S28JrnMA 
zp6MGSO|N>PEAxBq%O9iUfyM47Bx6cs)UVO&SqnYIhs6m~H8J4=4}?yWj0d3wq>8I^ z!ea1r1N(h&8VAUD66SBz;D%!L4YQ+Lmz0=UfJq}UIr{NqVA>q;(_;Z$=_Vok>-zh# z&dCIVAj%H9NQKNhqohAn+-!oNnjpJIvuA z_VR101t`|Xi0~%Ab>BgtC%zU3gUl`3YvJ^2U8h&lgfqs|Z^)0Fb^HYqAEa1ZXe+jd znN(NuPOvUC5N~O#BQnZK_NZ%$N#ub7>%OoqtfShndL?=sU^m-)#Z_Gp&b$qD(ziJ^ z^y+Zd*bw%*=K2a@rdh#rqc%idhJjz6QDR^N0`92Ye!-A$;s*h6vHKDcC*-FZ?24%o6JPucTtd+nQ!AO@%R6@{@%Xl3 ziPO~RPKkQ|7J7$YsM4o}BiNcac2@8to<<1~`b>(&F@czGXQh=bsVsH_gy(7aRa}!8 zTX@Dj=aBH~($rv?>7TaL%Pcp+H-rV(x7nhy#L079FSL4h1aPmS4t;A8%Xv8o8q5 zejwG{UX$kn-!{wpb^91CTgN+y%;|{|>usGVcw!p)rKk}qT{_U|;z_{e#=kX3^7dQKBh)7j;ipzgkz)WPa9`BIcExp{33Z)@Kd z&ZgR0goQ66<&xBQAGTA)G#F`Ol(!w~82;`N%lX3EiV&3jr%>>RJbOYmZUv#0JjWvX z{2@BzNb-eiX;Maod`S-t-N-_&c;vs1SATW zn^CU7uXdH;Z6VKcP9th}7Z@}OCqHo;FVx7bjJ7BU6inAcX^j}F8iFQPYD~DG2X;WtS`id;V0D>e00e+sm2g^OM%3!A!IDb zu`tj!n388H&;1Au+f&%)KF+;7@D4~hTDwc1RmgX|ZH-Wt>@d6Z)J?xiZ^BwnnraHz zRe-UFkNIh?W}wx|-nwrp?G1JVhrRd&RF#bJ4pyZ16{ZC7R20W2j6#SJD;!9$VrPS78DGW(rE zgwv~mcI@t3mWO?Z`o*hba^J-lrl?UMPFXf*NQrqqCe#O=enUG(9j@@K%I;o0%Vx9H zzgAU*t}J?{$yV-**6QL4gc(A_L(|%NX9I;+Tym$`hxiSPZx7# z5W(L`ROtyZ+LDlu*s^*PKEZo!5?|xogYl%0%k$!VGKpcLlVepvvDy`KNHx(4?{1hD zdBi5>xmQORpi;Q?mpP3D=Y`Bgta&6Aoa;!y7U0T3u{H4JD(tb{P@8s7wQmcDQ+pHn zT+YhYf%np6WF&fa*Ydda8ta!$XTx>(J~KY9{?1aCPSjf8{^rO@+zrQaH~n4}#qG=G zd2&K2*bt^N9xuDt0ccObbdeqZNk+lu#%!(bmwJ8NZbFrLS>dI%IIG3z-k0@d=Q}-760C-AU+h> zVA27~bQ<7w1fs4)8CDMH1{j|2+R*Nms9gqhqsqyZl!2WaivHR~et-$O!E&1DMRJ}3 zLga}!u}PQjJ>xdD4Ua1*VIglW#!JHuUq)&92+c2H095jNo{vh)%sT?}7*Ny*J8~WJ z_y7Dqr}W?G@{0N3I-{75v~5DiOs=+nG5{a`?+*4I`rm;gS~2K{3;>d=HuF$Ro+ zmxFbwph~l?azj14O460j;SM296bFwNG1gf5vA;;ym7jlue#WMRQ*L)8$M@Hsf3P)c zcxh|I#L*(pk%oE8CvaiQc>bOZAG!&mALrX=e)DdZTb>rb@BRPds;`_YCe zrapOU2&`S5DJtmyL5M|5nqPh9>XXAu|5luU_-u>$<=ZQZ z_x%l}@qN?w(6b@Ud{$@jbiaQWzLe&nZ)F%rG=$dVE;gsZdT=2Gu>sri0mgUc-y-va z1CQnY?N{^fzOMiIo0QeYI_pIn(&zQ=$y!-n(+#SMPpp81e=E$-Z)Z55$?pzy?A&(L zVwqEOwam)t10q5bll!Ts&)-O3*Qx0IBCQw)S<>oo)KEjDXG zcOg0d8Jho(74zqhw2UH-n`L-(J7rtVY1HZ5$-`h=Tlc+kYvk3+bbGV( z54ce9bl4eeQ^yy?Y~&ApiftESG4#${_|!SsIs@l$3mDbf%nf<>K6~ zcQ|A6-1?W~2ZYIX!f4g;yiB*R4Z0&d`?`?6+%I>^rC%K%xSaj0WEdG@-LhLWzZ%`F z?EdYpz6Qc|uAXc6=J*;Ma5x){&kn~NQTrq#J7Co%F;IFZ)@`_~Pvfo9!2Ymmg|xTq z|2P@{^%FwB5`W=G)^QlSQwuSek`*87_|QE$YuvZiaan^UxP^t)z%TbI05^mlw}oBp zPd_|Szd9w`couOjXb~L}J3ErS{(+Cx{(rIe)?rbu?b^5!G8iBrphzhy3L+sL1EPSU zlynXu-JL@Sii(5_38|qw1%@6y&hMV}?!CYFUF+RT*Y-P(-~QwGkB2%B z!}HwFbzk=t=XIW;iqHB^^HTScbHi9)+Ejd1rG6h> zk|8HzU@hm>dx=0v@iC^oz<4j2cX>W9xz3{vXq;py#uYO&%YMFMT}c<7K&)Jqm*}|k zO}O%xDT2h}FZolZVuyLmb6AK;kYFxe zX}-6{710hR!9_bS&0poP5!(fAumE_$#yQQ(d3()rmnZ4F5wmL;-7~{a%6I1DqExO`Z6Xb^;hPUKx+m&*?#fWP*>zke#miAMw)uANpR z$T6kZyryd++1dx&Ncae{a)e`=^`T;~g<3UqYaFeFG&deWT5Sb8>aeny@wpG=#%4@j z<+I3dV(@Hd8Brc4^CeP8d%|;!^XkRdF)b-ea|5qy993Bx65iD_@+unO`5hR<^J zl1_!wl9~V3s^N-|Lq4$A!I)$kG`36Fm~_-0QMKR0nlW?!3X<e#h3j+QB-u{2iFmFr8KOxxvHnosf8MnCG zk|Z^BWiIAQ=4@DT=1OAb{GH^q3GFO)fB42aGtzRY|C`5pG}ujp4XRpZ;91W5wz0OZ zl#~c>#k_@+&yj&uUrdybYiJ1e704AduBNI)k{2Sz7Ef zUoNN?{_Sp(uOsH`nA6Irsrheunf(zNIogH&aWT7o#bBEWIg5-sS%XH+hb`Obj-PwbU#bdbxM*`vZw;G}19^A*~x}K5$PH+hC@tIDh`xf1j}Z zgaiDC7d(xZb2jqzlm8nK|BwHsD+dq{4 zj6A;nLqq)0?RV3H_2g-)l0^Kn_lt_f6-G+h<~e@VG(o8Y6ar{>XyB5+>XA*Ha15AZ z_mb*=v=TpD$_@pIR=F_=14yeIhuEB?pxa7zK&!w%cLuYcK? 
z{h3wtF=7Vwq@fiq!S_GEmHz#r{%GvFjEgz60Dv+6GV{u$fziQOck z;^r0opQIJ_*(Z+|hryqGrv25OeVV`yBA>ucMDnNK%s)t&UgVI^aN0r?Y%lr9Q(@ZY z?DbiG?iT{aXIlI}J@*eS{*OPw{SxeONiieJKYs9UPQV{ra&dm@@_)ZUe#TyLn=l@T z4sG(_ab^AbS9yyuH2tdGg9Qy_1e=H~M4j+gec&nJxA@QekI%{fT0s*6;2;9M+gX17 zs~EeRIRCcI{K+)Lm>|IU5xh|O^#a!^A8638`eLw9G!KAdXJRZo`*WM=zn1pj7#(EO z_WYlo_-~t4Hw3(Vn$w?@fBmb(;)4zT>V|*LDtsg_f&&@SoQ?PE%Z>NH%@rw${gY9P z!OtxZ7Suye4zc-z$43X6I9j<}{r}(xueAT) zazwWn-c??Ue85ikDkYdfbQ}=UM%&7;s^%5}meZ6*)8Xfum7Tt9Z;IWv%y9gK#n!mm z_L3^gHSA415Z=ntRp1km+~{sn`1_ksJ`b;htDEmS@dlmO_cy1$X33E+KOOIK+kwA? zZ@HP|bNuHniq|8=(E^4t%gN?Ue-B_s>QuRM{~rFm|_`pR4k)nQ&fq< z#3Jr#TYz_rV>kw_NP%=@Y@o!1*f|d{9*rwr32l8-%t=h#JkbV7Fp_9dFBItC43SCE z>W!53_u?4wi zok~CJ~x0w(dgmp^LNULGFA3N5rD`!V zTLHwv9E3!qLdzTgNDk`^d9z=lTZ^YzY!-G$s|SF>*Y3NlO$hc4112{sfD<5H5&sxw5DT`i(h&hSsU)39~vV65iiF36+C#Dsw4y zKRQxH_c16(Ki}f#K{VpuUfjXK<{NRkdwL?14>!yK@QByAGbY~>Y*Yw6pr^jjrs(zk z_iBX*cDCajc4fS3WVpnd0lAW%6!dw#%DpMn*n|CCM;=c1s$&>wS<&TyoA^>L(kM1F z91eCmIW@ea)PrT=4FGyW-vSm3(;=>Ilw$Ec%ZfTEm~H=N(GOajx|u5q z%24a!2q1_AVQ6}Pg<=^VqfTpGdEO%=|x@!)JXyM$;ZN`{M%4~p3D!Xg!J$( zl=cTl(Mr9)OcwM?@czbWA7A+ioKzeH8&TjSM-2vi`zCg&FCSeUnkni{ve@WHte z=Pq_b`PprSZKokiqbZ7mL={fTL%@No^vY2WD~2Lu>34P1YkrF)_z#<1d?;4?`lM|f z?%kr?4`--OdHR7|+B?Kg{(=typ5OQrh=d^I=P2qi*cT2|z~L zs|K(k)kv_)L@oLv1a~3*WOuVD=b1h>IJ~qf#)JEvA$}>rW*u)=B)QvA+y&Zyd*Dn`iFnS8{jdBHjEZ{3R*6>O;)O%R^F3@ z@l<3!eV=reA#V&>zZEWKH(T{De&wvOwXM?hZ6xY5Tx0K-5o6a7Xm4`aPu5^e z5Z>1p-8CRn+DR&>4Le9b^LR!S>9fdE1R>TZ(}TWrBsfY(N=^8A(~QVYSORv)t|_J$ zk__!nX9Qk`3e$ET`q@Cr<@bW}^{XWFKcGXD&b;u-!iOt$1V}q(GikVQi-u3eZhaAu z+WJk=X z6H(Uj=egcSjb1j!@nX~&Nc?7QU+ z3SG!mnpvuPj)~Hg3$YfzS;Pk)ZVFS$G`{hTuj+Y9vhz&BoJQZ#tV4dVKdO3b#ekJ` z!BZ=Y3HibQ#AV`B^`_hue4?hfw+oz(@TJ|8OeHE;Hlvkr3=w@PKsWRTVv(`#tFIPP zuQoZft$f+_qddR7CG!I~uIh2Ja4N_Idecp^vXqhyCn>9{FXq}Va2u^!?TKJx9J9&<)FxXTWmB zt-*{-PIh`va)sL~?7p&{E&v*-(7=khS1$Ub!USGA6bw=~tQx(^G@!?f(_XWOmrP`t7jf*FO zu26c}lLmILC=_-Db7hfrn)V-FHy%rHytxI^8MSZHUswa``!P$ooO*k^k6}Ce5G|T`CZFK-TVcFf0iSZ4kk7T9_=d%;->mfp7ZkYbDY5o(A-p z5-NGVA=etY@!p=<7jN5%W3*|$=6^%yYpQH)6z{M}((Yic^Qt3vZpNbM_c|Bx!mqoS zY7??~R{vOHQ~L|O)M!ltmyuxz8w|eHCu1Z!=||B6Aa#Q;Vi>?l<|DvXgNX$MXPnwz zM_X*U?_47XaMtrPzNfnwZ|qjLs(A_XC?9Hag3d=o=OK&o$s*x_Sx(_s+lT(SXdrq$ z@x{Lfusrr;S7a!{9y7=7V!7?+CKqb(ciMnPfX$q?WU|-cQL7PFC~*Z(Xr;&?lmqQc zsuEP&6;1C8P>XzHwg-cUG@SbWo3DJgyGT_)=RSq*bCJ3Z6^rn*`^{{1><7hpH}^w49F zOdKTMT$K~&cMPKD|By~fMAy|bJ7Cg98QUIf&z{kO_M>sL-D9bZGwLo;{ zdN8w16H0+C_(;9r`A`L0LJ!CFQAHB`FjdLodXpITKCs0@`f@x-ZAh15aLqP~%!^Fmvgzm>P^PD${dndAO{5gi0y5I3pHI>ld&jyB zoW-$5idoAt(GVcs{;J0kvZ-!3!|m95F5@fF#|W5bnRH(i-v<6Rn zW>K%b+gr431KIYa!f9t{6Y^rNxuqr$?2f(FHn4y3pqU%bF^6YgOaJZ79v}f60WhFJ zk0`~4v0b7F*m=d45Ec!xcY0b+11~a6R)ONcjr}IyV-wndbeF*^2XbD0f%aB_c(Uu@ zp^N`TQmV;<<n;Sm@WZwnWzL29*N_28uobOR(K*DsPDqLdD@T zX7$VX%6pMUKWi+JLU>~=-BKXlNXZ=EH0_-WlJqqnb6aAIQd;CcSjG;2>rII`&Vg)_ zX)&D|7=Bn)EwV)&=ATZT<2R}!*W`t*;2F4YGcmjn-Q*=AS8==_-E-4x&zuv@)pj=) z<1={h&}W9IHAdL{pT zjcBO#YO!Vx@Nz}kx?sG99VMjxj^py=y}C<4pUNch^N?9qPX9IutG(x1EfHJi37FBI znNZ956e$@HPp3R>;#WI}3?CLh$M(&F>vTz{$)QyuI}71+vOJzp4N7!o3Db7qI!M4| zwx*2`2C)3gP!E^7cV<4Iy3&;(oO$IloB%dBu0gq;Da59F$i(>gqRqy1i!1L-p3R3# z^T&lNTX-UVXLg3o^B&1>fTN}PpdOje?ss^+*X5N8be__zic#swQiio|f%d=UGqQE& zxM2l#P7}^no;s=80#lJU>AVitT@4Qv!g2awrE?$g;iJ6}1_UVmg`G$;e9SYK zeO1-D-$l6h{TX8=)yhH|LI$YiJom6zkCnE82_DY3`Zgh2YxDc-JRBqmBLP0WdLt^& z3eB2RsXue=`z!3GPS`0}(M-4|Z|r6v!(52gd)De+zS<Lfs8o(ZqxR? 
z{F)xqwjACrpPEHSN!S*kGgL2$++3>0M0)x~je2?sy!XsM_kcT-&TGOLN_+Ypr;nHc(Hj8S&Q4ARCI*!J`2(M|`Ot|?5 z5&8ELu`0CGOsjn{?>p%m1cX^5+3JNX)qOhcNlBl_d!BY3SAD@;#iJ{xdB9NzTOWxR zma{N-)k>M=y{J=)qM{RJ(ZD3fs8~?#PT!?W_PY%#SNmB{r_9cKo+z+2x$oKt?9( z-Mvf991{W*=t5On;mdXB*ey73!vRKL^*GweDMhw!#C^FyDwjo?G2|lG&N~o_e_nE` zapGBX1>*7uDsWJhM*@=CX$2m z%*du}Unq!$O44V)roN(ySpcWVy*dx2i)AOr>t!_Pz_3gYCC#_I4)Y=0FPh!(T{w6g>c=RNNov!D`;WiMBO7W`RH*0^sQ+2qu6&Y=vBLvGKE*W!^Du5GghO6nX z8x<X4)E#+jEWVrFg3gpI^z> znw;r=SS??Y_$-vu-~yN#?7cH9O%FR=0BcffQq%>xNREUdeo0|ap=#KLY&0EYg`sJ|bOl>#)2T@$ zjybscmaGNZx@tZt2ZPH4;=$$a7DvASRsXQ1%VK>pQWYWyosL$mq{qq+-PhPVU~1gU1-`zW74PU^M^0!t|KYH06CIYpEK(IYvVx0Qexd>e&A=N= z6^EJE%pu@bq3R?8`c4wXm(@i?8l4+(q?5|ovt`CS?nP&=>_?Tmb` zs=+VV&I*pBfgMzARy(I1mW@|ts!8sd=spH9>l!35+g#L=dUb9Ra35?K^-Yr;e?KkJ z5D@{3`{cQH#K4?c@rY^D<-6Rmb?z4_U8^s$LdjWqG<2a@p@f>fxf3Ol0Wy#MIng|; z4PCF$Pg&5r<@0F)oT1F3iswrly(e^)c(m+0czWPFBHb1W^durv??saf!Z>uw;4@h; zSrB_aq31E_t|h+Q#ww&uBhc!VYOl#WoKhB z1HxC;?+~I5Vws#WjqxTC9)|1uk{p(W9lUMIv(Uq}dT`Xs%^7}*=1^B!WBT$HG$&|z z!P{TzL5^5O$fRS?b3<&)s9$I zu;sl8p&y}TV`*HMqDj|zLy27**7d#6Gv%P!SNme?;3R+YJD={s?0<*m+n1 z>b8AgliEx*3^{vXEH|W6^yIKoe3iL+w}HI}a81MP@mtaNmDaNQcCC?y5!$@#bjqx2 z+U8f9a1L=ySr-gQjXCy?-uoHby(AysMC?!@h7P&VFXzizn<_xr9j!xLAX8fHs+Pqh zf8SNm+z?tPuZJWf)#ElT+`i@^R%-I zm9Dg=kfA&#WAz+03=Ip)eX!p~HzyEaHfTO@Vk(VoJ}B z{!EHAj63uG@f1ciMceu6Cn>nwhv2TH$fh0Fr}2W{s5Px4*>8BLf_)ry&Z4+a#T1+| zdqTf0IuoQpdKDz5dlAPD^l}#i2lNBj!pFh6HA;+?$j^4oyF)Yt$y#S&Z>sbgMGoo* z+zKE&tN?ZYjeW@0*UX5k!Y(!>{yggWp`vT%4vU!b7Z9$%_SbMe2Q69)iG;*Z;Hp!Be{ahz| zh-0oDlvHBzLkV_WCS@OU(#*}cEp_BN&`p8o_sJ#W2QE8+1IOwTjoQkwrrhb$(9EAt z446F_1YKAyzU(HEmmCZ^w28TWh^e3OdjE8ZNmh-AWvs1i*MNJ&xapUBO!kr1vN`5; z@Tp#q)jsJ>lfg_Ydnj#|O#fEMka|zjA<1VBaZ>6w;f%rtIvkdubTXXoAJkg%+An^i z?imOYfL0&dY`a>HZ&0eJp`Nc8Zf9^CGzKid*gHHY3d(%}RSdUkpPomaPMZ?_4Oo#| zcJ&5RA@lVEJ~sJO0u;~V>v`-eKMcs- z#4LRzSbUu7Ous@x*8K^0p=eO_Q;?D(crTryS`lQWLGM22mD|fFp#jAy z{GvIes8?2q^7I4odxHiK495J<8C=1Oc$Q|S>^=;bJayVTx1USotC+tyG2fz?f(#X$ zRQB%TpCfRn`>1>UfMVof-@M6;UY$GW>Nt~}A~Q*rhz$0g$k(giD7Zt!Hg)-t7pQjB z#E`ixW~B812HYs26F?;!fjSt&@Rl+3YqAbN0~s9%c3QE3wz%o!!eeE1T>kKr<#pV3 zvDdmSSvCU)9i75HcMfJ~z}qw4d1Wtwe!>%6=|N|O1%8kWbX!07-)i5o^*R{|TUcuH zys26^H95UCzz_rT>oe-~xG8ej&1Aai$cGe?=d+0|4uB^~j}azy-h--=eq?x;O5Uz0 zCu7Cc23R6Sr|mVNYx3Jbv`d~~Z+~0+CTKL=w^T2RH2x}K%pfW$tT44(1v(dD@#u<% z!#;d)MBkMR^L+=RJ*k_?`Hpdo^K!&O{T-$nj;GG1e7Z5sPywwoB?HvDJ>ch zwbo;oNy;X17KD%I(ZvAB*DW>dfKEcL0Q)te%yu9Ts`(b9FRmC6!7`z;1FPx;)Ks(a z&^Gd#>mhy5Yu>psIWIgbcINc8C3XXkxK`v*iI$jfjP9rCR6Lo_R~{Rq4%01OIBwMv;X^-Gm`)rG2@(z#wQi# zK=WbW(45#X1jyghLnBTO=Uj7fEaLun85i5Um~dxc*y6fqd+%!!x~PX0U+Kb>lBDnU z87_@Q8+hVEiJywe{^6Z&7c2U z9PlZN!6kUT4_DN*C_FM83*@Hfp3^fMrZLa$&lO`JO^IOL6ye!hFp6FSbT>R#Rq1OV zZVPX=pdzw9$GCTFI!(P+0IQ7+@Nms2JMzpZ! 
zsWA@_)2iV4vAgi}K-9U(ix@o`&8_(AoIpC|kY*+6{%p5H+{M&=A{3oyfqc`ER4cTlKgJCG$(-wMDOPVqo zTw&m-zy6KaBJ@J9>zwxB_e0JP(pFE|kJj;f6w`RAi~^3oSEXd$A(#z1nRR9*Wnhm& z<|qXvNy7j&Oznp6DttdX#?!6mB-xtQS=?dh`@#DKcB)?28oQok|9~#G3&MLZdKW#~ zg&Y^pv9UdO9tP{9MX(QKL7Q-?*HWjZ$fP&^4x1!s!Dn-4ZDd*n+}mGRROo<8g%s@j zbQU;|$GRv~+2ESR_(ZwdS5~$CAnI0wfW2M$jOa7t7@g{Jy+TSs@qbx|T|daj6Ao!j zr#qR;>rKiam_>(6m~|FT1quBw!gK^mGqMBP&e|k|Tucd8EMdn$KueZ2J6}B1up;)i&vRJ1t*41ze8bODn zQ3lc@OJR$8hpw?5d|bkhwXUH#X1Lc(OsFZkX};8%iwpOp&Xc|UZTk#&Rkfl zCy_nc-|5Io)o19N8Ya(o-z;bf=z!ZcK3y0{JIvi39rw}DD@ShDqCBReo2Jz>prx}K z=D;1hC-zBX?0*Jcj0!6gv#ZOgML$tWUYSL*)F8Yo;eN0(9CiT&?H}&ZI@-2}^P)XM zL5l$jQ!*JfNlI9F{~|u`l-<83mKE-4V^-x zj!Y#rFK+nkeMH9L*Er=3FZ_NDCxA>SmIPA}0KIi%{WC5c{`r&VA|AG(hWQ8%!k1pq zj+^Cx8Do~JrdIO~DNiB-tN>eA{5K5=zEn)vfK>yRoba&{=V(t&wmQRfixQKXT}#)u z7{~5%{OCZmL!sV6CAO-ximRdpXMXx-kqqtJZf4%0f&C+M}Znq^q+@ID~ zjplhqs%oBsUfy9Q9|OFbT6u5wh0t6u^`$ZPf-m0}_^OX2G%`8`Gdy>CLywu)^BGqF z-|*wP80`9ngcQ%2);{$1Aa+Vjx4qoha1>xrt}q1bJRyaCoJDw{4r$6K&+ycHDn zCC*+zq^G0AahEFdyu7+~tRqzO{XS?Ceeg%b9kzS5&&gF_3d}o*#9@GMPhSArtRUs)A<^tnxA0 zvc`)K5p89U9Wp@@pgfB-b>v5Z{y3WoN1hdbss^B>ipc7-g#t}W$Cr4(W+Ij@K zgspA&dx1|-%%IJeE}wiIO>wQRgGG5)MblEOV+Q7R6u1eFt(i>6bN5Qc_R+ATyJK9l zNN&%=FPpQP)+wj)_tRpX=@l|^OVo7YY=(eTy85Q_I12DtisYd7y*G7IqR_Z1h8k2DiX829&Hg(H(_TYjBl~<_NadX<$ zjt0IGxbU`>GdV{yBbncBdQG@p6cnayU)Aa-LSTju>)^Z++a937PDHlM|yr0ag9^zD#*bXFcJyi>$kNpLye z*Zu^TmB)#O-*E=X%T$jHU(lW4#iB`jR;b@jkl^CeGC}$#-7w|->46|2D^+_D0@4$Z zsqv&WJPx9SuMfWn4cuT&Z4mNl3H3V8+KtrlTG`qJT?ZbJ{fxVgf?`-K)T;*Cu(<&| zXJHRUR;W<8RqyZknm>D@T|eUX0qt%7vmUtmTW+JOc{F4Tqz^BEFDh5f3mi0j5tpW; z3>EiH$}3Oplh>^}9^pFv+;$vyjFyv?RWf5DP{~suTdhWcy7(y#cbaPjC&$&IPpm7~ zB?}vHplfco%?k@%)Y=-)477@VcjwQCkaGwGc~VqW;TS=iRi)muP*c!EZKXVd#D$Fj z>(lh8+)LH$bw(W>0dU|+W*G)_ORPvrW1SXbF60JQVOJgMoKc8WD0nqVS zo~S~9Du;}p%5;9~lJIg%iC-AYUE*JoX zz2<9`#&Mp!s>9~y3^^TAj&;QYI;^-}??Q%jHM9I6i$KjMB0BpB3N+k4rS#%3 z06UI{_!JZgt>H9najyLtu2kahaNSJTQfCZhk1aknXOXJV%@=3BA#4|iMq6yBMAK`Rr|vku`kc(Hw7 zV-}e+*ZQ*hRax7OL%<}&g#+b(3Dy23ocQww{pVogKR-25iXQ>!N=|WjrAAhNQzgaf zRm7a(;LpNg{+*-!$Nz~rlSMM)!F~Fk=XWu}ZvS51|2x%(KfS|lI>OluG1#eaUpQ;_ zppZGo_}+eenn1#`3mT3w`bs9Ulz-|oPN-~Nq=^WQwgrk0N@S2wkPfG$GD zN8l#M(vkYYoajMc%;LD`-%EY}xajfzz)RL|4~+fttCQEwgud$6xld z))AjZe0hG!RBH6-m+vZ)74KYoSaJFDKbdzVGr=ukVb{1pS-$6g>xO+?AJm+MW9t;` zuKwEEgXNfYJ0V+qfY(su8~4k!IR+Y!+McPDa4V6*efcULZ1BL*Ab zub}$V;{EY%!K2)}#qsGsxd8rZ=ljs#(~ZybN1R$k{k;eK(-w>u8vqu!L}NUV_;)h8 zf3v@Cch?i1-`rZGs2RgG1^(*xfB5M<4RE&FQ_C8^^ewmrb}RMvLD75Sl^*>0(f{TH zYcqj0H;>T{_~j)hrRL}!sFyqCjDsZn8#DO%|BE*#IKSx_SZJ>B%S%29PA0cq;)zup z;_jCg%OMb~d9JM!+b?|!?qmV`Pz**YdU_Q8i|d5z(ttJR>CBb*cl6XUr!Y_CQ73u}|dz z4mJn+l<%#aJB0xzegUZdJi&okhe1n2bBxq@p7!+WHQa13PSFbLzq`Vbc6A9ikkAVn zbK`fi%;dZdt=s_Y32U8->+7G$Mp>g)w@N5S3PssYqd**dz4 za7w)gm7uUS^co$COCK~evqxm-b~xv76I`EOpTE*`bEL(2qsX-P`N{E7FZt8EU)KZ$ zr~uwjE)TVDrUA4?vDODjhyG?;>!;P7Nq_AwziaJEJ6HUwFz8FEJv0OB3UghUOYu+Hno*0r%-0;xbEpl=p0LmA% z=;=QOge3!7_A(@aDtQjw@efSDydg97A)Gxvx!K!)lriqVF}FU|l!ns`JmK`&aKU>I z;3nqZcOE-V?X-d^&JU&jVON_>*CuLBL3#Dr7~8Jec&A%En9`JU+h}3NGE~fGilMpbOhxfhU|S3aaGS zX$3K>EzDkRA}ZKf&NwbZX;forr<*iLl6QIl8X(3w!ps#6n>c9ze9+wdDg_O008Pk} z&T=JquM;3wz+A{1EGvGpvXwA2Ni#(pchtGK&FweqEX&d8MN0m?xnBUdC*cSK-4(Rv!J%ib~wSyQASQ3y-Id&EVuPNGZ@4K2! 
zOnNIEjPvj;p*~EFXH!W1==>48#<6okYf*5Ak@QzHMh2tP>T?96Hg{Q(Mz6awigOY%0 zn6k_Syz8E_zdS${my@?REn*8~w&0{3Nve z`ApFRtAEU}%(CJWh${M)MB90HpyQgAPiBK2ycQsp>)j=k$%1y(91zoK2n* z&%yyRxhz<({`HKRho78rd*laxD_Z8v0YAfSj{Onzabpv=>&En>E$HuEf3Yqq#_ku2jCCH5 zl};4AJHP$jbp-xk%v77V2iK8|7?68xiNi@CU&${~qt;$IwBAXZwk1d@7wjHrKE5O* z&tn7TWT^~x4M-2ElGmFUKpVV*8w^5gy~2P>S#xbGsQ%xU163*nDJYpGCZdg@*iVO2 zOtN6Wg_Zy}nNl&zY^}?8JO{?|pWwE9n#HxIn|M9|Sfp{?gU<35rAl+tZ2SGL_9PtC z@Ot;aylpTF{-XrJXBGrTQPKR8|9nb<)6|Eay#{~iD|9PEu0gkO2Y_-!RMirCi25ET zf>Ig9TYs;NU0ahGAI#!*rw}F?r~6Wtc#agsUa|bSOrsf$Gej7VfngJJWlJFJ`;mbp z?Ia~!gCz%ec~dY>+q{&$8rr3p`s_)`n3HBAIf>7go8fBLw^l~@EK4U}za*U2Ig-^&$LxbSv7TXVrh%whVMb?Ub+Vr?Qn!yv&e8KymJ{ zqu=$#3GnT@knmt4iAb7*zo9amhXPrE+BmF<&t*q#UEhPFi9wn_sO&y9kD# z$Dy4FlU>E5w>FJn1j)c*UG}kWX1EKTUGp_p5A(Szlrh70^c$y+0k$I~kN7q5dtH7o z?@U|D#+%3`=19jvN0zOt$secnREw10PANU)dk+!Yc7SEdSJ5y&Qwtboyb5cxhXBoE zZdK*$@z}C1c&>29=h6CVS z)~>YgR8W6&K)6Fb3_5tR7-ps4>SoKAa@z((CJnVp&kTU}%H~j)Cvx8ZbmxWH6ot3% zmX3yu5gi9|-iDtH!=IRE7f2?Gn^yJ5na52u;s#{$F02NQvgS`-e%|+P=`&kXrpVkg z3%;dEiJP}_FKCrA+jd!6|2`%BEkhBzcZJthcA~iU%+={7LKrg4KFpxLtVN_$B+#7z zuBiPE#eXm0UbzVhikk%=b$)u(AW|iQR^1%vKe&lDi1AX0eQ!8LGZj;Roo2_WlXG=T zN!b8adS)+oMkw(RJ{sh8Kv2gQ=ncHY`9{Un84O1b80LJt1MQ+eh_V4xpbmhbkTTEO!XK z?N3zZzh+j%(*siyo*FgJaC}iw?kjiLKY0wa_BM{~NM|Y#jqm`BA}aK{P0NnTp0E+L zeOGqMvj$UuhBRitgU_OTAyq=v8ntIszWY4@JBLrg&uBX~)p8Ux zvn$iE(*Xc6+IJP{Cp;US&+S~vHC&}W%pC>OLv4k{g3n5SbC+_Ayhk-HY((Elf%Hc| zw77~+nQ6q#OKR-xz=xRkzmLwj80GCzrt{W^;q4_Hvym z!Kuuvq_SQ>zbBnO))zA44cj^|tR}C{`rI`@Y-(^wYOsL zA2XcnN_l$65)~vBh%Qr;zKXbdaPsVCb7O!_)7oTxTc|zbcW^%66&L3GFavq}s4x05 z%+orUm^6}t>utXLX1xo&OEnG<>1hpcm7Ez3&Umo|qM26{$m>`Lps39h_LP(e9l+~>#wHBtO0rS~#!I?yr>T{8o+6#0V99}YK_C}1zgIHXNO%WJa`>zSbW1vxY&LD$ z$tM(Jvx^t9eY;Q7&unclicX5__}M|&B$#UIIhoRHipiaxHN_cr@=v{YST1OqoxpE; zxf0ZPxc+G+t52VPmWfuC^%6J07*}@q322xDkdbJit1EPS2eccW;v^!*XCY6x*`ao zk1yPq287`S(9VasE4Vl!*N*9p0`!(GZKN7Kdffi?(e45IB(AG?$UN$ZeeT@AwYg_Q zpM~YB^nyK9}lPwI1SB7zK;S@$yjB1Dd3gO1Ui73{Byu$aWa| zv0-^0DUkgBMdsCo;#5wO5&nVi2bm%}Pdm%_{q^bA1@;Yvn<6kDHwvZ)ZBW?n{1SIxa`b#;h;U2U`B;y zcUO(`6mNnTrkaAgc1j|tFdkj@+8blF))$AV0qi(322JcrwrUObB-;BlKKV%d-j_X9 z=ne)ny7UzVpU372?S65-r8ldQ<8j|uR?2Ko5`J}# ze~zg7LFqPxxVNc1}R6oWcopObC&AEO`<4ey!;0IB_rA$N9i9yib+8q@i(L5Xs*)|`cRgp^*G zU$nX17!TrS>EaT7ivB^J%f+ZuDt>z|6R{x_VFr;1d7=X`!*He`{zOPf15zq7So)#T zYYhOT*_2(LYBez|1{8a_k*Ym6gx_zQl-tv*mZTe8`y2abRspm@PIlUPfo<6DAbt&uf}uB%HdP7>?=$WF$oX#(;PZDu zg5+}wMz3Fd^7!Kslb9Hx@JkXcIf7OGMy2^l7u!SEZp4!JdXXFH0R?07^=pCg(vA{L zLD!lhB$W%oX(p+I*aFSpwgzfZFHhkHgty79biF`CnG;@a*2^8lI?`A^$lJ-D`(TwJ zW-2E1JuA%jE2j@>5I%I2VW^SgCJ;Q#r6n-kj53D%*)|-MWT~yrBXPWBL%$=@65?&6FzG`Reo-;6;ndyGCiEqL$`CK zb1R*@nMXV%%ErnVxjo9B@wn0TVe<$3z((gYBeYE)ZAY!^qTuQvC&Iech=dx{evUB! 
zbQR2n)H&lKp)4@&{tkG!eYkz(@gUWzm*2bmu1QM5Jz zYmz?Q09SPajpbPFT%N1i&d@2uwtxPY&z?mJV)iQ2zcd}Un#IAOeKzmv9 z`mD3Ih1AoLtG%VjAt=h9pMBT!Udh^2MP!JHF`wtsBfRn^EKxsP3gE z^~HcZVed5N7KP@X*u?v-hKFB;E#!3Hc$1Spss^Q$q1R>->tdoKj|~OIL%ES3IQntw zv4`G#4{~sH7M9m)$R?>@YR$->CH+`PTk8XnYnbKLCqK=$okA=9{j@9n;w>9Vre!Kp z-=_UcbcPDmN*nChE|i7cPs|e=jh^Pvh|%O^$CR$e9|1^5D8A@>^A}!fxjIv$Z;94i z;}H`Q-1JVt?`#S1Y!=@8FPg_16A&i$MRjg9?LG3I&4L}7k=fw1xuDyC-^-0;{$hFt z$>vw<9P{~->LW{?n?CZn2>x(CTgVx*`MTtk#(iSVizL^xpVQ3;xD(XU=DrQr(4uZH z{6FlybzGF~);4^DC?Fywf`Ei5DvflbNGRQ1BHauPLr5t|7$DsZLrB*EA|c%%HH35w zFu*Y6d$Qkm@8|iszi01zfA{yt_s9DmqcC&Lb)Dxr*E-fZjsuLpQ*tM4=<3{FBEW$VlW<#k0^DkCB3&Y ze@T`PG;!*;v4c7xEZ~<}7edbT&Nq!gy74>XPdZANBGltJ zbI(6y_cX{;X;2IqeA=)O2FGLqbp;N4KF1_sBLPwTjPAF z6;A?OSqvq~Bs2eZ{DcNVs|!_ORs&8khm`fLC4Q;^U0t7QP8vk@jgGV6d1scWC`h$P zqgNd2eY=+%0*VIONjei^+eFg@_{q}Q^>L5;7W3#k8 z0c;O@G}18BvF)DS2j8nm?iJnK-wl0ad3rlu4K};GI6=~t%|+CN?1s%cA4Gb4>V*Eg zc3xU-vrn&#;O})O$SGwp=@42&v0u@W&T}vt`sUCu7WI{T>gv98wcMK?rFF*)7RNIn zxeZsoMy_dzbAQ3a3H#}2#Ezl}<=vR(F4PXOAbAuOzV{~JJW9oJ9J6W>h7sQ&vXfbC zLE)j=1m4i9ynFQQwR|}AbwSSQco$y_4Pvc2uk-;e)|Lv}J;B;dQxWk?d!v#D`l;58`D=(Fc@D z+C)GeOkT=;9o-XyC|ts|n1;-I+7`UU+3PmZVNjB!MPF5x)$dL~xWtyACEEFFUd`BI zGG-FeE7R0KX8GQZIm*GRrg7y4!}pq-`D2}es`fO2VE#T<#?>QE84ta;F0K4-ES8f9 z)dqL=D{wWM1dJVL&PZjepe5~MZe5F6j-Rw|%6OhRE^!)NibFIDMw~#@ySf?AQA!7* zpOByNB2}c`m)h5oQ2p{XG}tN;QMv&U?ckahV#VEz3qAOJMT2BDVrlBcuy|bu9=W=b z9!lFaL*nQFQ?>OTpBlu}5|R3>P>x#Tq~V8$n!!9LpA-up?=&g~{}4Ewmgx$Z@p@Y7 zbwZ8K@*2L=?{LF#G@U!AH%ahQ^|bXto4Z;N=9QT5!*(}msi*M%h58t8dCUoOEHzL| za)Pzh#Li-7NYE_XO-ra?e{Wld+HVlKGuJVmH`PIti)bE`y>**LF9X`58#e~aiD${1o%fj6gtmU8sp zxkmI^)q$dSk+TBXuwjPzHl_}9;h6=M-xVN zYMg_Mbd%MUa#a-tNDk+dO6-@6@iaQ|u1FQSr$_}C3GkbqAHvD(>jC1i#}`S>UyKJ8 zJ)AI=a;1@LS zvo-3~>jTyY`+F9Lyt+78i;xBg@?H(!!@sRi_)+9Sic)2LEERglxzM%$*GZfN2ij?y zO#NtadPV=ioUBGU9(l}30dEgf_>!nb7N{_bf4zfOLaHK?=gm5f1-=GQXrfeH)Q?Eh zI5_*txUo@W#sL~7G>$pq+XeuavwLL(%!vWS2InU(dM)eOoZT5&%cKM1tVRCYY|P4}S>Zd-6i$%WeU{OH?>W4`>o$=-ifFeJ+=e)o zZ|VM4WB>LtKq&|`fR(o0vorjWXa9qV8qM{qfG)rn0#DlAFzDAh(2KC~k3Xsw}RI zQ*6SKyXSbTBXicX)48%3vl@j4(z7w*RYKF9f@B)c*BY`RA&WD*vN@xKr~NlZM-8Xy z^gZEwqUqU-s)jk-p_s_^q7ySr%JaN@nzX{oUJ}_ztIF+odB(SX03FJ@%#v|K@t8@x z7Ni_wj44}|#@atJX$+!0w>+ zRbNxx29$6xy$NFq+ssCW-|S)(Gbc@x=$&Hwq|`0^eUhWsbQrd8?&dhhM`GE)cT%9K zf6XsX7brJrQJBbYKe%M*Euf772DfNfLe%l^CJvfy0bZVh|4%`0Kz^>@?;MfF= zQ%V&Ys{mc=m-Qp{rHox;S3R)OSS|A_Z_zODNQTcDV_J3;_-87z?NO4;I9y#O$j=XW`q-b;1aFnbHE<& z0ByfS$*bxt8Iz==;Tc{uPsRJ<0k5e@0b1qaXSGJfno3mJC;3Kw2~yU;<#fu(IkC!1 z2X#(v6J>9&Je=(mY;~n2OmyC1YDdj}wDN=(Q{6Ng(-O0XygEBLY%JgYG94u_Hf<^x za!@eW23HyVI&Ga?$d&BP%TfFgp6eOKc|RZzX#9vnw``2$%=)$YX1mkX-`Fhma}gm) zgo^naMI1oiaSV@fFIjf(9W;)|O4acFdS#qzt|mP4lH8k+*pAEw?Jhn#3tyJe!dtqK z`GOCSsZei};7XDj4dhumj_d|hDds}<3i-Zuux!3O4J>=^5z4VQ6%fDM=x)$UD9T16 zDAi+@^=Wvm)m_GSyfMCrA$MGc{X;%x6*AR`mr$S3_D1tSLOx3e_pOwsQ{jk>cVVTl zvk-fvlLK11_MiHBzbzzDdE=Cu`-DG@&wOgy5f>6f!nWJRzk?SLO@E0y^VkJHh}Es~ zb+2OqC7?=P?LpnE_h=nyK|8QF`91R^5~@d`)OpsbzK$Y7a}2Kww94p4Tdo$I^{h$n z1VWe79o`pI2OI#>IjYV4x$9@`$k5>pHkpb1I^I{$Is*@O=tYUj7U??XK#KD6Z}K37 zL}}HxSL;Ee1=T>qh@C!qLOq|mzO@X2rAafMs1+h9WmdzWY`A!@sF2#Ju46hFsB8Nye`x7DNuA3-sH+O<4G)&PQVA~+LizqE5TL8|t+}uV5NLL# zpGc>#Gm@3gB4WuFS`kT4@v1WNSA>wLA-i75V%FF!*)0~ z!fpH^O_>qKkEeQ%kIv`MDPl}R38}{hw}qje+iin*W#iUMy9e6_m9d4q_V&l6TT7gD zO}Kb0RU=&hvRDKrvb>3+zEe(pRku{lo)&y-7kwAzKZ=N2D)`u|ITA^-w6%P0gcQn0 z>t$qXT=sDU2SQf>T_Xkk4pUp*hMbCxMRxKqTX)bI%<-eJaA}lL)ez6S5j5z%y_^3o z>PbKXFbPo!{O!cc)FLjjzEY(QvlGtMzYkglY;z0%Hp44M0I=@8su}x(NK@ug#+H^f z)a>3S1!~6^v;e<|^HJx#q!=v)2>2T^=`<-Vq(}Wp#4+E{mEAS{ww{U8@ zBKp;1HhQGnit}!|02bfqW&$c1l|W&VRy&w?ONvp%^|E(if-P9U>a1a!ET`mHZ{r@$c8|{$6c#7Q 
z@O$;jq((IGJNJEW3^|wqY6QGmdi{mVJqA11!qm2BY>ou$UW z*H_QJ3Q^LmP-D&Ljz3uu(qOI~1C#9Y6Z>^YoS>T2a(SHjSLWL+t)WCwQM}_?@1Uk?ys>rK^fu5&YoKfyT1WaaDJOIJqAGIz-jt|l#6XxpO+xgzBDdtQNA6J^ z(trwUfx{2e4cHIL}=uE9V3 zt~r4+0}Hmi`U61_;K)qsoHL)1jFxA3kuc=vNst9RUVeCkbW{#6@a&+(PXQ-i*Ba+a zUbLp+Ln2|ajJH*1hf^z0X?LQmje{uhvf3oa)fevh69}GanMdLr|$q} z4wK5AckO%&)EaH2QJ(qN$cR3X#opr~2@rZl2gZyHNgP>`%8X1w%a-cTsf68ZixnxL z&Nq9K1iAfKwQR!HmfXL^sMtlC-o}z&o<{*FMh{1P@|f5M#M9ZF^?4v7Sw<-p%~~)` zvcM=p(ZbWWf@c}izXqBSLUjB>^V6?Vg>;?rC(?!_Y#&Y?bcmmq3CIZG;z@x9@D)${ zJ__0%q-?m&7gwi7XTvJ9RJ<6+o_Kd4apFk^#_`o>`|#i-Pv&No#U9o$-D~(RNIr^u zWh7YL9zm_8FQ-lgI@R4>GvhSNR1IJ*B>V*Dq{o?L`@Sr-NtZLOK14kyteJeAbG++P zxV}n7&X{(0 z1`TaS%U+SCXq%M$c5x>Iu6M|_J$LpE)=;f()YvMs7Sa1KYg=x#@HP=oall4#J-gJ~ z2vk1Xl$MR?=wjaOI~{D;x=v7y{;5t2>?lBNarSlHr&;m-P~ge4{UXS3@%#to$8I=+ z8)s}m_#<+3B80n7Ak(0pD;Iji{@yMe6-D^|dIQc7-VcvoPXn3z zXmM1%f0}g#9&`JOE^_y7bA@wSg7@vwAPgHN+nc%F+D7EA-lwf|Af0V=nzF==Xf;bb zJiKf2+}G48g}5scQNFcRICh$w6PA0V?LXJjf=ibq zNvbY(FHbU??i&suLe<_Qoo@ns{?b){YkQGRg}44;<~aQx{-UuuK;|jjQMF8Y^baDs z4x#v4qkfdLfX^2`K%MIOC_eNX{_>8oSjpaTT&Ue^clTtrqu>ak%E`9h0z zRrpI4M(OvKU*BEoe%*X+ETinNbpYqpnG>g*wD}4qbM9lS6op#*t3NLg8>6}C`2tKc8Yi8V#`l;qf= z{!)G~vbZ3xxpQ~LEBjKUBfqkgC}s%H_4yy~==E*V`=>E}=L{7uYu@hAYTWd@#Dh`6 zrY(YGFq*vy=WaF>Zwt?}Vig_sNV;($^qW|!^lOHrO_di%@LoNQOsv8OGlO>^&e~tkvEDs=HW?af>9j<&sqBNzE)xV~Pwk~BU~jodX*sD@pg z`e;TLrI zKmDB#tHjmMnNeawDjLDK*;XRNJ(7*P%IrkBM0cLtzY7#T-qFj~LeEjD4y2p!%o>TLnj49|x;Ec%Jj4opkWk-~!pv7ub=qA4Cpi+hN)i_G6jmE!i ziah~rX3*31c2fK<*e();67cIj>8h2qtn^8N&xvu4y}o8q2VR zV({+Wbs2d2E=*}~fo49*S_!yUXfM!2Ea_C$MXmP_inkP4syICot3Fgry&S)TOPkIl zage;3uOVxvzd;H!YVx%L5O7;%Us*UR+)*7<)AnRnal+wO=Z7|@D4S?kbP0}1?KQ#k zN~(b5yp%iR^eCs2Q=lk5ZQBS=mC7*#+VROUYows$(FThV=B3y*C))P&HeNk3WcSq2 zq$4KjF;R7uzFzL4;sp0GyAj>AO?)3$)NNFoZ~M`quMw|v(|v&sw3+8jnW!!9I0-s# zVpgCeKI8%nXC>j?uC8<{$yS2rTIs8ibeT{e!9@VNJ~?t1E#5~`C0iRvQ-)cZSDs`h0lw@O=YcB2PZ8MjS+mfi6}oP11E zA{+W&r0$7zII54@yiVK|ThGhQy^tDMN17bp|J+?>m$iGQ2{)f4(`G{|HM(}VGY^}d=M@+@ z#Op2!drfEd3q&oHUy2z0t0Q9HB|zzbnsYUC+kZe++X09;p_rt};bT?Rm6-V1J2A%| z6J|$mxAdlxZs#Jry6<922BW4uJ$C&h%9~v?9poLF&fLPgV;|bXF-%rJmtxtZXN0%U zyNpJB(~lZJe>baAyk2v+cC1<0q3M%(k4M?w_`pN=W24U{m101_Oags7j>kzKJu6YI z7@(9Uu_RdjGVzS_2~qMc=QiOyALi9NoYseaG?S!W$9b{JOL0;kGg9WHULO8wxM z?!!ki(G`Y^UZ71b4WGq#sJL~A0V&;w%YkzrTEDXmqkaQt`>$f(bW{$Cy$sX^>cK!b zYSjTWB0cL3fz~)MYFZI0a#t$cF8p8vuZueLwf~hgZx}_86QPgZnrZJ#NIfAReO;UC zEJ9t2tUfp?Xy{FLZDK|LaA!w*Z7WQh>IzkNcaKsn3(Y4i#69WZZEGxqt=PfDS`%zS zk=lA{Sl}cRj*)52%SODs%u?X9c(I2;?VfLmRvxSnNk{N)FYdP@Rf}W3qCP)X+Y=Q& zSALqg@rXm_fm(5Iq`iBs(!Xqb1r*=%*xa>XzM$)RgQ*R?dRw2KO&P-oR5rwo&M$~C z7rN#b7i2oD>xOAQ;XGD{km956uAgYRdc7pg;E9{C=4g#nf@ zXaO?T*{z-h=MN9nIH5Jp6VBlZ=l$licy+%x{Or_HaR{Q0yS^l_UJQoP@#61COnW|G z;t(LRQuFKi-g*f|D%M?yn{@GUtiJzkQ>Dsc6?Pl@Aiq_NdnY*cSyy<(iham5;jyair7OL6{jGszkZ?0Y8G&zg-VCj?<8 z8;l9K_jbDYn=q9(aluS>@TcGL`OEiRRsj+N4|6Z&B+o$7-0w#jHZuWG;FZBfr*hCh z{hO%C%;oUqm2Y5A*^fK;+x!VR&p4aENqJWO!O*zg<;|r82GO}&;A6`r<68TJohJkw z@^XFxCk^U_DXZYa3^3tqXOQZWhCSpy=vim}WWH3YbU{Fgc&Pi|KKODfubb_Vx_=5$ zvsUDYnl?{l>!Q=UYhDkC7j;Z1whAx_iw}d!W92z*x(iYM~qk~5nak9>0`~Wv;>!{uJE^Y z`Ohz#MIK*jw;Vk48a06{J0MTX^kruA2iyT_Kfd`2IFy7-Al|QW53!lr7YZ85F#?ni zebGM)ZI;oY4`6tswiv;DR_6GL+Rk#K7N>+NaKA6W54CcpT8it1d9IxEa$AIgw| zke$mt{;G7;4&1gW1XKHxg}d0Q4ITX6(G%l!KF&?L-!IC~;6rKnyRn*bG4}56?hp_N z`O()Go4{)myD@PCSj;W4q&@u%EY{`p^^-7*%63b@mWUZ~colvEhFp{VaUX9O zH2FgF^{R_!r21nned#`bkx*6rxW7G9-wHcmI_xs!Y z?rItE>}tk&|I7O5+4FBc+c)GU5nB`r{lGe}dU(*$u<1OMLa-9FfhM=8aP%wJSMHs} zzFz#g#L}=$NX@qstMl*P{@-54qhyJZYsadIIbIf745H9m)xl_Vl*TmI&DTxn*|P!R zQ~ZZFP_eBPz=v@|qYnG%TDT{VKTeaIl}8W-Su*sSsWaH>JI&@ 
zFZ7a~n3w$6Wu2-bTwSyzE9Ur>Xix6#sVX}P!eJ}97x|2Ax;)k0+udmmrKxb=?x2g8 zLp%*y^=`0e8JI3FK@bLzp&k=WUs%2f$>I2uW_tglSEU34whH;`uwX3jh!#26P`c3L zwH136>14hkJ^PuuY(mUh7)-C)vqik5)%UhD1oA*slySjyrO(Lh=U4K*^^4ihEG0FX ze0Gx!se==L*-~7YCLn0RrJz&Hv^|3Da7Q=4qyjdcjvFraZ7RsZ2+xR|x8$MHv3j2f zqVx2*`Qlu z|2oaib!en{PfeM9nQDq@{*yUi32fFK3meK)k?davwuz|!56+dhzF5rr8sB)ky8333 zD$dT%c64YOcq=G88a4E)?aQ&m9i(QTRUrlL+_}S$$j!|yX^8sEHqB|ySFvGJu0%{f zn9(&?Bc+4k*!PPrpTf<&=F?hkqSakN0qiE30R0m+o#I~!SE0Xt{gQQai(I9{?d<9y zC0w<5{rcsKqMdJ)ND}vdydWhWUyWrw)A{(3^Z^%_3MnbseFibZg~i4D?Cgrz0shdp ztB}V;&GvZo%C_L%kSB&k{s)G;|HJnBY)GHJUmuH1AxnX#cVej}=5#qdJTL1MvtLZ2 zv*-o~NbMJq-;=ZdW*q%LyzTEh#j*M7tDI+uc0y{iMGk5-K@JoTwe{uQ&AlY~+&4~y~qPjT+g#x^XR3jHNMhPSQ*6Q%?TJN!C%fae72j zs`)$3-Aso6{1*K77sL_RTokKn-+wYS$VmPklK=dr{-4vAJ{D);mDSfC^8ffRfB)zJ zZF2enibAbO&%gZCzkmKC03KJTFMs>5UKPFORlNPS^%&FHzr0|$9!R8d{Im(E|^Q2%#H{n>;3twunQcCr9U`uDXN5JnbP?r7FV*w={{gr9`bA1>u>c>YI# zRFo#}pDka~u3uM9FS%~0uJ{l4@jrWAzyBWnF;3n91#~><&hLScm>0XTt|~mCmh(da zYpD`8)`#FIm^?=km&I~@Mwm*2B4Lv4etJk@_1#s2Bhj^Th%J4tQ| z|7*7idd`A22dgB zqb%O*e;4upYX3{9U=N-o4omeW*!|bvtFV?yANg6 zXO+3plJfhEa=(aX7P8G4i=W;R{CW3JM5BN>QV^=uzJ&kyN2o4)m=?2Dpi!u4yr6qv zkAOoz#>YQ4^2KD&_@+IB7HYys3Ae4{I79}tj2DU|(nzmC@{2IdDDUDroG)IwIez}DP^-xYSNm~xp07xAJymQ89!++kMaXN^UK)T3{whjVZDzm)N+e0}D)WVuaUqG0dgu1Y=4;1NepI9Q`jrSis z2zKY#I{@4JHQ4WNhC(UZB^}6t^g!tIf{?h_SSA7r_RqbE{8MKYQJP<__9N#CXy--p>B33hg(l<##m4G@-id8 zqQ-Y!#NNGo=f=*aTS4s3Htl=9!fP|uxjtIBbui+&$_uu!W$Ym$Mwj53iGEft^x}9SdEb2JD)OO(B`q2 z-4pTD#8libX5G`3wO_~83pcF`)M=-g`ubohFIRUPu^3si5}Ua9j`?-=j@@;>+RGHb zPG~BN{zwxYn1K7MA(Ql1XECPZ04e6FS=AssOyj2zDeCJ0u@!AnIa_$p*6<~4>UK%@ zhh4Q^-XlR7zU{7P3_eWv&!mPN57BrI_qZRf)95%ftS2s3+R&6%1=e3( z(IZvjXYc>7W8q)?quFI>)5@+>*TkD6Cq=9aWl3o^Z`pn!(hQ?sUT|gnOVoqhZ z>6B~LVeibHhXFoQkEZ`{+9-vQk4d0uodL_&+8nIs6w4cfnW2_aVh00)HfDS z`+{p(pQhP^#Wpa$`-3uW^_rE!U@X8cSjeCRt8S_L%haw*@fF;g6=xOqra=ToM&zGZ z28$9#S@|6d3(BX6=qLhBE#C@`q}1wAoS$z$o-^JEVh!KhdLEzt`w zoi%0{zH|K@70zhrED_bRyrN>&VYx+r)wmqXv*RmJvsy!4Po;i27HsRI5e@ti&w@-t zzs8G5V17-#c><*8wglH2^U--Ba+PZ|`$-k~sfH!f)B4nuqfnl-ycOu_ZpW0E&(pL5 z0!zn3pnXQ(cws=Ub(SQ)_@3LAzAg_n{>SD3GHDA$y0k68S z-6YS6APi|Xqa=>%h@iF*?;x1rvTX(2nb+c}E|u5du6@O87Q;X`&LS#36{3q#XMwIy z?`xmC0g1S9;-gg~8!X;D$;^df4i9H=WPKv%81KUeldi{RTVwj3EkVI>{Up2AXFr*W z)VW{$Ds|g+hU42FY#9%!dfK&x(A0Lk&)j-f_CmGZOX8(abwheX?p<%^rTiZ{Pawhg z@9p|Nj0j=fdHqaD(z_$kSo^%p6_9nDDX+Z$7I9&6h%@}`RF z*Rj_NPT3gq8@i(e$L}|(*Vh%^1i!^LO7va`Tj73AcxG2ui`?IB-)HsQ{%Aeaj?-3Q zb)+=jZ}dfvegHkTZ;P?_ES=n0A^565%yyBQ;g0N%(dyNn42eH!D$PL`P`4foE?Nm^ zMTp^|M|~+Sgbg(9x{1!MQav{yn~M>5boM6V?mM;c&1h;5b0TW}l&ViKC$@-076j|C zOvv<;dn$cgEn?#?sd|P#qNA2^F7iV4N+9DM7xDr-y%&W{ZTZfEEz{j-KVqN$TBq9y z=yb7mzYh;e40IIgD!cqr!!#gP{C5F_ZDc5if{!kG<-rBFETB8(lAbhayd(N40IOqmsMyLD} z*ny^vQKY~E9tLDl2idL~)Z;E9q7;N$>v10vj;(?PK`k-Nr3b1X;L}{$@|?oAP@^!G z@WhH(CNU*blhg%EWkxB`3rM8p5H?$NDoFC%*U$X$8C1^KB)ft0rYWs>+pFn(r;lf+O2-2T1hD?}4)y6-f^0}Y1z%?WX>rF6#k%N2p>S%>&9V19ICtt9<9p?Vap z=w!;Llhc8Ieq9Xy=lddy;_Kz9=07r-ToxS?6)3IOzuBI|i3l6g@&qI%jJ^?lBCkc! 
z;OV3rd)mE;bYM~Ne<3LzF8`q9q+fxQP`38G7@?|WpuqR;G{JV= zYJ|Mo4guNnP{%OE{U0ixMsEKDmwb~gaoJ<=9Bd6Lh{BvbeIL9IT@2knNEE1HHXpN@q z9^$s`ede0!H8&!mLvBd>Wl6&EvAbG{Q@;R=)aK7G;z4b_G75w={#TE zMTfYVH+onW;%h-DUy7a*A?s|M3er2!M|yG$tIx)W-9)&T+j`rMn=x>^p`MXbmDf5K&dI z)R;P9THW~O6JdkF19Iq_-{tL52o@tg}L(X5GwHYTngR zm>z%iPwz4MFV{$Uo3lr8&eGXSJ~^~#OPA9qIUgoCnrFVaNnwwOH~+n2{m%2P8C1ph1uqgt4x6^ z1{N61|jXop1u*#M_7@*G-b|Ab`hawQqU3jlRhx1Bd4tR^9zxf=C znvLYC9G0sFHUVkmD8d6Np@)Qc^_;1`Of$f0osYgSwHka-K?>E-SijrM^5hvBfeglk z5YcVlRl4E=6zQt58h~n{Y{fkb6_Yxk>D_+~nScLNv=pIn%H0yS&%4 zuWj=^v{pvXhtgH0!+~9Hk0$4681y`6B`cMH%0p~9&eS$r?ZPd#n@RcOPLI`Dpg694 zO3YR;=3{RGwR~j*ZO*qbAvFzTDjByK<~G@;-e)%V#uSrQ&w1_<&eUrY_LM2~$qJn- zxN1M`8}Ko9ye3uXo(aEm)8Yzy8jDuP`oni3A1P==x`V=ma;|0N=P$2Jw)LA&_$T!( zz7fAKn|gD1s58i)$1xjAz>WhNzjXC@+BaB~!or@7`dU=e3=Sm+r^X0RTJ8$uQ@*pC zGRnV*Z{;Ey|DpBj+F`Z4##6O-$T-?`ll>{f?prW{+YV3kp;K)?(wEb>a*g!2%vPhWo^IN&U_PT&sj7w0B z=k0Mp{%3*XPMkZhzIjx`Nv8Xaf|XjK=gfZM3G`NxwcAwKBZrt%V*w>en*-}5L~08R zW5ny@4gfQV(rfPkIJZJGI}*t;5$OP>xY4r~E*JG8nnkIzL^?mcDZpUCY+pPuf2_Xu z>|jJHT|9J&j;&_@Y;SIQ->}}@<)eF1mBKh786$eGNU4cGmNdRxl;xwr(*vF|(PCc7cv`BQxXvf>$?{|kdl zrY>?g%X4z>*`|IxQjl2JABe<74{{<-_|-psrh3|+V*Y+UXKy7<_{~Yf8MF#th-q_t zxHr_|Pep?~0T2z?^J+{vpZo*sYDQ{H1X`XKNWc7G>YL~e=Fh6u545Sl%={_&$FjMD3L&ywod`&+TjEv&F3irIS_FNdR(_z2&FY@nAt)NgQIZE^&b zw@egh6;my%5a>=72I*~z3c+RM1r<17a$#aglTp@O+J!#vDdp_xRBQFx#<%5?7Anz{ zzSdgH9jQdkI#ek0R#yWDKtj-l>%Qze%bFp$q^^zEj`Shu2;qw%Lk_})VE)D^o`L)T zG54e$vH22S`GQf>N-Jd*q%h%`r2s&6a;h>+j>HmUQ#0<4RVfC={%^xYaYDMrHl z>}Ts#CB4Cf2GR=X=O2nq@hHO}!^lKk=nk-;{Ty4D&~vR_RIoWLxH-p;B3F~X<=9+UD~8_4T|p9qclL^zL! zflJXHCi-0RI}{0GUP$W`>vy#z^1_4D)n;~7>ccqc!y1rMvz{Nz2Kjj*vBhE2X^VGXdeRfy^F`Wu& zK<~%2k~V--Nv8ExIO`q zYTdzn|9M#Ohd1uUO$3ju{RlH_tR3rS=$<*9o%Nqg0ZZ0cgFl8F55+e`c(rp!4=ZX> zZjgPnYqytCP3aPt4xa%Ow1$KX`vK;Vd05dnOM{Tn(NP<8CPRwg4%e$7z~<+#T1X4Q zs`SbbtFXe$^Se0bo70-+S__q6y>6K{S_!z@b3b3J5b>}%m z;)+u7FL^$8X-epL0jCU0DUr+L+NOjM!#Z-HOQ>FmU2N0@Z;q=o2n?c{^#-`@K{nwb zm}&20KWVCoUI2W}IdG=CB`0cbQVTj~5xyxgs2>X{o^gZvF1&kqj~0OlUHV7>_6;{8 zXiNX?_VN5X!?^yt27yw3!rU)H61wyz#f}*CifL&R_=aA3vG=STQhO*j5I(RHuv(0K zgDBA#zku+5a;;mBYH3z8Qj70LIQI~5pxS>7QncAVh@Z;xY?&J-&k2h!ytjPwlW`7Y zBk#Umi^F^fjZULZ;mvD--o;Uxnh+CN1O7qvhb~*~A4ss359C*!WHYqx4j#bP`;q!; z3Iz{BCNce<>G+WbCkSK85f~aFDfz8{u};jZd~n;0U|Se3j-e1t0zCy z=es$B*FW|&3HW`Yn4lhc;K8B9OHR+Lo)aJjvs{@aYxN494c*p$;Av>GO>QTXQ2XHa zlO)Y*ko#0s>Ory$>$YPyPriGr%HK3XS^;@LxdQlFiiUnMan97vX=eL$V{vkgvfJSR)gU`Y_i;v zxW;e^|I}K~kO4CY`}zdtZ~hMUo^In*I1e>=qHJR-FIz{?S4!ry3x@P}6eX>J=KJHl zibINPsL&xE9@V5A2LACUPwriiezjLpQmUS*Xc7m{j}2*&EN7A~#t;4_SYU>RvG;+n z@RVlru$_IA?Ac*;u}I1Id1PhN7L~dXDE6L9#vap70`&e8t>?R6@4BH${T!gLx5(=b zaeo}f-PW$Oj!E_dCDn5T*@wwN89D>wN>C0GlsK-8+0Yv_X<4oMw1ttoff~1PtDIJ; z&_voA=*34=OcoFOr@+MxhMm4maCK6PM1<*wIm99Mn4Ax!#scqP^E53DNeZMNL?_YiG zu3P+?V!CaMYOg3}OaE$Fat`T4HIqxFt!XjL1$`EFI%m?Uc;cxtU2A15e?@iD@Qht9 zhojpaznC1uCVKHaHcr`SqaDL=AGW_tgjy40LA}veP{Ym`?Y>u;p0kI@q|-`HrDjYv zJ*pdvSPUy_ud*?ET4>~xK42cKz3${<`@y(Plwy|J8r#vmk(%ecWm@J;q13)T%-F+T z;){oOiFFz>To+dsalScFgkBs z8mnJvcfIlilk>XveCftubVk+Hre71GnjOP0eW+fFUc}9b!%Dj9i7F9Vo-z2Yf~r&T zS`6GnRT_zV2r|TT55FK}dG9GSY*cR#jZgF=1iixhkNElb)%E)j=4H?Sd6dKm&T;#D zYQ3zk^D2tk@}HFO3owm5fnc|l`9~ux4+o1-&a#c$ZHvi z%J%0gEq==Z`k`fKqiVaUs<)$e4~`vX8-CLE1LbMl8 zrcR&k0f~z#u6s}`&}|~zhw0Z2*}FY$Ht$W4T8ATC&pA`w#tPn~LJ#9>N=u6&D_Py* zz()qU>TJ?`fyseRR@!K3YmI$TN#mKnDO({WYq>6zHo*UD5{Rf@sB=PCqdBqx0Tp-Lv3$S1C+n_!%*8C{r6Lrhqh zFuVNWF-qP{g3wiw7g|9mUbK0^1C!e1G0q*`btB(R#hsG(AFaG5HO*111R>))MH&_S zC}HwPk7m%smyIMP;@`LDU0f}rJ$|G{`o6_8$IIN5?2$pu(=2mU->dTcibs&G^Y5WZe=w|{PeTAcidm5YVI-g;-PHwDO0fK 
z_4uRRhcV9bH~$gy)URSubnfBq>sRuMHh~8y2NVaMDY}A+o1w(?ESMnOGEb#nOA_>m zM)5!=(~f#|_$BbOKR`L$)ET2s3>kG{h51fo%YDHf-M<_apz;}h={>1cxRsrgGr4XL zT;)Fo-V~BO9hV?V!xGYpd3mBKoycvWh(#b;teN55`xCR0Wl{y4pC(*)Q+R=cv(B@3 z+i^=yjnh71s7<13Ys+0Q2vJ>62zpcZ8#2e$kYu7pjqZorGkQt$Yk?9y*K0Uuw`X$H zRT%8Xiw&+;9R)UeA8vlot+Xaz&1Cssti5+YQ`@#LY{LQ~x)qVGTj^4yN(%@IDowg{ z5RnejYd}yyL5k8l(tGH=h$u+!1PDz!2@oKJ0BLV>?|tX&^WA&S+vo1@zbFJ)S#!-f z#`rbEHxtZ?r>mxft&Ea#J3-d2?#3z+xaix+7yzqvNIeiTHvM7{wOivo&KkcWL)Cog z)vcz6B7c8>nYbRlXGz=2q5NO79u#~#C2fES)Sy<$X$iFnQ^~La4H&J5NEB94vw3Sf z!fAM1%Lp|nd=uUOGY&74pJ3x2uCmrsjX8T~iK2NrDR6)&5yBo#FynwXexi5T=@{(x zb4d1?)qeO$7#?D&>&4G`54Xxh|6(irvAw(Tj|LsHSZpvW1AnA%%I)-|GUJfZ`RXlr z+w{v^>wM5i>OVm%Ugb=+?tqK-YVGqxj(_X5IE*t)OctwsY~GKtx3&Ei@~9ddwic0I zFq<4m38h!2xES>Poo>3^hIzI{75$+qxN^=D>OD6>bgFgOqe8YbODS2ulU?xMS0Q6K zPjxgpqHlNAEv4LgiRxhTbP~V5o=bINdhiWfEuoWJ?1yL`sK19}s!T9qRYtRNX&DeX zx>CM{h;bl7KR7V{IlSq$=^08AE9taQWP%LRIxqb$SbQ-b z!y|87O-lpr(CrzNnFi414$6*%(()`FBOl`vKQ@;N>o+IL<%RdYV19OSr5B3)NPHVS zWT@Q)kk5xzN70fkXSIN>o+Rw|FFK)9CMZNLY%?>U67)!q_4SjWCBLpxH`a4&>JO=u zR5Lq$zp}4)+LRT#_sI$?1|Kgm-~e4x4Zp~li6UM8+5)+BY#kW=Vpm}7-bCQPrQ~Bs z4X*>!qHgElIe&6l6*XcTn_(wBb%wd981alw3tgCpL1YZ=-cn)JRAGG#fEt1>PJ z`!rk0XPAu(DN4&H?m;VT#<&?*M&P8ftd`tc&&JM7)!5srJ!v&LX#YjRj1Iip9tTLi z(51gngUP_XD6$F6Y%tBrVy0@xigvR@&GiA zI>vH0kEcKB*wRY8)CioR0sauqtT`cV5D0iT=bJ7}X>t!j{$ON$$DgF&NBXdM@*yN>Y#!9tTj zrYVT3hRvCmjskg0>H77&wfou=s42nzv+2TwCI1w0fd}3RvY#P=u`|0gu#?}n$N3zU zs}YQZdxA}2gsQ?_zg(Wz+c)&TjO|ZN+Iasy;Gbym*8JBu zwZV^Zo7xtdSd%V~t{V_Q1dIpO1U3RE&W>ag~?M?}oGbbYs71RDjc zHdvxlO~?YPZ0jKB&G3$=(1?I`6<;oQtw(PLrtc`ah0)PzO;>Q-@BWnM{nFxxjGp(M5ycTKudrpxnwz=T%egj=@T*Buq4QR#%vKQsFMn_1pfb;RvVyNlX#LA7vDmQr?{XZaDJn zGu6GljmeR-IN^8uTaD<@vt+|(@dBtFYGgkjOM8fjF_B!`qq{2yY{FjgM&4^G!1#JDX*=Z97~Yi}YUS-{B?5uIGXUY*r_Tj5>6rKf%yPda z0n`O+^58clD#`PEJH}SWu41`u_`K5_UDEHF2d`xT7K<}jrjw}S`Qnt%67ZNShd;;L zM2#3#0v5?J!zE1O4C3;8uak6-$GLru=pjH|809i7unpjgs}-syR?qsYLA22Xp5TGn zI|RW`k!y8RdqNrp;4IjzEQJp+QLh{#K-agiT3UMCrTIr2l>&`t?N-1!mZ)RsmSu6b zhrc$!0?KN86AjY!s$l7UN7nSfaV0>81q99Wck$Lvzvvu(ZRXS><9Y7Ym(f$dekF4B zt5MST1X&I%m*(bOD6#g;4e@f{4Db+IPDN4Wen=w<-*6uGgYE{ZIiy!eC4cmIoE%Tr zfgI$#$=laHx93Rjdtlxs8`Y|&7bg-0XA|ky8twVyuhtb4ee^vUFX8%q71H;agZ*BLMS44tXr6X)AYmk@b&=z6OyKeL0_?9jMbb?0zIbA22W zqd2@=K=nD>CG0+?^VIjVW2HXgH+~jXU%< z+|%pC>Wz4&`n~##L%8f3gc^qWnxXbQ>I!>CZMW7O9*jPi96F?z-=E>P%F{k!xvYId zstLN?Jk#ou;wtjMkop?QLN0jP|J7&0P)fQ)$!rP_O-uSZGg<3u25+%7wjTh@*u;U# zCI6L!vA%ku*Yiy3n_B|0hsI<+1HbMfVNaX640}y;KRKAEBwS6|?K@uO{7(1PKzy$l zRAgzjA3FN$5T3<3@NK2j`_DaWG6D0B%rSGGAK(4GY?u9T1__6uu^!q{u8-pRjFtd! 
zhpVX`8^M+<*ABSRItOtoAR^&;>{S^4~O>9TtG}flKAW?IK^n_Rz^%`0F6oRl;@*McA}T)1;1})wOzLFSBn?hE8zZ2WPC&n`BP6fEJ(%gc^^hDn(+0PA9EWSdR_XwtnUL{Pwkp+q7p1V z_}+|&X2`Gh5MBKfBYlyUXSfYAfR9k-EdzUo*T=&T`y6d(GZgP$k1Q-|>P~Cuua1ei zRu6JLHoA#&f_H zDoG$cPiKdUPby2%`4_z|y=uFVSi)CLPtRP3k~fBXA*SyY+wJJ2Q*SZ!55-xBdP$3j zE2RlP5VdEhVzl`tZeji8!r`qtdZkJWRH@+*rP)+c@^)m6HbQxgoBu{OOl;#PZOPLb zXWQ4t-j^7HdC=~QU2OufEM!3V>3O9oqMUs8R$fvWTt;P3&7k&`6P_Al-FmyR!$8Pr z{N}tTH9v(A(2?^WdIVEnM!HXtC@zVa4ZEB>gVrvo65AbeU564O>rOL}`UROo+`e=k z#}jE6laShI-&@U7T_=a4(rp2At5#;8ZxTA&U`8U*CBK<{gBV#72QXgl7#k}SGoc=2 zC{n<@x!U{nR{e(C*J~tF_Zi2;9-K`y{Y9FpyLCZB2;I4anC*=*ewC8dyK|}0DLrN9 zVFNVM_BlE*3)Aj}3s;SKiP4DkpW&e&CQdk#jg{4TbKuHRVu(UO%VIl~-r4{HB> zGiT?DQ9vMm9-v5_y9QyH7|AevB5#$Gn74%inMLn)Qb*j|DCi~@^Kx>!z;M$3G3<`Z z(B%w>aFu3>ew)qX{hP~sIZ`lP>)dSk^8syGvauz#Ktq{@b%#0Y87Z&n zG!?Gv8sezp%7^k!r6Pczvs7&W(>5^*(jfXCJ7^xWk{v3kscqdf;jSqF!B>%RU7OeG z0=cv|vlJuu3vugbiN0=QguUuu3~=Yq3ahE#-aJP5C(I+=iZ*dX;GTy_Zclb>S*N&$ z1`xj&U^O(iFuWX}FSYgV1r_gvI7)(=0m($#iS(a%Q;DLZ8UR|L0i6;3gU8tX^sVdE z64Ehz2;2%85qZ|jFiK5Lwq59I=}+=(<-oh%{(HX|O6Hf7&ZfENEk}MinW@_gNliL; zT86=066x4>XO>)A%zmzx>&6+RPvs$tbw}OG!NZ};pKTeWj(GVea#2X<6$Lb7h5SxJwhHa|2u z=M(y|wJz!1xt{47EAMgK+nPs_*U7}~r~=XfA^rh8eGBGL)JW^D>Y^M}FGOZ(%B4w( zeObhIg!hvWGuZOx(G*4Zx5~IR>OA+I-*|{-!J@Nzc*It^FVLJt>yJ$eF2*WTe|xvwLiTvB%CJ!ACCAzxl!#pMdruK?RL6=p_mj zpSdN+jPf%-ioNh7xRzx_+8a~wfnqiq5nlGvP4xeI0l>q_eKBBHMEHzv=HU*OJK4IQ zUp+Bem|9woA{=-vUAgtMAJo~oK6c~(%MNQ_nEl8RSl*Jyo z`os5#*#R%8Pw4avoS5pvV_eO*|DVPP+KQ%K!|5uw-aN!NdS3W&;j^iqjKC}*@)40J zOJx2rkLE(UjEP(Jp9!#{{YefFu}b}gb~QcO_DkXHFOdMtdu2Oa=ky_Xc>d+vv(!(k)E>Ewze$2T zoe~~h;l38e*mRDY;Nn{+(FB%f48q`;#@%YZ-X}xOI^|SK`kJkdOCQ@uuiQA zWlh^;9wDCl)pf7(5zkRLtBi3PsP7n000;$YVjJ57l2e;ywI=`@QF-sa*ol|5%ru;CP0DfWa2J-DQf#8)s4e;A<}IfvHD!G(h_!nSjCVN z3Jn-UUCbaetNDC8&I5WMZrw_KNAQijbG>)>FoeI1FhBXk8PAw#e4qc8qd&E9$y^#| zWTI(Iu%98P`e{Wv)@V+;q-Un&*0b_~-m3J8^((AsOvnIFFjGZowo7AA%BtDs;!+z< zXmckum1X(8J3FV_(@eUR9+`^whvf5D4C1dl3GRTo=l+k2heKM-qZns1NQtLsYxxbV zr$Eznr#IXcyAql?Dy%=Oo0acZ<(JJzrR`+WP-pT9-O_57nGy@X+N>L9v#D|VQ3Sqk zhK=mi@88rzPT)M^EMDK46GjVRJk(zX6$IDAdt0Lm_VdxLpGBEbiAtZB#9UB>3SPyU z%_{}E@^{9-FAsh5DJj^oE7NG(cj`YWCdGAZ2oq z=<3tcSD)Gq3EX@V)I?*GK-fOL%AOdKh~78Uvt;S(bE+=PmwC7rQ8IwX?BOi_9Q@U- z`v$mZEa|b(9wEn7KJ~!rzMnm;q71Xuzh_E3qTU^i9_R+s^cm!-0WL1LrOdE_cu{-( zCxH2HFQH!q_<1ez#>6?seLCL(aopX;x!aKhqwD6o-xLwz$&DIZ9B45O)xf*f<>gsa?4V133*gWU( zZ$=oW4KT#%zbY)y7P^(A6TX=J%1^yrgPFbaOo2|q34QO}$BI{%C}EFv3LnsXRV{Tk zFbja47GS&d;qTS+KYR6z0*F?1DfoWRduDNl@-VsO9Erd#g_o;WeF3~ImL-h6XvP5x zum0dO6(u3ZPdY;9BG4<8NbPM)3z~iNU#?WkqrLANcD-*oLelZxm*znet=4-U0WX2mF8j4B28Ts~_IfyRo@I4Dc8z2b8LcbcDeq-2THp zmx?9zHWq{*2{%@#QBVmB(U$G0f7XF(A&}-Q=L)T3VV80j^s{rv+PssWdc_V2fAx5c z*p-aa`F-Rj>z~7(fS@G6>N+sBG7-*-#Z6DuI$ECga4;+>to7W{z|fK4PcSQe*-G)M zqHB4-nq-LUOa@Ml<7zI39z)%}9&;{@>sJXuId+)5QQcBQi_^X(`ZZDe%u|y@V7$1w z+5SiOl_Ab&Kty0YBXQtTL?S5tFU$=xxR3SlfklU_J^Z_`%7NilJ+a2;C1! 
zT1>qcJXsRq2F3u7CW{j;{i)&hKUS2>w=dsa*MXSdd&+gjdvA@Mq0VJ9t&x@T0%rXv zRnk9J(xFwi$~nD8!VUQ|;{kEgS?)#P-nvV^(Kvsk^wB_+Kzb>n!lH<d@$+7z;rw>PM+z!eeQpkDOW)GvRLQ*2N>>$7Px(E<$tGeQ$EZ$E9GR*9R6+f(bfmEyvOz7~l@XrV zaV7emDBa@c%e;#E&l_IW*EZ;3Ub)x1CF{&K0Dg;4&!_7EA-ZlYg8IAg>3{i(`vCx1 z*>2Lgy$36$sSi+Qv2nL~i}fnU4UAyMMvW=JWyhe_7Vh9EKKu=$`6cq$AMPJ-@4xv^ zFjGFi$9%&BoeyMBm^jpE=_a^LtLf}dwSX66>gm|I&pg{8;TAgE>R^0+5D`G}rzQh` zqe%Ss42yEKk=)EI9klg z-MkzXOwITXw({pN!hi8;$g+Vav*^y+w@a0Flg^HsaEEhGO7yFn;H02qT_*x3;5gPD zD}z{$<-weWR@;olTw+G}KnYZxm!$Q7`|USE(SNCE{suMvi_82c3qJp0K!Td5isydN z(&beHtQJ1T)D;J@Rd}d7U+`H#lFDN8rF`Uii z?xU~YaM!585ImixCLqLHd)J@rQMoZ&jX;>l`(-JnZ#Dn_UOPSLf0Qm!`$!F70rsJ)`)mpR9M-aw5d$ zL8$s|y+plPR9?EbDvQ@1WN1G9?CErnl%`klGO^g8(OAOWh9BJZYD5$BfAbssw?E;0 zS0J*dm7Q&YHogc?N{BK&4V{BZ%1d}4tpa`CZu$Y?T-24@Y9H@+?5zt~EOe&7vpe3! zqz+-0OP`fGC%OHB(r&ofM;HH^xq4x4;fGR=LG9a(!s&%OmuC9^x`|MX-p$h9+X8QB zIaOr&gm_xJXZ=Z-ifIpY`z{LBsO;7nxJ=yNIPBA!_0b?<2mBSU!87Kl;d9(Y{>Ha~ zX#@dG5hHytl6LS&iJPvX2t?VW{uojf|EH9X`^T7{A#LXX>S&< zVcl~+GDhUL^JlWnJNJk4oo-K-Lqkf6gX8m1wkLNmY?Tm?B^#3L>~x$6&~zMGhs zSmA+&;9uhbB(K0AobPCYbpJaF_P^X1WtWc?iiT)Pas5|1aoO3U<>ut&!k(pHA!+Bu zRc&mp)~~OSG2?wHhtajYe=HpSW8Zyrp;xc2BEQd0J|pE!Qs^OG!Ff;w-)$`w{O`CT zW@>^{z|}bZw?3PN?`lh-JbdN&J>-~t8Y7wCUVzu2?_DyYur*P< zm7$OLS10cOd~$(u_n-NS^}kXN49Y(Bs=hl#kRwob6gb52+wiM&v5Q&{Upw4tq<2OZ z+|KcdEY&E|QF-$Cag?;r{?G9e1gIC-cE^*^Q*UIu6mtK%0g~6T?wRHXC;cMa#G?x{ z(0YTd(v0DMzMOyK1MmyDPo|3dQ=+1O`h@?-7l7pG{GRqlo~*yg|Nqy2+vVSZE6TgK zbpPug@VD#N`Up&4LY*xBu7dg}C+5FcuK)E8IdFMFueblQf!>LL&?}%cxmWrz^nbZ~ z{QYgR@4@9=zGVC_%8|c+^S^xc>>jY};qHi4|LZH^{FTS&y?%r_U(Hv~YGCr&7;=7| zE`NMn1FT&M!1{JG4JzZXxtl;i{K;;nqIx=l4l-C~*7>x0$!0f9e7rYN#yc6mC$5zw z;>f*jy)FozGe@W}|HT^-eXJ%p*TW2DIRy#kVBnY@qn__6{D8h4VVecJ$c`z|H@_haU6eE^Kvxe`!vq8 zURi9H&%gT!TtVClbYW?(8xs#ABy2M7>Q_BXJH%ls>=FA>e#<)x8z4%$op6LIjgp;9 z??~eX4g%$fOtwLIlb29|US)9V$Z$>%RSIcsD9_6!qC+;_c$J6joJei&aPL;%Orv|x zo0Ez^^VM%})^AK=7z0{P=~i#>T)+GL4xp%iz8XSSXHR-v*f8?nOW}WG#lL?QP-m59 zSlq`8{1i(YN!;{8D;uZGT8fMB8Z`H%MhAbbDny*7nYG|KsK>T;t};3_>65$>;rm~L z%g6cZ=B4^{}IWSd_mwOw8<$+-XWM%bhY;HL@v9+qWnv^8yfZ5k>l4^P95chjf3Uj)XA2Vxe zCO@uxRq#nPs2C+Pe;%)2biV({TJ>kmuAe$6>qcQFFs*h`t7j$}8ps$MlbHfQ1Q@Qd zN0BYjkoj!RVMs*$8TF4DXVe4V+)NM%4Ar}!@G;-Ae_a`uHrpJpHYeo6k@jovV_urL z`K9;b0q<|ppE+~#f%=2X-)qU$5EXmJfsJe<_c#+Mq;h2BOT!Uj%sMZvAaM7&+nBZy z{?ex5Vn;ZYv41WF9k2dIt{Q$?4P-d+JwoG@QD#zqsqM|VY^4>dtg|oWjwq?sVW)t-pLQ{U1#ExE_9!J>GQYe;Oi3d(Hu<42N#Tg@y*~RO81c;k7<@C{V zy=5Jsg?9AVD8zcK^y>Ld4rO&7SQu2>tn(3DWfaKdY2h%1PvUYzGg0!I}h2-(9` z8ql@joJvWyDb^`6NLynnrDRC~%oLxUAl+#Il0`ddLyxwWL7p_+%157~htGYnQ_WS^ z#4WXv5Zc-R=K1hr84(f7AFtc3h;UmoJ9oGYdVD4(uB#M7L{uDN#!@p^N53D!nZNr{ zNX;D>Qo9$GjDJ^XWc4QuK^Z8TX8jN|4*e0so^`Ptwv?ddswh zQO4ZiXf>(LG;gFHZXpa-7p)5EN8{Qr$=olr#Ie&Y{`>B=e>{QRokaw;I&mN~z>`rMiB%>Paa) zLh^e%x&`AWWmp)H6(KQ=ovT%g733Fau(biw=He%HDVbvpzw_N=v`Pv94r{=DAlWcwFPMQ6=T}o=t&nzc)t1% zYt)Q~%=U<}{~|BzU>=RMx83@d^O-$uxy93e;enS;y<+vu@~K7#h5BGZ&XwDcnzoc# z|ASxWYORJAxSKbay++Mq0}q7ROoG?Hn&Y0S9W0MaWz zqhnPtT+FX!{cAyZW5Xe))d?A{0=N(QN%zciSwSMj9md$5X7y3?8Mb)R7Y%_q%XAeN zSd-LuplUDUTzl=cJ)^4r8c~p!nsARF`-m?gib zD}Lm(r-4!~Y0?C8M-TTVn$+D=PaM5HM>xbRVlOdy=4(#2yRPWv=}@j{@&}$d-c_I!(njsLU|_*S3$Ox&7#|qj}bGXD?0+oaJdQvV>5rXeQo$ z?Y{mw$FJ4;6-AmpIyCnB%3rykCr-J^zF|3fT6Q%_#6Yay0%K{fvn^;kCG|oIyvfH}(Ddh<~5lun@ zvTNggR*E-9&g-mDH5()~3?Q8|LT}S?BWm}5(K%g?J#~`5ekqh&`YZ_(Y`b9EcA91i zPW*+7ubVR@D7F$ZP_W(RiuHC!e+);h-Q?~G85#Lt8OpNCG{UuO;3*7umS=mX5Z`(ffujQ zzTR12oyN`iJ4T24={mY8GC4g?82}W(MPonf#{|98Q~S;Pvon#v>E$hSJ_lmOx^-iZ zDSx@jHz6W&A3p1Yz7kO=JYC`gKfq|~65GQGo?75tjT;S>lAEjZKO>mMN`+O7<3@YZ z_IJJo_1KT=S=xILWneo2p_EN1ge5<2yFC)$;5yvZdHBfVz4x!i#fWZ;@iSxMO8hMc 
zq0~n>J(FH{B%4M+Ai|-x+6TQli|6k9e(o>SJ}1xUl*xM28S&ITTdp}`SJ4RzlX62= zdDaaEm@o+mT#7Pi@+@y$g2n5C72N{Z9!?$<0s7K4vlSo_7KA$DICWQjo;B_b^YBn$ zted=_;e9R(OmI+s*U&hjjW%-MZ9l@mk{yeF>&@G#C7>=fACA^+Du_PcZX9p4z4rXo zdU4+9JJ}7>mzK1Vj0L(5Ben#! zMRWsZwk{6>y%ZgItt9Wqu@%Nsf%Fxq7;y}yR-i9ElYp|+hD$GY_1&@M$^z4E1N4@_(B7yTNZwtlOY;*u%JLdBX!;h zJ_=c`dFyD-dF$_rw-+s3n61VNbI{9bn4%4n{zE}Gb&4x|hIImzOm@g|16aL*d3bKA z*Pg@4mtog=-O*>-jLq07l{E*T$MqKIq;9BkTX|;HFm|A{X}r{=NUj~eES~dMJi56$ z@#~3!7dpd}=Rve#2)CCn5liq)9?MF}0kzIVhC!8;x|CsPfNE+y&O6?|aa3uhypv^F zcM^t%I2@E+QGMVRm*)ARj9SObe}9s0!J_>GGIpm*(|=u&>*u}1Br(U8e#q+Hk`>vj zsHEn7XC$l4J#$zPrGhVRCN$&gWaY~Do{Nm4)7XvrX0q?#8{$j`${{ol8|uw0OfEO? z(7w=g(I;Aq{>fiyvCfY^I(Z6gce*WPKlN)`wZQ`g_Ko2wf$gDB`jU8Q9y45__-3Ib zWZCbWY*&$+p5w9_K|!(SHfQH``8lx8o1P)pi!5^Uda4P5&S{FPtbe#TpRU!;dU|4K zl6_UgK3av_+jc|_urT`?Hr_`~Wp2t6|m$2kS)uC=okVX{UsAWd>}x(D!V|fWsbq+*b^PXjYpDd(i5Pr=psk5hlFyldGsBfB2af=%a z67IeQGAX;b;3DHz$Ry%)WzzKnAjKp) zgT$tAA358x!l2}Pi(wxRMGR^)Ek*mbm#4kxS9?Uw#_1{-yTa)B-P2#LPx7YSV@MD% z(-?IzDDoUd7V*-shbfK^%1+!i7`b_Z&)$tPs`Bgo$++Kx+MLZ!7<28Ceuez#-(`Z> zy))P!sj#U`m9h>^w_G1)D>Z8meKWYymt0?#V#dx$5s>i9lOmpk{VenFq#Lalr>7d$ z0Pl?)wRYG@*m2lh;@awOB+({M?V3HCNwU@2du~1A3}ux%5ce2<(s?+EL%`B$))4l! zvpWs5-k8+U;&H1fVV2g{53GHZKMKA2hhYM;`Kx6*bR}0gabf&@nl%W>Y^*@c9#iCj zGhuJMPwk9*Q4L|arFqQZrqDIsGP_AZ9564#`vT1U7dTr}Q!q>-HZgcWr_W-&Xmgc# zV~ky#UzgRAN$8dG-fe}-viwL%54IWhG0QRA9N;&pAoEPuhfHVuqzm=^6$)a$yVRwM zqXAF7t5V$E4=t=#%uuo6ar>s7j#5w{ipOFA(f@Tc`#QN#lj!k#w=T{YfYlz!_BDy9 z;T$C!h&$gHxzq=XjFlMZ78IQGX-RTT6tz#9OK(sdY?c$xN#gWTS<+L@jd5G1y&>-W zaxZmB3djT`{Z;bqKREmDdjdUqYD?|7$3(La&iyE0M0p)(Xz(oaR$g$ z8H&uU32oJ^PONtnx-k1sW_tk*9I~<|7xudVXt-H3<8GTil4M)D?!H!^>Vux4DeUtC zzH0AR!AKuH4a5mVODre6!R^uQ_od11T?~WpLbsm#kzLX3o$ZGgKhjI`r9tryxm&{-AoOr`u*ZJ;BZQFc1_^`u0Zs6bSdEY+6 z7QP(bvAg+!pI0V4SsLHtsWD+-B31D|pkCc{MS;{JTiZgrbv;k@LFAHWV4Qh(5 z?hl!xN~U@EX`UGMB5j=)TJZbfZx6Lt$%H&|7=w22AI*Sn-LY;kQiph_RnEdY;8H_)HFE-G!X(L1|W@QEmi)*@W)=4;&e zT>T)UtQ+bRHv0=Je44@yNmkxO)Z$<-L^Z<|N>oc)Nk?83dt8@alL&_`576D02gRHg z90bz8`Fc}{y#kl%`w(ZJyC2P^lXuQ z?2Hv}0}jk?)U@GRZWHKx9M%TEXkSkkFq7Lee|>ct^nFMPkFAZf7jIl9&KtIKV7MZU5-Kzv5p;N0O;uYZo2=|%jdb;{!E`1Q#{SlQR zrx3K+mSDX5${~!mrYtgc7JHkAlGMYte$JdmYFic#^k1I)b~yVT<%|Dxh+lj;pjQT4 zBNwWf#zUMZnhnMGxh#H5fBo&=f3%GQ)~A#Fwr`#L9^DaXz>=|lxGJ=`bB?*nUn+&^ zc*p{xr&o5?%FQ{xgN3V1v`f+~iy2n17jr#Y5aBI&uvkKFX`3OY8_WO@e&cNkyYh2m z2k2zkui)6$m$kk%bAE`mu=jnwYQaZivo8V6jAyNyU4*&QwYVTUSxC2gCKUiCD3S^**U?Z>0nEu$nkIDzEkK{>;Y>kHjLDAZaUivr*PQjn2l!XtK6}q^dpO=}yM{pz9&C8eTOhzL^ZX zD_b=L(!EDC`Mtqts(!@d)PCy=iv1bjbpd%JUUHXGt_d)@7fd18s=Rp@Y-lloL8S12 z_yh~_02qe^McSzQ!fZv&4?R}!M^t+m8zlCLcaEhtlfh1-!~HQZz~u?@2Mid|#2UL3 zM|UZs{J~6Z257`%mE3#Z{&8sZ$5XT{kmCFr&(NhhVol>>%JE#tV5DdWU!^3P!i-NW!ut5}Pyv|xKNBcn+XSam6Ks;F4lSj=1iI^PxPNxMV zvKnsOnBVl7sz?_?RW@xFvYBb(HYX(>hF&l8CqoG>gAnk<(Ga9?gjJHK@mTBai2HX) zcBknS@wz7a*=!W{`Zkv-M%kkG2ZCB9I9Tk%MTB4cjQaAaE7C6s`E}AV{rZ6A>28y@ z)VNt_9+FOL%1;S|v1?7=Tq?**uO8B#Em4$L0cb_JcCOGrf_gt6nF60pcR&SbL;4hPVwY@1;d;r2!2DTb?0bq3z+V0fY3@N z4{#3~!Im_T>ON(?)r9GH=}XonY-jjIZ3dhxB+(cLgi!8l4U=Q4Hx5_R(GgO+k86Rh z;E(`dl)$9*p%uiALgdY5Q zEt9Um&1ql6V({>!CJaCSAERy17T4at`HY;<>NnW8vm11A+I6vH>&V}Dv1hDBUecpI z+}{J7w!w^I?o9~K#eG%j(x3;LZ_mlF!d5!0CR+*19we8r>!`*DaFgZWmi=7(%TX&h zIF;9W4qJxsM}7_6-e_qWFyJ=*^)1LrVAh2H@NM#av(I>kwo|zDMzVo*%TfrVV;vh` zWfQD^e>cAu))=Yi`mOumq;VJL6|nfC^km5%qyvyudy(?~ zec+BV4N4|Gi<2sv8FOHGvnbcb3uanT4^!e;#!pz?RkTz>Lsds>lkhIxb5Hx!^&Oq#)M~9(Qp> z*=_YLP;a=%fN9gQKSBNOGlzdN=rPUt4fGK28bPI7>J-8(`jy#p&<`^4v{krhp0oLA zZ?dR6>Du%@m|zfj#CLKT$(w#SeAvR9DrzyiMZcY}fUZe`41x9_SGM%UFyV0Sp~($G zXR7BydCLZ+9KvG`LW=Fh0V8t{FfzTvQgXA*GE!4 
zt_|jmf&xzv6jLm&ThsUQ+^}oqIW%i{cwcS340+jqkx)C~ER?4!SfJP>Ltb}`RWN~K z-L`1_3E?!~N8bS%@S*Pgk!?=Xvd(dXjt2pv_S^HT)1{5r{Wxxq^cJ7PcoleKAZkzJ zO~#5iFOpeZ`ZR=0P*QD1xuIsq+>l!3%~P<`=X5p@0yJa9qEbR5S%Ztov3)=a(-aJ+M45*@n9$b{jk%FtJ7yx&3^CHw-^$C6j!} ztsx5G43C6l%t4sWLkvY2`3T zdLxBGOW>PVso9b^d~EjZN&tP}fMc|w*z{}pR@M@a56V5ki`sNmky?==#TIVL4lRc( zr9d_UReqO1>eDcn)nQ5p#j@F(AIq6%qC4ivq*RtK@vGyh82&_!@Cl2~2u%+ZQp%E5d==K0xiB|g>0#lRTO6!rJWBJz(iS;vteX2GPj-?}RX zeNP){ZeRfgfz-a`5#lR1{8iLx%#qlKoo^!^jkzG`x^_`6D#q=fQ9W1*^`D?!#K=`B z2l?{38%@bbq`4Axn>B~aGh9CQyO}+UXO@xhNPYe7g5T`+0655|{mFQfJlm*#3W?VB ziF&Z@5$%pGTvpQ!2P`9HNOi4|gI;o8>rF71g6;xLeRr-P62if|?4vyGF$X0b70rvL zo~~*E>F{_wixH;UD9)8{3_I9Q&tOuBIsg&bYmr{d1-4fD0_nM2>k|wDrm}nFqfm02 ztzM&}=Hzltwroq4DMV&xRi+{DXuzMeD6{_jA`W`rgs;6`f63c15e*GDy6xKEwVd>u zgXR~U@1fypQDfW`F|X9DFr?wq`Z)%-jrEXQqm3_aahY^3xyj)Q8bSu4A4}gh)Q(q+ zZyINhTEO?OC*}3a%LiZFX*NDq43nC&%?&$YrRx$cBN{C@mHJu9+>)LSTIarU3v3$7 zOAQwj39<;PSE~ikORpga+s^DXGq(s8_EeZFI?tyCAQ!W0e0wdK38;uzrMA98wD+Y! zm&DSr2D&j!LK;VV%%GB>Xob@+^*B5a)@e^G5f%saOy&bfr9 zOKMN%JVmQD-pwAqkD7JKOLY(RGp$8>_W=821Fv4v!CC}Iaa-;?j(i_mq@Kf`mc@H(`7C_Mn;B7)Heg!? zHGc-F`2uE-{kwqL@wlWz`M zd($RL_}h#V-hX=2Lrb{yU<-j>C?1oX6@eSlQ9MaA%;At36UPF})49@6ZzQ5#<4u9c z??Y0Q0vmAu^!y!E+YsGaQHK}eYnAoLj0~(*{OB8zicx!+7#nZm(#hgeMRRxbdkCv{ zQ@51pH{6{AOh{y~F5yL$%HltO6QNhj&hFUA(y@zN(v;T`shO3gtU^szT1x3r+p>DE z-I9@Jp{LH@$q3BY)5vNo`;z4SLjAO;h@94KcaiRk(H$~Cu-{^qf6DRNO$+LFdkCH> zPWyq9&51;1?$%S5*XF;lSX^t$$+n1%45Ylvb)s)q$B%M_D!N9gH5+!K>DZhj+SfEN zH|U91(U-J9=19;9%$?918M7K6&%%KoNX%ivX28%UA@6J-cQl)HNWp_+qm9M4c6Ppe zXmGk%J{8%D@k$UjrWQ;0Msk_-1L3|%N{Z8Sw}+x;!~1LX9!ozz`LU)rbU4Gn>yzm4vfRkIR`&r z(67z4Zq(uj7>sS$p-KQG1wMgQB`FgZ~5M@vtN3P}qn%V(c*)c;;cL7e*LyE67d%@z zLGaeS88Ka@sw4sDe6Pw+@gx6}gIhs^IlGj*h@<}S;bTmY^z)zU4uwjTcho2ES{sYc z_~AFV-f_NvjFnscJgA5N3PE^RvoLF#Z6~MV_Wa;R9*3C75=Fc!R=G2>A7&%P#2)+P zfe*6&>2jG?cSD|3bqQtY(=89?Rta}4@-+dvv+i`x%CSci<2#P{c~BP=jCvY@cS61O znyF9A+AK1>!YiRA9GAUQ&!b#R;aP_LRAyeG^n9<3rc5Yns0OIG!vHp-yLMw(qy~!L zZ+Y)dc*zZmklr+BqcR?BsrCNi;#y&BDVe_RTk31qe} zIliBk^7l{SIfkdcx~d*xVg5$0H{~GL>t_vCd`zOzB0o|nn^${q60jp%x2I@pT>4SK z1>%H4f@`r2V(HWM=PdonqA9Q}BMw*AQWuY1KgG_Bz$c(;n||8HBci@9hlzoe7PzYl z(#^20<=}6p%74b09?ps_yP&Ns9{cbwC7B;Cxy+78hmSolfOr{K?pUJ_9bnASU8wNo z5+BxB4nk?gAU^a(sU&yg7J1L`D!UP#8#ymtT5SrdT(hWmGL*8AD!Lwl6-nf-g1u$91seNspAXeL5-^_gS_6*W*EJ$t$8yRdIBz}+AjR!U zw-owZs`9ODww<#!;hLnm0yKl|!C_`a6DbG68KC%xbXW!i|6NvK1Om)P8W z$D_T0nJUgQZUc$^1WgT&bm(|c!J>WksRq8Lc`;I!_w{Mlky2nh{XL(qWwLU0!ma>4 z<**FnKQB6&tvQ-s5QJByX2{$mik@ds;w$Foyw;S!rLFr)q~>7Lk+ekTIFjo&YL#I> zdSI?MD(hV3C>+H#caGI}6vr8F-4HIbxaVSDHjIx~mt6*99m3I6HR31wHqS6IysQq(RnhfGb}Ub{y-wUES`JB!@nn|pXkMp*Y`-5-dV+m+GNY#Vm}15+ zDrpl%p9^3!=hYPG7n5B1YcJ`cRCFS_L@Aq>^-Z@HTn1X!W_@sP^Em?TXV%m@wF)AW zO5rv|B2F`6s1-&BAgCKqNfF@6Yk)P3l$rg2DcFcpe0}w_5nUdZp3P4MHrOHM%6or{ z5IaVTDK9^}?QA=rmv&-wURm0?Kt0Xa=DhgJpO@z}qx~)zKRIbaxv!}K6QS%Ms<3%q zlqOyEG4SQ6yherYNoiIah{$!qX&^@bS?o z!za|0ya{T{&*WWEsVXPUD8H%ejS4(}@S*-rUlG>E2+6DP;Ox|OY1@LkHrEPF;JO-# zypQD(x6ff4z66hR$1^-CTy51=wn>BN<-zOa>la zJIGGEQ{_MUSeII>nnff^yImjqHOBVym;LJQ3}_*=#F+MFA&W@N=*Wk_Z(8n~Vq>@2 zDB3u;m7kov1-heb)^S#BUEb`wDw(mX?~rlI)iMB2YvQK_Rk2!a?`_WW8H|a?YKL!(@$`Wa*J9rK z{gIu_MOC*uc2UJF>%uDJ#-V#d<>re1DO>_dUfk=_lL>~4B~EKP2KLL0iO5QZVH^K^ zCT9uYp}|w4))VxE`h%OXI(?N2@|FtTxHAG7Z|9nVnt+pjisUd0)$=7=LqyheG){bC znB*=Hquw-#HE~ltnTcf9bbHnr!LW*~tG;Q8ZBREFzL$e3Mn)p5yuS`ZM!h$l50NyW zI(n|X`8e^zqK`JC#h3yOE>v0s2gP&SIqm{Ql_E7zlV6H`8+etkXWbA|9p6(?0xS8*s|5tuy?dXt_x;}cJ^%b(zt{P5m;q+i zto5uXKF|9p+2G0ZDj6brqWHx>audfnr8?chCTNE1@J9LZkcUF)Yh_@>r?sy;=|`a; z&!@EeMD%&au$5K?n?nzPl1gEJo<7s_rtW|XS-PipgZyHBX|O{jqtalw|7v~qR^WPa 
z$>FEew;QxN6ee7c6zCOpsytojRY2$rGXDxw*-rH(4MfVrB1K_tIZrl|cIw>vT+)`H z-<78l_1%Hfoki?hEa=FFJyscS3-#tn}3)Dv7!uTk{zrJ-5h3;VidjG zXMq}N%IvffTh+>18MYQdpc3Tw*QMENpd4crb;`H9;`F%>qt*llbIm+!&Xml3RwuKv zR~SSXQ92oCw<&1{^Nw_m&O%fD|62XvqMxob!}Z!DPElmr1R21U#xW9?*&d{}iGM3? zi8w(WN5_y{y~JbngCzd`n%b09Q?m&QTMS>S+iDJ)w##H7#dOG=!H#N~8GP#Dh1*yT zor5N_^g>QFy&Lkze)9hEO}irlagL}W|MO~SvYKO}Tz<)Bf6{)8=P5RrtIaxL@d$)` zDxM~?{c*>_+uxopC?O4bRJ6=es&qeGrwTRc3Qr=3S=4fm6ovd3Vix35gz@1AleEU{ z?i+w3gBcL*c|nSc5ZCr4leV-Uud&mgbMJ#1DwVn+Pvm`NrHQf?pH_hq#l+)D_3bb} zV@mU@OpurA;YQGYZnW5g%xO6#9Oq z4Ps-Bmu$70h=d%fis#TO1WJ)6fN_9OF=<~JEFQ7>ZG2^$%AptD9vcdfJ8I<7k$c@> zzS&j~MiOlyRyb7Sm-e7ws50m26@hH!b}5@A+RN;3WR9MC6n=JyR|R9a>>5rmOI`E# zhUgx>TmdQP#-XYFA$w6Am_=6zer_OYz)fQl1oj`ORhkY~K;jOSgl{(Pkbdd=j*8Lz zpokuS@0;Ux!9w&`P!PGvh(^M`A{)wwu(BfZAlonA`ZTA=vq$|%xkP$0b(^m6izt3O zva#q3MapB<7GS|DOO)xRdb%RRlyomr;^kzctHWZ1Hf!4n_6rOf@Qn&_oNaX4oj2I_ z1V5rM`SoZ&fta67@h)ozO3J!dJndD=-c+k$lvbW0x{-d}PBXDj2F&W+hD5@1d}t~KDTF)MbuSXvt+4M2rnzz;ngVl-!;6S4)#%5_t}jLf zyT}T;5|Gj^en|IpXdwIS=^)3VHbMMd1xN6DA9Ky=$x)ZF;0tFl^A+-hNf8N=rhhf;p)*juXI*S|^ThoZ4#GLNAk(W6DYXMk zm5JvVtYXFrnX2l+Vj+YPE;_SWoz)u%h6c*ZKs%KTvhEA4YHIXqk!R-?q(=NMJMc%f z9I{#5LCisGzb5?J9mOo%tW3QN#TnXXr!Y%r-8y4Y@8hreAG;<`5jAtO=M{m%2d>A2 zE$?WDUp<1tsYQS$&oRd1)#AHqCA%O{y$R*=Yo3NKCs6ZVmz;XNmg>n+Ch-TcL^>WW zk#TwU4@Tm$3I|K^l1@*epJf*wW8S`AOZUGL?2olhrX|->jK=pJf8?o7Cf4nu8S!RW z%8JFjH*kvxSJ6leza@Zl|4}&f<=|mVQETOy;&(#5(rS&}=8z6e1JdDnSDpz!N(dNCm zgHN@Z)0vI9kJ-tErNa_s4b$Ijd{`7t5G`!Aq?yjfwhfF%2ONlNX$EG1z8ev(WR_xFRBOEneeY! z>gSg%Myk41itQ?2FpsZnK^P@O298AH-jC|F5EalgQpNm;wSJil9P? zaA@nzS5_J^CF^}F>wju!@Nvn|H{v>-#li;PEPVGHez-ao_z>bM`eAM8;BlW zm!i9^3c{Y^e_&s$&+&6yD1-In)Ws>Zj`Vyc(k|mY&* zs-@zSk|Ra>$BuF`$e0;F>~08fGV)%rx;1yHJBoC#d|BMB9H?)pUeI=7m4&{j$YvmV zct7RtCP=~`V8Gb4197dBZ;?yIN0&|*-~J9P`Nwj*;yymx8Hx7{7uWq~>HUP$+XLFu zTW9tT@cfE`DYus9&4kv0(lTCeftB7>YRt;>Q2wN&zzWcGe^#IH(5#MCMHDUJ4ax_h z&Bul2{P|CscXq6r-9Avg`jMtSUAczb>`XeH@6F^&Ec&&HdO8=}!Xg{{*#m9r@MCU! 
z{%)qee_CX6z#q8Hif`XQv5Lc`+h#&}jz(o6x9-_#7s^4~wm#-b!zLakYx9k@s7NiMbld z#TLf-o3yv1X(am>^eM01%teGsZ46Z<^W1fk*wJ>WUB zXV`Fq*k6glOy<_xgX~N|)K9~yIl~)2-?zR&#?lsBG3G0w&MN>LRVT>r&zGt@=UQLi zAgBUHUQtAsO`o2EtwL|qo=@e{^|y)1oXiL^NV8gVLgZc@y_@BD2{wxLfj>0Q986Aj zjD-(abjcz^JVI2BOj3FySts)O1)3rTET$|_@_avEopv7wuxU8UQw-o#g8*rCVl*u;?VRM%&W=NOEGK79QNoWh^% z;*uDJrhz4=l3o*SVv#h~bXFC`j7>Z17Pc>QevGQAPr?cD33F2}mfIK=lq?vQdbtin z+BETT2vciB#B44&7ik)#eb*k@$|)~%JooemAUXBu(w->S0rhnkQ4fwPGo0c9zX=E( zlaRjF3|XEDEoUdshF}I}H8Id~-Ktu1WFd66xuq~=^abH;P!C>0WCl0fiRT3Uk+Lh`|y zpX-`VB3EgITpLFUt;Xas17!*7R#IU+r$61FTmL*e-BZ}yVLwFzj`#A-&}xej!9&b+ zFvIn>oksMY=Oz%=5K9&h8p0+hPhZ#5HUqha4{z*R-$R98Q+b`{_PEAX`(fAf8r@M48^GQ_lMpstyndjn+9gFSnDUFu}f zOO}Qbw|khaBjQ>0ohsJ4e=tBcD>^?K+Gl;Y+Zt{E2;*GT9WVi)=W;TJigY2!BBKbj z(udVuripMJD8vz_rMJ3PQa$^*;--UM5q$c7&m<5@%lzuGuI6O5g~q@@LL@FN{FdtJ zWNO(}*U9Fxg%{X>Q2mh|GXJsRT&ra0oQi$#xi0N&ES)z$kGP4lv7nqizsHY>cgbjh zSuSxg)#nt;pAPR_*%eobxI{I zm|Pr^Y1gfeDB*c}&T0fM{ev>0a}YTZUlXEt-`tXjMAA%z7}_l6Pt_)fn)T&0TFl+4RpUj~R$0^iYdILse zObk81*`Q!(iPfL<{Jihgp|cV-Hq1@aRf4uKc{MY@oL)-Pr85Bg?KHso*aKGHXPdZl z$ywV~yZrqoVqX6!I(4|KQ;e>d>3Vw}#?DuI z1+||f7IUr&=b@`}?CZQr{O2LeQ-{_f-EN+na9CC#0#yK&s$o1DI_?mS%N8ME>iu|J zxGX2jr>0vsKpF4YqBECgG`c6AxVnvs1fz|11!7e8`*tuAe7tXT58{T?n=#%yV zh1;y<#S%wdb;sti$0_t~(>wxo!)B02suu^!=3ke8E~SNQXB}Es8(!>@X75J(<6f2* z+ITc~ONI0nf?uZzWiexEUh;!CCa+%aA~?6rwZ1j9MxTPjFnsrXAiPb7$ijvC2(zmn zQbkC{yRU1QHZ7Wq1&|i1y{WvEOPhD$B|iIlS^E{u-W##Er$QV@t&s47A@q*MnQiMG zOUh5F7Y2YBi*Y-WgK?w%d>Y$8=7v(ZI&w_`C=!=ICD{!Odz0^cP>$d#jQd2LpsZ1f zAcw4RmFCP>$Faxp3siCq<<*8mPLp*e;QSD0` zho&Mzh-Juw2P!3F?=|Pp zxHAdNfl%L__?#;Rze;M^YMkH=uS+~n2E!Qd$>V!CJaH_LA0veo!J6huJ zdmaL{teE$BnW62`MosvN)Xl;A2ja3KOV^{MO)_}loiI5ba4_`F4@TMPgo~(+{-USt zC(^FpoU}=p7hmDF(0zKV`S9dL&pFjko?VqEDK^u=_0%j&dy`EPxf^{e&3F3lPGQ=^ zxEjLc1q0LCg*PT6`9wT~gL;>Ql8cTEXGqY}dY zA1Wjmb*Ozo${I=w=@9{Zac>RMJ%l@-igK2kRY+;z zJ4pr);~Y&Y7GvL(7$*2I0lur2a*FDfqJ@+MiL{}=YLWtuLU;D8T4f0B;<*B*S`XW_hP_k0(y}=Gpkp`9md$g|a-_-n%27lROmtH!qEj zyW7}Jx+m$`ZUMh&(iOjMO($J2K6cTqm!ro0DP|H>MhD!hpM)3i`(L}l!eR|{CHawA zJ$9kJ9JQ5vrG^1I%;orgsuN#$Z)S+!)1{!?EKW!5_-6?1uUAzXAk*tm61rI8%2Ehl^E_Jq@r z8xNT_Qwl(7=2R=+!zC0Ej>A&0vRH|3NjpF@;-6?v8b`@lb8qb{Yr&UcazM9meiks_ zae47eoh)*XLae)bzD8a0B((v8A& zQN#VcP(G!Ew&>|FowS;^C1;+sy!4QIP_GtQH;_Z#ZP`dz)}>Yfk7S(0RVo&+=d zBIv${n;G%#{wFYk+7Oa-H#6HHwIdUxmO)hO@x>&J0C z6ik1q8IKMXUrzhZnIxB<{`(X0+woGg#DQ3YWYLncLilxj#@%;GyLKYO@wDcv3d>CvlV?$I&dL$Bh zB%KRTd3Oaq6^X@_ZxXg4-{C4!@h8ges2BtyEB)%Nknz$bkLLyw_A2~>9~9(C-+u_V z&s}&(r{4!DVk)6va}!{Kl#hAHk9C|w{3r1ch0U;|&M{RX^&pXsoRo&PFS*h;PLDGN z(FuOcMH1fer6uqA^n8Vgz7`Z4-Dx~iN&qP~bv>e=)mAwrQ+q~)iLtWv*EI%PHhm8< zDC&Vs0}A{D*0tncU`B>9333e6UvjnHcHGgy)Psc5>gwd^f1QBJPhIDUx5zMr@h}Fl zSr;`jEIRfdrvSlT9=U0=u57O3N|Zkq4x@COQ$>1qm4(!i#338089XOOwZTCU zkQ?yp7Azj9KO2bMsVlvV?p~=k!aVQQR&4` z_Ai_XAcB}je+BrR)V@)%AmkZ^;6BUv_tXZPPVNh~VA9xopZMo1?8MDniwEV$#2#by z#RO0BZz+@k4z@5_L+nlCMdVz|sG&3R)8U{;9j zN{AsPWJ2rHymq?XXFz8}72~TfaTvkez?K2gBkO5c;cWi#muMu&{!d?h{H8DOYI$*Cy&GgXE+CRNk&VBfHl#;_ z^dW~h2wX}KCM$VcAaZdqdXDMi16^6}H{n)uAcUax>1S;xdC2eq4|nnwB94qBnUDvHs#h;^sj*-dK^L;E4%?QD6rS6UD;cz~{| z4Wk_Kdk{r68!|E=iJRNWUf^Xx>IIP(DG!hrOr_*D1Z_fl<*jfx z$S1YodN9_c`oJrFL+Pn2@$!(zv!-X)qu}o}xb8zzYc5SriXt|weOUvF#zu2&RP3HR z*1&XMdSf|NKuqB>)jYfnRJ8Q_$@v>9WuXx6QCDqo9XUD`Egv4KRtlobstZafA1cp^ zYq(QNiwRmL8v_wvUG5~Q}l^3&f*x{Fadn!e`-~Pin#jUb* zJvsXp>^?C+qRj7dQ4n{E4CRAdakzCsJUU_12hJX`85CW~sq4Ri243c77fL~np2D?x zNN_Rd&}RQ32uIlM?oVAJ1%sn{Pfr#0FS~lH*u5gMR}$g%2kKI$D?dgZr40>o`@8Bd zt?C}hN^J>pp`g}Bo^!DM(5>euvL2_bU+;`Gnb^#ZQ9^$U+C#@mP?K{2o8jbDv{q3v zWg&LqV{CY6pxXqpUKr1_^T5zS>EbMdin)~xFtuLg!7 
[... base85-encoded binary payload of v1.26/tencentcloud/CreateTKECluster.png continues; unreadable image bytes omitted ...]
zX05-74|$uWnA`cA#I{ohlVnc{=J7rSz@un*GF!F!YHz9$T&)Imc8Uc~&ne%Tc*UVCas4rewUZ_U$EGbOX0fmzD zUdI#WU8l=}CtsZJl0{I+Uy}@Bu4@)Fd(PxvlKmc+Z;hV#!MlwMwVYo>^l(!qE08z^@xLK(Qhk2bDv_6^!2G=xs#edgW*NnFBAGNE=RF4i=cA`Ru5Q` zb~hiC#l70JpN5%7`i#c8Io9$YN^xcHmEvr}{M@EsnyJ?{V`V0^DsU%UT2x=YGxugQx0N<%sdE#_KzA@|24H!nF!?$%q z*Ar(Uluv!FcofBzU03hz))gyvgP2PQ6Nc*rOp{>9{xv%z`Bq@fy%i`lp4?^2B8@}p zlH^Po09?%QgcomO^>&~f#LQ-Lh8eZ$&fwe(vKrTpbcS6}8ye|ww}j^V{;xk6fN&pKr&$5$uJ8VJZnKHmHKzDamV{4X1CS9q&-&-r%I>$e)YFvbc_a z_EkYi&o>3r1g1%)C(pXLKB{GUUt%;kcDGT^1Btgp=Kbp5#H%uCtwCw{y6T9iJkCIv z4Jw=PqP|LAa&Ou7hUr@G3Sv8Nv+qWWdnm7cc;DO691=9=KaR zA5^pT^E24K#GqbKJ1oUlyw9J|7k)SKEn&GWwCo|-%YB&3=d-V4d5PUCJIeyna;MK< zHw3h$Bii>oHhl0dc=g}IsThooDOqz`048^=7_IW5QIVmOH`By|Uc*5J7I~k|4b*6y zGMO$X7?SNhXFsxgSe|iK<7FV^5dfNkm*?r2xX0dYpoqZtDS7&u|FlrlFU-m4DJS*k zY@aLy=j^i>wDXr#Yg;%=(QW82ZgxA)q@;)m0=%zLpH=XXu*BI$&IQ+#aedQ+Zn|Y6 zUncYA9uomP|F*x5JaFuIWQgyUdP{Si-lmH`0=?B(sTXG-_XRNKW!+>%bCW@JU1g?A z&tI^lO~VhO&9h_`H9iPeF1x==%(xW_8HuM)s|Jhq0T5#TfW;(9zvk-Kxm{jEE9FNJ z8-5)$GIAM&hV1uuko%2nw4oqvNCK$rrnF4bAf)%639y&Fj_Q-T%xVE21g-B(0s#!) zj=nV6HAesfz9j}#d9HO?c*6~{^8Wn()ri?vN>i(A2RiGLg}?r$NJLb~={%5CPq}uC zvE_sU0fD+Xj=fc2-ej!=EV!>yQdNIuc%aBMM|ZzpQeM=S2i#;i(&3iGQ+_mXXGkK9 zn3jS;pBoAVng?GbdT^`zfb#Dqux)RyBs8`i|(y^+?$0^Vz ztVZ%CL|?PUD#yOO)0CoBQ$6NI7zzc;rKz{ZFw)z(p*FpTuu^xt zCBLiQ-GpXs0%z60*CRXN5$rnKyrFn5-WosPVNT;LIg*xR)%V3xxdsz&NZ+&Bui4&I z`i4_&#U!s>5&kyedjLgipJ5+ZGj@1Q4AZiD$IW%Qdrg>G(7+q!*e`Nhq&)a*G#p#B!Ty$@FE{?i@Z; zfPo|(B6Kj^9nm-@#+x6SR#j|m+#h7@OQo%83tTj`r}-lceF55^&?mpPLc1e=WX}MxV|a2^T>0mF=8RTIe<91I zut>$6vckH6rmO=2xgj4SBx&lAvj&C21m8KuQe%5got!lQk8`7eo82vbzs0zLCG-Zn zPI_qI$D|~uJpo6;ukohjhZp8by6a{yxz+vK?!C2F)D3U5r73!z$t3}bJ&A`_+{7V) z?#{mYgzS|Qv#|~mOQf$LpEM%Y9zs(>zTPhinBrO>;m71vPHi^9rM;%4|slzSiPm7hwP`wRq? z^#Xk^XEZ+pp zJ*bcZFQk0Q7FBKgO0RS&VemXhh8-SVHc>zUC5cBwbfhZ<(h`AM&eA6#ElB^45NHKt zJPlz`Mu2|kEF_x@k=X+Q`)MqCRlHvQYdugeY%$PTv)SLY(Na4Gpt3rHQ%#eb<$R_o zw@SFS+YWc1kYDK!QG&EMXCVyxL{DOZZSlU&eG2HN4uvS??by{t1cfFwyI;`$_G(?r$k9z%uB9om}{GZtKOv~{FD*?V9C2J z>B-qKe81rL%KI4C~%^C8ba{yR5vjWG0Od z!8cn)zwe0Sh_W|o?%4=5&nt>5s|_wcvisO{Jo!IV+1H>pOm*Sj?_C($TOFs^!bhw= z=W^-Lf5m!Q(qB1o?#KephO}Y$9^LlIP+IepC_btE1!&Ic#( zxJV+PQh}^p$T_F@9hw2qL{KBI_)fkX2jF~E?>&g@&HNEv+X`iRS|+ox9wByv8e6;v z=lxhwetG55^uaa9qA^~CWYV{leG}Ark{9UWUDdUxEY0g5->8X7AEnTN)c{ zT;TCKi%H!2xQ{FO&Z8%c8#Rz~)XwT+#TPp<+oHuJ*0CC21o>12eDQ zGWmI4T@5}##D*5E*IK{1SdHMi&i1xO{cf5vc7zz1$rxsUt(^>7j& z9JS@B^w$ZMkPw)##_)Qy{B6V$@xhckN{M+vYx>tiijR(Z#PGDfwmCeJ-`F2828^Mw z-amDuC#t`TMpPeLcY7qQe^X=-^0N`w$L7ctN>E-2Fm%=&`<39%#vwNfj;aw43N)1Y zxCl}ne!%Uu`V2Gg2w^KttFY>L`L5LHiny)9*T(9g3`$M68niwNWr%g1A!*1}3z9gY zt7tMQA{(NMNyy&)5_M83AObap24Ld04OS9>N{Ev8Hpo(m%e&QhuAHi~2`i^_z20tU$91vVn3U_j3_hNf z{wR`4eWN7(qHa@OXQ!ujs}i$eenl)7723rEuSRC<(ES)2#1kFKtHYtF7RzAN*2dQE zX0So%w*9Z>6gly(f+qV3Gn3HmT7WUx0Nuq9>rPX>-CmKePdA9N`xlUix^&9Oyx`cnOnoUoUJgq9{XZf3UISt zwpK&xUzUKI(pw9SW*|YgwHi4vI7pqN32ZGC#%2t-#0)8rq{E2Pn~OYkHC~oVk8Ze- zEw`VwZa;IVJw}TOwo+mjf4m3uE*1>e*ozRK1NMTaDAwem0wrwKGOZyBI^nX>ZTp}{ zT6OVL$iU4@zHX2hmlxDRB<4MZ4xD)d!denC#6A$D&+->7voH^vN&b%+4zt;)!S&>(ZROs=O$vF7>K9hyb&ov$P_%I;+i=PSJA64Ry?*qpkx*(lZ8r{ zisMuCMh)vekA0-Ic40+9!@AKjzssVBf?NwPbV-&y^zc$Xq;JqGdgvvxVJ@AIW8i#1 zTA3r~msQ5W>5LUl=**_hSpX8)6N;d_u#dB&~sqHap=NqC`SD<2cL*aI5!c@!|*lT!A4 zacb4aok0MnXR}H=O57}&OcArOLajRPEMRrvp~W8Fiitl)0Fm`e^j9zKv534?!cCbH zW7D3Kue9%L9T3647~Zp1PWQc^Q5j397;#EQd+^ zcaX6#<&@5=wpLgROkG_!Kz8f&q37?k5V3^cy4nl_R8U3o%dghk zF^_1`JvB5#x4j4S+dNCaAJ7S7d!Me_bfM8G><0*R@XSS8}O~=+PXb2GQNgw7geLvF0 zS8N$Lz?wE?58&ScH^=SNw)+C79m2dV#|~LYk2$^Z0)t0?HGj53GmGI2<4)-ZOXn|x z#nP0_1z-rg?On9OU}(3N?{&{=6{t8l?|ce^%x>oPO!q(g%?E@5T^m@V3}vNCD^i3& 
zWpkZMzMx{eL<>oP^>w>w*3+jD&u>7DNn!|6o}fD2l8vdRMUy~oJhynv8VC|~YIVRE z!Y@`NlS-odZw%5iJUKZqqc_Itlqprp%ROFTwkH{Za!D1Ny7owQEP@&$ za^AD;VAdw7F;Qg~4B`oMML>Zk7bgVGt{7;ZbuX70FQR=_k&GcNG{H$FQ{GXQc0)?} z-=uJAMfc!P+Eap-yd$8ETl+?UX|gsLJ}VPeP8Yu;yw>H*?p0=>Y^Q+_J8puV9(77V zd?HurI$xkZ)+k?dVeJa(6D@6XEIz`gm&Q0Rdr3bQIpHEBGPq8?|2CO%ei2D8iAR68 zELP5Zx!eqK?%u9;gB8W|dCT|PW5IKGA8z@l?8<02F@~Jh`Eg}%knhb?`@wf%|Fneu zWc(=B4j9DD$BN~DPk8`SNdG#2)Q$n`<*#LinrNFybGE{f@)HTp&;TN6Zy>bJk-jOP z(b9aUuhQ8}!@TSOP9b$)H+u>|GtfZJV3M{M5>j}ncbC2_|FsRQ$ad)bk5tttHkJSY z^NLoTuCz(mTRFR>n%6eMyDn$kWT< z)MXnR@&4*s+RMxfC1r|w5>aY)7KaCF)zACxrQTNw6xhxCHU!vYBOD|{-{LMYBHv|M zP1dJl9)Wh4V zSp$d}Q{xi0SiuUDsj0@2b!^r$&IBHMcVe!pDAc3Re`xozrqYZt5W?acNx>x1Rx!yS zvBT>VjeBCBbMRTk_5{~gvU=LIj3CI`xIk)Dx2RvkX^odpCVw|CdAL=XC0kJG))y5bS~RaFP*R7#wwbcr7R~FO zJ?x6!3i=qN+{N9+KSihvlXV|`#F1jDKIg07S`sUnDmM(P`VSzej{+cXGL}XE9fkS) za)Qcxpg?2{JOBn94&0^3V_O@4K&TDc@};&9axEv)duFz%h8CKvc@U46t=dH0mTfxq zlm)%gaKxJ$BpB<~7O|v=nFwI8ZD!e+pgN!e`Z90~2DtmK^k0Tuyb1H-XE&jOrU6Ke zW>nbJ=AMYt4lBA*55iH+nZVi~?-$F&@C{^EEF2wX=+>T{aEu_Dh_{tVOwJa?N3 z+DB8H<^D-f(_B4!Ez}d_l9Gw*dxM;FS~A=`tE5^r?wN2dO)IN1eI#J)= zr8&tI4{rjM!?z3lCFbTxu_;F|MZfDWr&?eQKi<^OqLj8)7>MW#wo|GDRb!(AVYA6E z*nsK43TdKI(S}5&%~{7GXAfVs-Wf>QqOMMOG;D1|m${&8jLYIue%Kx#(RuG^VWrWE zr~T}*jJi=YxD0EtzImj==4^FehS)vogYcEEYpVve64V0ydn@`aN_T5Y-;`g9ho;$$ zvb`{x$D8C@Sfmd_=%nnvhO^7&HU~n^<8+_GL~h-aC1keiL_|9eyYk1wGQwtCK>R7#b`fdeC=8ei|qIh?WN*e4v@A=D!qA%Hk4?BlZX>(JjKj5HZCog7- zOIkWxBsPMyhZp^tlDx!L!=y@Y??*}@EZ2!rcaVc1o5j5ZNzd2n-A$J_qaTG3c0apK zP?h9mtWDMyXeriAT1`vcP2B|mhg8{(e)SBOvuv-@BJw-4PC`;I6~H586lvdDDH3}i zV)0S}ZeO@=d|#&*MTjRsoHeu9F9o7-UC>^a&-?;XFa%CLc3e0T($#D5QJdcy@9p!iOzdK{iW)#5!PP$9TNqmzdx9?IDSE02+ui3I$MY$TCS2$JoU zYDe$N=%_t(9$hn?+rBh}b0^G9v}DuntOtgHLP;84(-}fWC2~D*7Da^OH7 zs!wqSPHb}t0NBMr@(&~V2NpX6hlM~{FrCSu6s_p19|0P{4uL>Bcx&7;k8LIUmMrLK z=t&4CJpjpraPCuJ{#aHDWMXFlP7#wZZ5>Qys4AGM288_*2H5w|;lxFTX81Q|#HY9U zyeob{8wmFZZ0GZKFlg~upg?EB|1{JRr#!S+`l+e64qEA}LK5lnY_|I;VdNAY?#dQR zwK%X#OoH`q)_gb_GJQxrKM(+4UyF*FA=uwAYzQ-|qCz#~aAD+wuV_1&ziVA~g)pSj zO^eTRRRA2PFJf0kHIfRV<0$V&GVwC?I8!FOdsgRWia;2{m_6RL(7PT5ghnId$?f6M z(ua@3Af<~D-nneuL^D3I#=jpX}@k>+Of5hhqh^^#bKJD(mb=*hjyPtW~ zQgf0vzpm}&U@*~*^AEM4SR$=Hd=}^*c|-;P*Nz9w;maq8AMD=5N(n$}aRw0*Ne>4g zqWqjxE|BQjs+t9wQ~|_*w_!{u^ll9T6ajA-hP=| zWnH8G-%=k1>fYyYlAl%^EM9TTN%tI%iUVpyS+iAKokqcg9+^1#q9Z=Quttif%Oz=b z>@Eu6Y4hH(%&{M?zRu925@9%|rH3@psu@y9X@!(!;Zf+8(=H)t?V z^=RbG{XCVui}@0AChf79O|8_tUUAMP4!KFODNX8Zu#Nyx=X$Cr{$+7YV9OEWb@jg- zpxfSI(aZI?qu^4T1_HMMHz1L)a9hwth#X|yX;cB-Xn>H65fC#291^<%{V#K%ybuQZ zWVU7iqMCkiwnZ)cLYcg3i>&wDD-AAq^~H-Xxh-Ku3f@2uQaXuj3^XL<8X9yYTcQ4X z5mRDLZu@MW0vIrZ{qOE9#Ju&)vFx!7E#?rYYS|+`?~N~%52YMH{53%3AWhU!z+i@e z@!GX*{fH>UO3mWGfjmJDUL*&$#`4v$nMJuIw_y-!=a5YZ!)!=$rsU4lv zlzVv6{r558zx)|^_@|aESmM)R|M^1}hpX5mdH(g(@vpCk?+)m}Fg@`7pK$5_yLwx`D=|Mil93K>jl%C_{GdE8(9|G)0Vg$p@qHe6>D|IP^b z51-)QKT~+yHHew>HXrr>`@jDEI#sy9j7Ld~?@y!ApSlo7uz{yvud#4}yB~M>6lKd&|0 zUxDx|pT2?3|MsbZ7t4tU=l=f%j=$Z(i2n;5|Gcwyj`XZbO#E@D*giZu4l?XnAlrHw z0~C(Cp)OYV$4pu8?&H$qdgpg`@6z56ZSoo!zTv9XD`1~xlyaH9?A?7c?^)iR-eK*qxitOqR#)qW)xN;lF`FL2 zx=CqdY5h2peH1q3vb?)YNwa%_nf>tDu)UvoTszc?Wz?)U#v^-wV9^`>MLE!AB#?MN zE9^a6?|+xu{?}O(aoyDR`~I`$BWsxe-21P)`=f{_gp{HsK&D_SIig>i3~tNXYtADB zdtHIN)fU(BJy7MQL|+~N6;?bCnvcZ`$c#!20`Nh<(9mJPPw@=|4iM?#7$-F)gPJd$ zSq26*$L*|it<`}KrWNx&)&=&GZ|{ZtLHi{In7-7NRu9~=2U6kLskbi`ri9LazUD!M zAPta1LK3L=d<-K{p?GQ4CS~UdOV4(?*>_#%1(%VnD3z(V{JL&kB&A(#+4+x-oiL4N zuP@tE=alPr7rojs*%NbF(&rrrmXsgT2%-4h)wIeEajjQ%1lcCbz^>-?G=sSl zb(-@HiqaWIO1|6r_BE_$EW6(5Di}=e+vp~{u7yeojhH#17Hj-d)~SLBk$tZvrdj4k zB1YCNgxy!u7i&msb%ujP?p)N(G1~vc5*s@rbZ&n>c+7M3)E|O2Hw}aOTi&)5mG=@v 
ztyZCT?<1o=Wu#?OF0oRXn6#HqY$!@U%W&pPnArtL!rY>9NzOy?z9MJ84x!f>-azay z88U8#?BE7P`f>3-cgE*JDfi?eRB7GQC6FOI-@(oC7`5BlUE_!ns_~o;>@2U6i$HN% zQ4Pr@^E3dFv6LJ-SU#d) z%7}SYin|5=TWpXPSa3D1cbq|02m$`F1Pn(Ijks=YixhTDG9z~S>S9h#Wht~WPJ)hM zA~K-%rH$WCg4O9`L-6&AydZi)Xy{b)I?g7na{On%#B8=kHvc879Ya|+lH-0IW}Iw= zS{Bsx11P`IU_bIQPRW87`7LxiC$D={x@W2ushF{iCeZCqiQ8k(Xj|VC&qWD(AWTHk z_6zt8SgNH#A zDp9y%W7l#b6l2t+HL{vb&TR@M43!{`Zv<(K^l@;0Er?0>EoG zHh>u34H>Ov16&hCCr{bL*|h$~JqO4zLXY&PtVidA3h%c2a;_<_in3{mp! z0Ta}{b{5D5IuEDJCIhCZLjw-ARWloi36^5<1O}6m@Huid$1{#9Qw|{1N+>^5j|c?5 zYrAu)oIwk@vtabj4<@6*1;sO(U0w;)bBmcffP`jsueC=U{R;B%Q?LCA)RB$A+}$AE zS(fB=qShi#C+<#P21KFAD)6$-0$}-B2s60F8($?V8uaB^HvkY}IqLd)h-;a?!)CJp z7%c?E35`ab3$nT);eDC8-5>~@nC%-uiC0+Htco?$=$ZQQQ2Od}y1akrk$ZdZ(ByT< zGw-5h2Jr!KKz8Zd>;)|C2-L%jgqfFw?qxoBfpE=`zCCamcHZKjcY<9YU<67^2YTc_ z{VaiRHF2)V`t(ZTt|Ch~mRE3Kkg;0{jHFWl?y4csrf|6oIR!=$rD?Bl!EleDURUcu z4m?ET_`(rC1s)gVYvZN z(KidQ{0)J^IB))>J&2K72_lFu5=E3jW@Vm{H6fnQ#C3oqzzm{&Z;Y?@MOyit`lfKzzYbgMjm@@?dds`sf9)SX~@Y1dmfNGges&1D4zL_|`-e?LyS!hU_IM8Ex z?0xo0@AH{~b7?wZ4G7YQX1;Z{3Sh`AQlY&i(X{}a9o>{9vV)e2=6o||E0Jblla6>{ zny=)k%@K`vQOKUtqT zM`dP@ED0n>@Rq@By8L{Sjnba>s81{}8ZB*6$x%JHQrZ4&Z^&li;neP4+IrTrY!PtW z7Ol7KlEdV~GLqA#un$W_%}z|gjSIRhi-Bo<`#xq8-geVy;x3o6h3YR1+>b9my?pte zYUyPG0UF~|A9Aj2jR;=(@I7|TRp^S#RhQQzl71f^7KTeceR`Z#=&@nmlgBYRZM7r? z$1UYx$9f5Q3tQa86gkAtac$++>Be$JtS?psMe zR@dR%YnSX-FH9j`cPP|3l0xcCFf`IbCM{^27WUyR?0Ij(l#4|E@w0Q3?Vd+ZDzYQJ z3XYe`J*v2=OsyOjre1jm*DwkitG7Ha?UFG+R+e!l`HJ10d$NMf^Q|8*DdD6|MKPTC zu+7aIDew^yy4CedmLn(aLL}Q~iqn+jJan=Jd;Djj2hw{ws*iAoAE5cGFDi*FpY~eo zb-WTsFVmh z%V%GUWkwn@!ZJY!{p|TzEx1Z{cpZ!HGNGc+dyHbDQ!Oq4aKfa(XPVEWU{8jScZPZU)$Fr8b zFLWN7w@jzGnQlzcKB!kFKPFDl(n(zQGxFEwSFodE?XITghbP7wmjg{^kAt>*Yb&rsp z5+aisPq^&!yX9|Q&v@vXU4RIcEA;?zH!iMk+cH;+b8u0IVt!DF?hJIw;A&xbv6~|J z#N|CL8^@jHvGeMTIZH&_y3&q!q_3Wg zsV%ms-6>eCRxTTzkC>ctY1lA>w*n8o#0V#gWTk={4nlCtBOkXsX<_A{QkUlB0?ZLQ z72iz35o0TZ!rN>6g>QzDm_t;jz%CAN?QTaqU$izCayG4KtBEcF)(E1BRJ6xtT7fN6 zEek8zlCo*VOx4EuBX&x-V$QLy#MOc<8n*~P4_nHBe_U>PyLOy%UN6Tq_p8(3PPGT$ z?aZ^ob_YE(@==o(=94Q(f$`68j2mO7Oz6H}wPgLinr6DR8vJHR@k(Wk!K`$RnG*L_ z{>eQ!p&;ym3re0Mf?z3eGp-<==h|*sU%VF3^ATRz|A>x?;oKMYJ&C|~Y>r)hrx3gE z9vH*!g}qf5VCKSQyq$^bI@&$);SG9X94R;+qyALQNWM&YjIZ}r^Xi~c{1cTawW)mr z53@ywZE|VZDB@CE@cQx6~hYm`cv#ky8tozT(Ne{>L=tj4um85(&`R_scXC-$3;1P(!q9>IwM>uLQZd7EamowgsR{EBu#~djgZcj=S%?-AxrW8X9J#_ zY4M!$Qifbrw|vu$b$GbV3XATgZdGlWSQ=c`{K_q?3)jCc)zUS@!dD}Pi1Q^}YPrPf zU^JqkDn0h>`pm5zDgz8cU{MD8GvIJ9aHOWWY|x$yu05{;maZ z)F^FG$%Ga7RAqy+`bU~*r<=|})1$=2Dnn|&V|tt@=ls|u5H{Y*gZNh^%s2nw^c)_b zoh(VK6P@#EN_CqJmyfnQg7UTR!HVwfjPB4#{I2|FR^FO?%M|Ompz2AZhimqlxm@Z`ecyveGa5F<@LCd4+kJgXk=ctXI3o$L~}h7s{Mi5F(bt+bh}7-;!1pIn}Z zyjb8jy72OT>Y9_y3-|c4^@-D{*Hb=Ra}@L&UiZ45uHM{Zf>xXeo+QhC&u81=NrEw3B_5#Z$+Oy)ICfF^PfIBJ-K{` z?S1BARL1a*Pl;FM-3ReYuAzkW=m*cV*pm5s{Ns?O0r^ulX&(!OqR;%*k^9?$tilXb zZE(?kd>N5{IH23oS@bkDdE{eah58HHB05w?(>oZ`frizv7X1#(t4~8LL2^JpAhG=@ zT`bGRJB6xundx||h3HmTZryoiJTc%JlfzQ1H?4b6*vLsg=c2`6^>CchjuS3V>r60P zhcd8HX;o9X-hPbk9?N#2R^pp<8iUin4!9oO_z;V$06h-t#F&cg%+jJ(#>8i_w zBPZo+$^X45U(Y7UUYN5=&X}dIYBT!DU?_iT0e8L-%PphL%Dyc5O9hqOuOoqh_pE4Q zr!*kHoVsXedsC+i1NfO)99*PpYlmHnZTSUVgQgbF4oX?b^nF~s;vVc$wf`F3v!#Hu z(@a`1RK2a!l~8=N`;N}qak`2dvFBLT;Eax&s}u5zZx1A?xe8m<_K;Fo4;PVxJiUZ3TReRdaG^6hNye%v@v%$MAoqZ;*+$bYIZA(L}H-KNViKR+Qu zsQ~LU?i%X4GziMe)6JWchNpCM+jpEEAYfpVHXHEXj@emM;b)vHGH3+Z<6Sf zxj5(MhFhx1!n2T`DSz0y2Hu|N{eVpWXft1+~6 zn)V?80~UihAZ#*)+@wwheG|6fuN3bgrQ}{;O>HSbb!{z~O`DUgas}XwLpo^&t3e*t zth7~aBdo1Levj6Y13dV5ar^tcN?a7T2{+{)QyrL%``quV39iqKqih zKq3N4dUW;$-t|y0?q6RYtW~TP@Jms!{C0jFzn60oYvt+m!lEu+y_}D_oO!y>AZ!5t 
z%cNF>Qq8+W^tcDTVS{dl$!hXm>X9iC-Nya`v?dU6N%qK;9O;Z5D%VwLL!cVuoQ zkzecF6)}o6Fo-GMacfvbH@cfxB81zD`d_MDaj}7K8i|Y50oHNM-P51G96j2|Z)s<= zntwa%T)x3&U3}ss%lVp_>pX%MCO&e#j_p|sp26jwsRHTF)<#bZ#4ewk8}wl`D(3fm zD{$|$_>dT96{G25*Usy&Y3a#j0#S@|H#$Sotq%&)8DkBVCP?e{v5(j1pP1u=1$I#9l|w_07>z0HJeLJH?+!`;fFd1p zxN)NKNkV2B{yavu@DpcC(@E|%{RQKTjL*gje-wg)a_~UR7a{I|pTAt^=aG*!;cc9O zpVcKqNHZY>L+hF(;5mfdL(j$^E0#pU?*6*(I{ETiy$;f3HkA-mDc)b}NZNfRh7cx_ zl31xMOdK;MryN{eUx(p$D&F*92XM;ym2!xWoFkXKbJh@;r24Nor1l;Mq2zpa*jn)` zu_=$8GjeqN*kqv+FE=v|ddW{n%Z`xYD`5Apzv1~0_1OWMUe)k@PS}ayG}yI2EY)ep zi#eY9L9W&aa58RX#VOA5&hR-+kZ3fn8b$_HedQbh**$ z0o>q-;lcAU4-IQ>-{m^G_F45HZy8SDR~NQoRADWl7XMI#TAA=BQYNify!d5uOlGo> z**6xE%6x9UT7hInr{eXrouYgf`?TVG{?y_{7KI@UAU|_;gfNN<$Rrho4W9`%tna6J zxU@Fj;GNa+xY(;8=jG;y{CK60u-_J8A6SI2N9i;#(&D_0tU>@`y~Af`SI@S#R>x>% z$u(@K_GH?e0ajO%)1p|R@Yb`jy0NZe)AX`M8QqiWV;k#La4MBF)g%QQ8!3FHK{+bsNxCk|7$B?TxvS$&kuX3lWAIx$p>ECT~+IF?jJV{q-DU!>Rgd1P3aJuW=TY@=2 z_ru5Sv;0oWJt99#5#~Nz%C%Xbc!_9&rEHuZc6^fT+S|O@D1{)`as+3RbiR2zDBH-2 znt$#4ShsFNcZSQz=nCc`e&(l=*lF0{8RM9bU~=<*woAxo_}jIdAhN)I;Eq9zYs>9y z`Ma&}y^Oc-uJv37C712ABdBW*qXec)HFHE8%iS=JAxR5ehrQ7gMQ?W1s;V(_CY!SE z;xIKhNq1TH9*)KzFPP{45y$HO0$GYHg8e#8GPKV@B2HfD&d=9TZFlD+s>L*+6pL4< z>YWR_Q;v9wn3{iWbkif+;QAO^&_?SN= zG6!gs*$+IX zpI5;=M~gm&C36QrMLXgnwTgvm0aiP|CuK3F!^xxh=Z(s5cKM3~kGlSA{Xs$SsP&C_ ze~X8TweAygi>2MMzRsI1%1Z+&8-c7W!EZN_vGLg z^2ToU;B9ZakIxy5ewB`dE2K7wRm7# zCNB!JNKs0znd%Z-_5!g)Z8Z7gnFv@*+|h{IR>eEt_(E80=&la#KInXd&+DId9jp@d ze(=`IYJB~wc0|+85~9p*p*)7CzW|51E#iHNU~^W0V@yeQz(l!ccn^N>9p0m!^E^34_9wtvD2_|xuvu2@@}fI%IgEkq2e|5KN&BY zE#vEuim`CR)Qc9Mc>VI}&(-JGYVj!1O_q)&$2?=1GdQBlN?`f)B_JaohOg!4(IgzE z+hb|`JjBvbu5w!vM7EP_#;WFj7G{jGf1$Lj- z7PBt#qXOI4mDy(^v&>VK{8tp;9I!=Nw-{S!_=F1#V>Qsj9`byZ@W~sUPm5pT%B`A8 zCk7lJkAb;!LaBKn>EZv!-gm||wQc(gd!~gf^9q{#!$t%yUt**jWAY52!Q z#BR#$0eN?u2I}=88VM)i3Dy!H8mp4Ifq*B|9?s3KvaUZ^Jm#0uess^3`V?l{#I5N? 
zxVSGVvSzEpB`8Vp*_q93Jt(Vl-FYUao`YqwT@~(LPcr5Ql7$B*3{bF!W_|-G>vk zi**xU5By1v)u-8B&4MrB74q1a;U&FSV6i8o>K(k96|sz|kPncZ1rRbu6)pl8LY>V* z?`Ojb4)xUHISLdAY&MY^1F4pV^slLo?c=SbJBt^!)puJ+p$*B^&ix7y1E^bgg_4fE zlcXGueJXcv!(uL1a_oiYoa5li#-;|*qd(u`2QLOIwgftX9C=#8MlNitQp zYcT%^%G+Grl60nRI)KX)JOQf2LEhZ8hzf@GN+wvJ%B~cujn89i;j%Y>VE87`V@2<$ z8EMx&Kvfor9$r7szB?1RZRsINx-zulvrT;0f`}`#O}y`c#~w;Nq*>2sX|o$l_(_>9 zdvhdh)pB?rU)`kt!L(G!nNj`8(nC_yf(x})i5+w z0Ui3h3)>VS#I{WBJS`l~Zn`dAM9!s1`J`nGmuE4)+I2DSmJI=Kyiv1i_Kn|`a#y$8pYJzEnYwGqP*Qtty{?L7iMFmFySVN6&uqY zN`Dn8j~75`?F?-FAu}M$lGgdi+gO<;EuCqQ7P6OE*1pG?n zs&v2usIYv{cn|GE`&?c9>Z7WljjeV?o?A6C^tm~fCJ^%nI86%>S|(FS*4QLdU+5-V zcE+=hg*$ButU*je?0#K!F5j5dA^K3>G(p4aS1XsO2Q&EMqN}3;LlWs$vA;For?8u1 zlFpIvDF^1Eo?aJR8P26!d*Vm?w10(07w) zdMU(Q?mk@evF$}R<;ScrTn{8&X8^Q17cJISfqun9i4f1ER`Wso?D6{7oNrAr5%vu^ z#JAP99>br6%7;O=f>FmNXtOGXEzy=K*+R}jAEc!w7^J2W6X7*6{1#6Zj7g1f%x3ap zfDY%=xSwRo{D$-r=*6B{9tTQZlDMD_;v*-K{&hMupP{VR$z_Y}b9u_lcifxP{vq($hwBq2lwFv?k>TbK2t^tdcyY*bo}%(=`1Y0*z_z zn2b!Fh>F;m`Hgtu;`o|!IlbJS=0O&L@Jcw;O-NF)qGuzl4aCpO|iz9iaU348kR6CeP3hqo((Db zta>^R;{tAW?{UaKl5#Y1tIDTL)HD6d{O9a{?@m5R+6PcnAN+#9Xl(0^>Q)OJ=3>= zCz2){fTqSR;`$fgW5_}sNfyx4{xLoey1c(=oPgyhD8Z+G0$Jz@Q2}d^ZrRMXr+DZf zzuiNhq=H;9q^87K4+!85=8FsLK2aX>bL-+34;l&^qk}tktGD8y^mF$T^E)HP*OclO zfnvottp-dT&L5+J=i11yONDk?c?b<+deFYj%fWXi?_0nxj(5`uHHL@% zG=qQ3+XAqz-i{TKL})N+Fo&Mud519bG^_WvAzyE1iv;sDz0=ZgWhK}3Ew!+tT%SBZ zr&AcWkT(7(jF0Oh*5b~W)Nvvfmkp5w{V&G_hdk=eTu?UDNIQZNU~YjJ=nMcomPLz& zIKOnbPITL+?v?BZX#=9Z};q=Yh1DB8zQ*n#c0orTo-{PtlYro8s%B zC2$MqF{Ev3e^Fa*b6M<1Kl0w(iSqebFoP3=NquK#OZ0DUzV0U?2`DD3iYuZCgt>VwL}npI9@zTxB=x{b+*xk-&z{= z_a~;O_IMp$QXc~yB1%K|SU%xa%?pj)mIIa64f*y=B#3{xD4QOg1k4J)FlaM`D=IYUoUqTKnF3Ym z(JaVoKfRLna{huwRFT;Eg>^s6Q}b~Fh0J@=$#jsLpOCv%*`%qmu!)Ywp4(qrmwyCrhSz& z;!mF`r3p&TOW#Yqih&~M6qdEAwk+&D zRL(N;vzz?T7*Y);>l|Kk1?JVaYcx47YF4B-9n1`sx}I9H5QjL*Ng}`j{2*{kz@}XA zjylHq%>2DycArRBk+XG^lD2XEy6{3-45}^Fy)KA}!8-}n$&bArVepMl6dyk?o;-=I z{uG=H06XlLSMrP=cW^vSGg49-K#n^IoVq9PZK03dV^a{&jR(bWCL3 zR1%v(kzL|aI(|#!le;TG3+0^I$F>kPBB<8}-CqU_ZjLJs=F-tEJ!G9z|DJUx%X}!S zl4ENienF<2Esgq)1JCxmNa*g+&T#um^1J@E`kXjZjr7MI8JqLkBR$#|y4_1?jvTC) zT&T=0Yh?|xEwYK%)zSWCN%K%=mYdmN;p}kM7i6I{+)uew{PRlQ30guP4b%a-Cl&Q`xDk# zz@j@Xx-r?pDo?*Jm_xVng-bUZItsQ4?Wa}EO&!0Ny*b-a#|)D3A5!nCgE~%PS zL|-w}@mr*t*L5n#N|XQ)&TDBbecF+C)qa3ai4UdQ?mS89XFXhG(NRmyDj|0KLsCP4 zr-ajtD(K!sOt5n)G=#PQkJk&~MmMr3K9x6bXvOSDSZLedj!F$Wy9^pIrzGdfE$hnA zH4|CHiOXiO=YtDoyg-2@rlU+Sq_*$CUzq`A5VQXooC#h_^g1kDYB#F~$y3By-?I)a z$#x~a%@Wk^mNJ}$i#=_;=FCJ=%h~%=1ua>SWG;lSW;sn`H7Wh+rx!kRX89&_nA9$_ zWR!uWK1r=(dQ$o%^Mpf2ETg_=;WQRvUgzYbFUn1~F$jK&~BEs<_y@#t%h%h5kWJ(e>p%0ZEKdWw7^&BtVBTu;x)HhX!8#XStDdOX($4+oxto*{OeRTtEG zO^U+9ifG=x-g&*OGs7ZWDaZG)r$j%#bZ6pOQI&`Prj-s=IQ3xR7o}}Q#iC(Bf5&5+ zi$0RKf@7LQr%Lj?Zb|6+31+3|Ku@Q&4x3lT+eqZ6r@A8t1h>Lg$z_rIZfOf$JNyXb zZkoHJ#mSUS>7$hbeK5Jzt$RAfR48ZpfoUB~)O-m=B>A*KgJ`X$%j#P)cXqaHBU*2` z$I;g}{yz2kC(Mr2u~QSsZsb7cK#rgko7OFIRZeRB;TzCO`^rTc`DdrhwZ`xV)hY;`f>>3<)8)62gA$SE_FmBTuDB%JcLW!?kI z40H1gN+WA#0yc?4}eRL^66CQObXTr zn2-OkJX)+5K6Ggd^LC>Grlk_L(+$JT9=s#Iprn#%Rlb%l;Cv>dJ{B94U3oxK#{AOO zQscfph2C^o=)D^*SGNO>k+NuI$|Lh*X{w3gDI?LO_gpvs*8KW1`sVzNXS;OF>gwkTKnGzImEoLmDC#o*47ZOZKCebOJn7|D?^xxwRpP|YOs{~BMU5vM!_ao*!;5fH ziGq+$75IY!yQis}=5DBpBJVXrLJLFq{cl9U<&VocCO^Mg>p!|&G~ovk^0Y@=Eg$oY z*|yWSe~nk(-@m^c11Xv*h4dKiixpY;iA_^&-WtevpUgI~j~{5H!CO$4w)0J!ZZGPL z2I8?$!`=FoXAYwxV%nkm4ThUm))y8X`{k8l&vMj<)x5;PQlQ{-veEK8<+W83T$dR zs520OW*-yAi3LQdirE zh=?Ln&D!>VI#U03H-G;Hz9;BGxBkoh=Uq4cq+ipdolsodOBPZQxX#YbZnrp4x0QyT()<&A>Z|IP3I)6e2BKH^((VwlqD4i$uK?2oSgm#>-I9i>1~S5VFTHsb%+VM;lk13%@JExv&_|LJG7j(~~H=*7_bKY1vB 
zI85Iuu=MdF3nqo<|I^P(r~)fnV4FMP*uOt}{^VJL1}BIly$#lX-EI1lPx$o~q+U{i znXsZSb@DH>%fA|b^h+>3r*b0t0A|8S>GQLHx)%Q3fAF6K z)3eEtCokyN5&u8WEoyS$uW~)g{Qokfe;B&?S#o*3#=w;6-wSg8@aBK~c1Arg6J9P; zi2lXz{MV!Y-$3=(&*;A%^8W^^f04iZ(;QF3?q$5$SReG+5QUG_sFkOpz5@mX8&m|%R_18GUUo(I|x@qv5xdVCQ9OO26`4<=d z<0kn>7c(YoSTMfZMT&h_fBzHzk3I%m>|4tY5}&(Ebfo`$d=uouP>Fv1iN6N%{l%l>mj{Vm zkUhSY?LQvhZ8k8zAB#@}PyNZX`|H{C-_JDRE(B!LZ#~fx|M~bt!1$DGB66kw;z9iN z1{0(Pazvv+PI7JFfBNJ9<2w?5oCf1_DE&zNpD*RNxn^6>rKX{};uT zcX=R!#b!jF`)v*;2vRssHQEE?@?1_1j$fisZAo&Yn?O%+5rAH1T|mD}69)Up*VOOz z*=xr^$md`$ryexxIsvqAs>w0*wvxkQf5DB`2x;|eEr74r&0$yg<93X3B-Z|HJfBIx z?pPqt;MRccu=z36J+o#f(C4j~2k_b;3RFQH-hPS0;~(&vpwid(!(^#h^I}y-jVq7S zJgwPM*IV@DQovn*AK7g=oEYm2=XWvt8<+1L#z2T`3MPp{bxR%m2lI;`>Mrgkpyt5^ zx~02Mk?!##!mG=wC9g<~KN97I{`DG-ek0Yfjrevu_g?RrUzc+N$7vW0Rty|`8#uO& zm8z1I<3vjUHM|Z5>_nCT?~+Qo5*U;7!|pSQx#cvp5R2w*pQmDt@nvg|v>Z`jAwIq6 z?ShLH_bl86coUh(@1 z9h+?y%Iqe>DxLhnu6Eq2 z@$TjB!dv~Ip;Kn)GPBC+`QaF|Ybh))I_+6$t*xx-N**Kq> zGTHU5<>BE)plHx%if4o1g(#0!*ic%Y-&q-jm(a8N8ido++WDg&pp9sh+UkO;03Mdoba@)UDSK zO{(0Yu&eC9?@3u*TS^y{W@gn9s)`uzTi!VS+iLlPY|vbpoIiE^G}ZmJ@Zej(&I>TJ z6X;%Kf4==mz}mHBD4e6110;L_J+I6Fmryp9?4Ceu&)8IUAr=5934k%%GW)`jksNEk z^q=3hPjQ?cDrr~RD-%Pa>6XSRo5j}+qjsl@FSzPe$`DJh$#L^=^$?fsQ*7q;99*}R zDJ5DXQmh~Mw~rf05WS;zTR_Ng-xXk;@$5T!-qs_r(|Myy-Qo0x(zAowZ0m&~oA@3BDIv4sjdsPM54RKt<(ZHZfLEn|XrSnsCr5t> zOqfeR=Uw0o*%(+h!M6I<>7`|#Lr-IDqsrk%nq&#k(B&#@&$D5%U;Dx4=h!rJ0U#R7 zSJr?JgELw3rgU$uL3V{(R7nXF!X(s87GhWey55NTOh|u*zWq14C4lj8Cdk$b#T^(oM&4E44@dGBy8=ctp)FYfFDQocHt123+9lgN*Z~i= zCQGXJVF62EHxR`n;_!*AWUrpOLq;?1dbB1;|BxE)Gs7^amID3g555(^<_&ReFU@3c z@E8i?wf2h}EA%S^a_eVn4uA`_47rU5WrOJ6_6uXvdNHcgTvS2foK%9raoRagSJM*D zj>Fb}9bC?7F;P&v3Uv4Pigx?{mZi2{r69CAw}1M@aEQRJPWMYdpeBvHc_jDGJ#wE* zZWvpe3!bG(`An&_U^+b~ydZK8THE>TbK~ELdhev4HjhDE_bN3B-x|F9C5ClKKC8W- ztttjGbSpSHoNWT|oy4iP4w_T=w(|L|Cq>q1$B1C|`kef!du}f`*9J)zeU5Muy=@d) zcXVjNdGp>a<*JHtA~MoPUMGf5C%Ppb;i!v+>fVrlYX`Z>rB}=gt5v_1{a9&K+zP7` z{*%@wP{ZzEwT@MGl8>sHgT2VQkIxlYA*D28p}v1ZghrBb0J7L`Bwpqfwm9Yadl@Zk zx}f*o_PN)J&F{uTb8~agjSzsq$Op%b`JR?VAVy{Ll3LJ?yh+eV?%ACp4FRx)ve0K$ zJ6t;Z!Mehz1l)c+_1rJb^6L-i7@Qw#+o=dG_X4y>^)Qd<5~c0WpuV2}ZJ4$BG1OQyHG?1uzgb(_a;N$|nG2Y6PgLaHj>J_(AAT zsGyW6cZtgkEc-FM@sbyzab5mj_ZT|!q-ym1;Vn#6WB+#=Fh)GbCeBc32vBZ}0gjE4 zege6#PXHz}w;I<0TDvru`F_8-rh&mys#44p=z31j^B7!7Rg?PwuxP^i=LpTB84M;C zLj+X-12+qhDSVeEGw@A{AJ4+WulkilU1vboXoCR6PiOSlb!gRL7pf00_Mu z;d-07sorqM+wqg_3do{Cc!`42z&hCZUr!Uv5##wSI{D_KPSRfavoH3^SeP>bORO>S zVb`4PE4G?gJQoaK4grCOSnNx4R=||BFB*N#bYeYT+%xq!C4Xot{8XZDkqy`z3)3h2 zD;Nc>wH!+9L8PB6Rwr|gip@oA2s1g+EC|76Fk={kZy+POahtQf_$8T{{${iJtR*#u zG8K%3xSJ16~*j9oJzc+`0)WbhCgf zABcOL#;d1Sj2!#we@!oC{WN^*$>x33Ni49(W4k=n;od_Qf$0}Pzt8|JN!{G&1nhAM z5Y!A0qW7LWZ>aVXwIw^a1Civ(y#r#UD}#ZXp;kq+`{7QD4`FzJ$yHEv(!M)WJCP!G zDp7#@DEwz0=x^SzluSM3S_8UTRm_f!cN3u#AiOglEVb?gMK||>Zfp07>+6Hp-@9Sw zmw>6G0}zojDg42>Sqv1auU?*N*pTQvG3eBxbd470Xj>lQY!vC-qhF+E%M@qj=4-o0 zeZ>X^s2P%_gQx?zyRc-R4ai|ULnvIwywTahYmx|tpLz9M`UayUl6~dnbC*YjjP_2N zER727F`x0;m8=QSFH+863yBbWGB(A{;p%AvA5s1(>HL#7^l-Dilh1Y7DrE_`d_gFz zbtQC9>Gs-Q)4BhaKbaH5G)G>dzAr}&y_Mhp>&=3EsrH17a#!9n2`=INHct)HouM#*JrnB44F%{;L@U2*s>%`Y6w&=rrLp|He(6WRhmSggthXn+6vOc z0ZUm47l2cAhE&S{(*@&GmpQJc&L%4{Ji*DmL=)WdlKOXl)dJ8oEHYQ`eJBZ0=Xm7; z3}bHNsGz(llZ942+^>lI7@3!h-Kk1UnopiwAhGJlBYQxKcTVPopmG?($qp=(d8On+NvX> z4WuQR($2+FawD}xZ#$K-e=<|TV|)&FKKWAjcz1`q0B%DU%5!uQ`LYz~Zqq4)w1Q|Y zn6_zZDSDQyS-`8a>kd{)1HVQ63Wy5P+I61`Is2X7&bqaR1eX_U4D5`B(+NG^mAV#a z9-8RgiO(wRhrW75MRn09c&_&|3QHUW;z*a{1Qt{lWhpIow-#ggf*DRle(5+!GDdH7 z{uk$%9)b@g zSFEZTV2hn3n<*$W7cjk;;J3dK_ps|;mWcI14F^QXC}%Z#x`igwp~Y)P 
z9sNGFVt!iM$3To!#5KsH1K*E%Y@GWsjkv9vWV3Y+iP>I%us^su{Dint!lj2-!Zlys zWZMX82``R8Jf?2Ss8pPWfh<#N^6aesNVY__uvyCtVSY{QS7j9()t&W!ogKh90eCR9 zOS%5|-&JHLZYQWlMq@`OJq%q)P7o!IOk-a$%6!;+B?bg%XUg*!8B6k^tdi7)mvQ6m zzmLxEgq^SDZqzV;m?%v7J@s+a1sOT)L=x16kf0}hQ=nWUZzYCXzK9n?N~rfC%)qKj zeuT35*(G9&{&rR+%eqAh4e%(XVUz^>Zo;0ZcU^`Ts?$*cYm}MH7M);{rJ)2e1nc=- zaUC4#?pHnRvV~Ezl?Sm(N(gi4r9z(19xdLOU3AX&7DOqL(oq@6h>WciQg@mPCWl>r z=1SK0#8raS&uFyVrp^F<{HshtKP&w~{?%D<_PWHKU9^Uu&Cs`8DZf6EfsDe#igJWc8+f4Esou|W55U*%tI;9)d)%uv@dK# z4e0KuUXo^rebKM{*wbzLA@X!QORaetsO6cxEW;izc6crIc50 z<%%9$1tvbJ>+?BJM;GzU*cP!fvEeJHJo>eu?SuzsDf_+;rzT?D5oierBZlO|G%8lR zYz2`q(}hF^yO3TSFD%Btd9|=o%d7L%1%v=wOOR-RuqadvQBhs8zN|A_zc*#z0Q_;tbh~vc_5SFY2*=g-zly7+>Dpo zvy93&m4oH?aE(FxvWi9ay9Gw2j8Yew5?VXR%5kb9n!`od%-El|A4k5s!f*Wj-Nd>c zCjCOb0O~M0Y5Adt~Qc#KuP+6e&7<@=1dB ziypxB9X4j#l-K756?~g35BHZ1pTL(JT$GcsA)@k5>lcRJJWy)z{HE#FNzF?uuVgE_ z1+MY&bL;~7zu?>MrpGV=)kQ!$a3C`cgtIe6bD^%Bv^bMFv0YG%gG@VMm9rLShAVxv zBA~iGpDcDux4jMkOu7%l$Tg0)bbRxDK4d4A7}tZ~-8CRdL?8Z+B9NSeXWF zXGTokch%{MwJQ={#d$3SKtV_nX4ZMVeb*weYxs%}FjcQGh=|At934nJYdG zCI=PzERM~IiyCxg!-LW3ESWuNJCIM#zAJ8-`#2zdb;~U_52F6PkizwPT$LIEKfI# z*jc|{(&R6cbq3Wj?>j(WJnBhXyy+!Z-m~XnyEc8B)2n;&46C*0XiFC+P;9=Xl9mF- zG*`52?jsx6?X9i8?>=s#@=+K$KdBP7jrBxln@n=_P`7-%S>uH@6>kjT+)7JqQQ>fO zC7p{O4BnZ3M9f6^6sAh0pm)fW%)d$R>|f`BDVBLy|I!3+Q49)BRWan>)alb41VAgd`cxF<2ydXP6j!NvorG^((uf_q@L3mbR# zO6Rb%K+d!#a=;n7PQph#$pHUtrP1;k0zW3$gt@L?vyesCU_-5Lo{FdJ+zF1%bZB6S z!E;2+I441$On9ibPupyORkdz_+^9X`B2i}7M%mHR|krRpqN zBw1;l8gM_fkx$kw81>&~rH+dXd|PfaSuNW+)wq(#11GB9Z}$7Hb2A&$3|Wa7{kHU0 z+TXofmu>yqMBzIE47M{i01?^_ptEuBDws|L2btuV>BghzrG;aGdqm%?rjp$~xh}$~ z`{i6a2(;(k5TDRbgxh{TO*|6j{@#h?w7KJvxLjGVD5$9SF}p{SM7796@Cl1qQR_ml zFlJNbn)#^HQ-n^Rg|0Qj4>$7dX6NXv`9lhp_q}7c-N(i@SohZTIDeX#q>%!I?p0*Z z%?GqYgqjgBJjNeGUu>n!?m|E3IluluAU^AUL@6gLk1LTBz~AjuC7$JdoLsd8N4IvT zao=JojLbbt(m{|$5A`v}?G(pzR-`5Mj*^c8xSwYDaTe#~lSSl$nU^LwZe}e3f73^pw>{KXpB@!nlwDk&i>7df zv}W{)B-&&?tt*qU>WgkxakjavA3@<6jS#YXL$xt`MRD9Y+t^oPU&(MD*7cZDK3U0E z7JXa+2nk`wF6_@pa>9z87H+|<~7f#dZ zJVy)Ng-&J?k;@$|&{r^4DFk?Ol0Cv%s3?n#eR#<90aR_SFl-G6<35 zf!U(Ck0Zq)o6@PHmc0#O)54If0(~BbB-48^i8=Wr3cHT2Tk{dOj-hUNec|@7A@;`j ze~C2}F*=lSU8;h!WkU96PterualU_6ws05mW2v*A)}WuTFgbAV!Zd=?dmS&FDqND? 
zBy`747x{7ewLy$l_e9?$(Ei$MdcG&p9*4gzD7e3uNu1iTIXUdNzNAgpJbqh=$F-E0 z?oi-7rzWIYJmdSygw@Bu4TJw`Hqe)+STWFZl8$C&x;%Qxx98E&_nLre$I_kG)m}yi zthdg&%#%&{Z5{wKB;ti)n59(>V!7-g4uZrmpx_JHMqf6C1 zzcwfSpbkxLsLpYpJFIkYMsO?s-}b6`Dg{JaTV0^MUl87*-hmDWO0SESfokuXxN|zu zsgR3uG2F2Qt8JqMb6sR#u1Q;!HfBCaQC}5{1q36yk<1ig)$2NOY|>kJc;0Fo)(b0< zo*F3CMe7tWzyiONzY+~Uf94+SpdCF**e*zANbmWI&qlr7>WLrlZqIFkGc*DH+>7LD zf?@lNTpi$>)6~GCYPdfXQCSO=-)WDjJ_gy$$hh!}-h2-p(anp|pUHH0PTU}CSuxnu zaq*zEltOsiwGhsazoH+)e<;^I&hK)mnBmJ6*96KHWssLTAn$0qn?BkNQ@d48=D1&; z%Cmls-3GK~zjIkKm4^-wH(dTMF_v}h+VlXZiE zm-B6!DII)DQ~BZvje~Z|Cdm5ivlW~Ve1SzAkKV1U36PMPG0gG`7-rQEiNL&R57e%We8^+>c0X6KQs%PMi};QC_HspVm>O$MrU=k+d%!=3rD z*1Ad-?Fk+2TbF#K%b~fQg&8t|(7`Lk`dtB=I+D8zuj9N2ut0Xs%V)IU?XqwddLE&@ zuiE8sFvSzBVyoirO0{mWE5BecBU<>wtIC_SSJup(U|B-a)69$EeK6e`bn4Ej>~}C^ zKiG33VwHxgEM3lKsD_f>DRzk9Dy$avu0^=>2fKWWkU%{D>^6jH!U(~NJ0iAp&3WVpLYZU zKLJx!9nFvCA7YnPnpdnGeV96#{-IDHjtz?)1+I-3Ph^NDR{PI3QP=^R{e~r(7n(aa zm5~K=bqgevG&KrIuF2TFjM;rwqO0PizD6JS`iYV{?*ZCRuXf%}37w=RP(y26 zL(0SKG~6B!uUqoS**xI=p*aul$aPvuax7Hw+BCJoqr%<1A7ZfU=a@7*6eznoVRFlF z>)&ZPUB2(q;CfP11}tx%yuf3(FOHrd;+QL#2O&TzAtxv1}^y}`Za z;RiBGU70HD@?hOWd0IzmP3=p(iO5h|pMFiKde#3;;nUA|GBtvh5!1oMQTc+CUX$Vl z9FAC<3j3vzYU40*zQwMA@URpAZMxayap z&3pyq=IKUTEiOqRc z52{C%wV^{@3UXm_sIJh`X3J3sC;sGcx8@BsE#!ANr>t`o4#RElw*Wf_tu9;D9(!Sh ztgE@9TipE~@*=_}dim!@*Cd%sFR=kPo=X-dnMQfqns((n`ET)II1a6)OJW%9cgtz$ zbz;V4KPC?=>wOv;AkO}D#1>{n_+%mPkK{gpS2uXA*scUqW)ns}aos0HDt?P$+xtf9 zsC=|>c#x-CEOGF{%?jJ;_@n05Qex4gp_ieLhA92$OPq1a>MzQa6Z1k zfxezYELrMZ#l2+sc04GHO+e_LHFVNO`oHd^n^Rkv0X^Qw8g^Z-@f9JRcaEezKa93c zNl7s*%|A=E+}T6QE*^uV1=vB}2VD{2JAU2D!&?o3G8AcUin}lLJdD}pQ_)s3&PvRF1YbF&=ppm3%?*xGLLMyU%SP}o>M4%| z%Gr{R-jCbG&E-X7=Q5gNgj)8&IgO@;r*wfY?RmVw(29)Y-|8C9_VD$`cv4ym6MDTF zXoXtVL}FReT}IIiuZbC13Ye1~G*;rd9mBi?X|zED!e=0z;JzWxK)T;76AF4#IdTHC zb!>Yg%7IB;Ed$oclB-ppsZh5@%ynrq7{#t$SO`&8AE|66uF=iLyWty~_EYEfOy5Yh zM32>YgzoQjKz$&|>mPbJk9m3`gv9GIPgldaD8f)>Y|}UQ}Q|>l^5xF#1sgBzB2UuS)U4>jq;}#x!7B^ z1=2sYB(0LWfa=x$=SQfmXoX+b@tvH+rkV$b~zv2etf_9(oi10B%zT&4cF1&>Ci z*84NKj1M|MoF{a>pEAuN?^Q(G5e#%m4w0sLGH-{j9%rHc#=QO`#}d#bQ#oyGA%Z2G z&w3O&pF55%{<_&z?V*zmCw$KjHh`LI?=GUAu2Kko#zjsx zVRl||c&fiSZE{ZB5`Lqj$EfQ`K22s8z|L$r6)2#!xoBe~+M&2lG}8hUea9=bo;ZJY zDecA~V0+ac`CL;fM}*Z&t=-<=PqLlzVfj&z6u|wZFaS!36ze}2);~jtB zQ#ElHM^l#3{zKv5`uZX&{V|;}jIn8l=uY~T=vReLf#E8PxA$I>uF4(x62 zY#W>8*s1s&_2h3fo8KjA@A_#c=CA9~UEjis^lA%ibvnoCiMHV3& zy+;WuzG4FK$=u~7-m8l}fK~fJ(0DTf!lFH6yE^vGkOKX^$GBnB6EBETmCFMSOa9}5 zw0k8$AAIOETdZn~++7v%Qx9xOPwTYB@3>^~$VeA#tk(dsJ&DURd8)C_kEcx-@sFaT znvWs-Hxx?S9+F7yX^K8bA7DiMi$E5lN|q8*7t9=UVMe`6T!3=M1^6$l49d9ZG=ttQ z>uWimMc@;m+*Rb92Z8S^(rjcfP(Y5MwZSb(D5sLTJ>V$s#0S`j{x6&Z*9nMM`5Y-+H(F6utp7MqR)%J)(TGsJ1YgUm&h zm#bdutM)vGI`am}Gy_18Y2iIGID$`ST_fa^RhidDn1>4#L z%>6@MM(AqXy>v3{VXTbJ-Sr)KPUV+ZEu@gB=%jxIW=bnhq>x` zo{H`38`-YQKS$LV8s@6iN{{em&qFS2H_%Gutvb7RQere_A4~JWqg7iK6Yrr<0jBhl z4w~dJ8%mjMTnvm*jN@)`pD7UJ7R|O|y@%|i*R?+X0)g<}F)4ni8DiOglJF35011L# z*#c)dKYK_AhhCa#h)4XQaIREzYRKyyehe}u-prD~t|k47EIIHv=}~~J-&uX6cGMJp zc9o8LHA7T2HN2v!AWo%W7&P{Vfbq!1QYh8q2NC!LqfSZDWJ(ps&q2*~j>q{Q7J(KgPQsD2i;Ej~ySjG>EI4e0i`1AQuSr7ZgBZ!m-Kuxs|0pFIwP>u58=+oxz zV80SeDus7DcIcLZE}LA7q%X?sKRb{s8$ZyBpk+bcI!HEoy&M8h6`2eVThjJvE7oqw z%xrFNMtmH~QlBd;j6EpF7P@yQ2{6p4OLj@DD5nM2SEx&_zivw%m+3lgVWlip#gA6s z7}^H767=)v?$iJ{$2 zbhn3qiN^v^o(jFfO1*sZWcRASUcB4bY2q9X|627EG}MGjqTAlNFe@rLQNE#$+ueXjFFq3~wZy`kbspw1uCG3ep$kJ5u zQG6UPCX%T;?*p1(gFu3VPT$%syYgNzFjYR+jRL5F8o{Va%GiResOCgX-KY1i9_qx#!T{Ot)?rMuiLHCly2^1>HaxNO2vz<=WH!l_~c?%`o`!kT%AEN#sb6hM6|aZza6n7TMoX9i&3( z({~T$1ua+v&;Hg<1wco_5y#rMmkl-a*)w=m3k9vZGbd)q;y=@nMvhi9>8y$4@7XkV 
zq+eR<`g7+N_ohMQv?Em*g2{3lCZce&94M0G#@s9#{zhG{vN+JLfDwJ$gbsAE19j&! z?tc!$w?4|J-ua}jRQZREjap#`?aoajA z&{;%wPZ)UswUeYZe;FHkr;g97Q!%!$E9NMsBv7PJDi!_n{Z|lek3PEa}#|lI-hs1h(qGMlNRs$ zBsvZU;$%kK?AxX{)znOqzoT2)Oy*0txr@Tn`6Fj`$3yD}om*~0iGAAwn{HLt0mg?mp7TiQyrLL1irsJkwjLiw7rk`~lSxhGEt!b&x z=YdX>Y?uuf0{Q*jwNbUoBp*#^WK0%^R_xCWqC&RPI)dEbxK^t-a9-icc?#5;daFCY zm|YvmWb#TU%V+e+2{g=J2qRvIkuXtquT`y(Ik1e8w;?6;_$|RkTCD1as3bCrOeU#M zlWd}d8C26%0Mx17(^jLzvyKOW%)u{cvLY`IM_fMeGJj+;JQgk@z-QygD$&^O)Zury zqnZreEFezn`E4URLMpR@cJX*_Hxl!~*Y8Ds!*{5CD_EE&(E*KIkoCdOyPx72{7&!V zcZOlbw+{3yUPcD=2r3bq;romeNYSscRC8jU*~=K2w}j8~Mbx-v*R-84pN5L7!!zni z5m0BM(jDh--WO{RGECAVFEJyATev#YCgm|bL=P>Nf4L%$fDCTzaGHDI%GIJL9!=4flXH1>f==GW`20t?qest6$Jy_g+E>*1m{w>@NM5FGdoOpM z)=tyqfs3*F`Qv>bcMPT|7{DRP09e$R==*Qev|RySBuBJ=Y1+=vvJ&S5&!=G%SvmPq@b1 z{OW<*7q`OtQQjsi*lsd*9KxBYD*XQZ(N_-{9?k&^cmGpjhDXYPvzfEaMKduH_5L)# zYt;jN$MQQ#Paj-A^WpBcLOtqE`$m82J{b=8{mr&?{P5ucF;JpKeB*=QexTXS_z(cZ zxs^Pa&+@TO+^D3uKlPC<7q=YG%ZISrG~YVwS<|@S7}3HRJC$7;^HAX8SyTg$kXq01 z)|=q<>A~PDW)e4@^|?B=PrXMZp#;BEHxdW)T?f+SXFi6?M_D|{wum|ZW#{(9 z3o35*E2N73MOy|xyD6r^m#$d6zSruvuywUgKQ7VkD6@F&D;|qh2wiKi)_SgAcU(=} z;BkJ+2Dy9|C43n^i%U_YLQMcmF^;pWsExMC4k}db`vt_oBqF!+aJRCB8(3e;OzX10 zKPj*LT+mGVbZm~@h2+IC{Y2eS020-vNACBvco*vCCbER2cgQy7U%z*K(%Eq)5Y25j zDCfrdRXZr%d;b-C{Iv{OR?!>hp`-52BGa^IQDOt%LBNhrIUA9PxBy?3WhJQ^YnCWs zCLeyeZ_RvdYy&`ITJ5#G)?#$P6t9`cVUS&Wx}r3y7%nm=MD1EEeZQd=86}m3w6w9 z2NP`#SZ*yxEqJ<0gdz)?UpV}-TB?T-yVX_7=s)(L&2cXSg&Oh4=KIduH|re+DR zExJp5d3*XYW#(ZnC6%C;Xowv#WS9F&%JR%$)QVq;_f zJby_fwT<`tG)M|LV|yQ#exujTx%{yD{kzA!-)pain?EC4u8Qtla9)WryZ!c9NeDn6 z2~in6xnQqSLm%5%?H`o;=Ez0oZBX?w7?b@~R+a}#EguHPs1;|w;j-w9pxC#y>w zr(LqYtyFkP#)YoC_HW_sgcBC;TEAwMaR+d`r**t;-go1a2IfUCZ}e7a^Tm7iKKst{ zda`pK1eYc5JV_d(2y1L45D;p4x+EGuVLPJ3^b! zt5I4osiD*D9zZib#DqS)GkDVBk&SgwW}ytrFOxnzG5d=Kc>!-YnKC7A=n5A39LSCx z*SdM-|FHMo@o=_lyKssiAxb1nq6JAr9}$c;5=Kb`kq{EykckL}(Q8C!qC^V@Q9_gu zC3-KTMi)WU!RUh-z3qF}+WYse?^$cT&-&i|{JH=0V_b93T-SMCXF1N}n3iX|Ch5M{ ztg8a53xsJWQh5{K#7f`9x|Yhf^L+EGu5JW*)k-rw2wAPEdx89s!6XOG+lph_(ig+4 zG{@)EsT1An`RtQ)o(}8_b_QBoT%ex1TMYCnuH2lie}{92rE|MYew({!>!Qfp$)o{E zEuOT-1G4}tm=M)27$^30_L6P-dx@ot@PnCpptL+HHDQhajp|v&a9=ZFmjVjNG%B3^ z$E%aa)`e3_Nz0GK49>u`VR}In>{)T2PGI^2fgSsn1y=4!p>2H|P#q0@$tUJ}m6<7v zRXRPCtwy_zEX*UTz}QF98%^1b=5DVt_6e)TjYy@39X%?r{H8_vqn*Ub!g&+&(?SXK>FPc-OARq7(8bP%4&3x z7chM0(agwjKd?0w!ge)1{0i^Ua7?}Yn#?pUHtn?gQ5|oQ41Sq);Yu7JXy*i5TL@sd zx%(Zi@z`(s9;Dir#^K}{N(#}zRoqTh4?oG;c{A#fsH+HT}Zo~Y>bdSmE3G~VoG=I zW{qo-(q%|Mx7)xWwQpk2Fb;X$!26gRM37r;U-?atAFr4{_DBt-PueigI;<(fbiJ2V zG+O`txu|JgWmiTl{CLN^ZcAH*#Y1q|`9vUm0WHQ^wDXe5(cmT`498_}$HBfaSz9}m zpieB})aaN%5YKhFZwTHYru>P~1E$W5Iy3Gauis_cwWRi6&Oz|S8Q2FNg<wlO>00pjzS@&m{$Q^Uy4b6}Om zX#HLHO^_}($PxbdS|24a-n31d1?6^_@(XyJMOIBNPqE6c$PB_VXs9c_<2oFBo%{8I zujZNz8+CeD8wcVpH&l?{@@Z>O$bDl7K;;&a!!-UbD3q%h(Ibqxk3)_AgfyAnMRN&$ zP@WAYfg6I~2id`>xzebQ-b_@xCaKzGi~}c#C1ird_3p^bsc!&)-2&h&RNP)j4`)gdaRid7q;&W~3XVJ0zNRZsu{2;v{g@7#{X@Pb1sJ zNrtqwSI*N$)t|U7=e3u#D>vncoQ?nejg?nw_C2d`&0~jH<2emSc@4I?d4yR1j~5W!5o8(U z`t;YlD@9_Dp_JB2uq!Mk$NiwpR?a?f3a<}X#u6_HXt$|Tem5Pee+dcN93y3e8 zUYk)#W=4RRkn?unci*OyDgf+E%gdc6b+TM{Y_&a2?+tOD!u+1RO#qmx+o7A0;g^-f zI3*p|cq^GJxwZC@9Om8L1Bnybu78teO$|hTpf;TQ(h$QkM`Z`z|J4e8CV1`MU_ozu z<>#HoQEcTbknlWgg16^;RM)QjU;I2VuF`)56p%09$sxSY1djU6PmRA5RSQn?gxwstc`IGIiXH-5q>2C zZg;IBY9dQGUQgCXwGFc8ADuqG8m*FJ#a*oX0^Gyqy6ML|*=LK1uR-9SkI)cRyG}Mr zEsE;elZLY{*=7D#VLu(M4_@k>do}l+DccqEwQFr9)I)ZZ*J=q<~ zDhi+1uoswTjOO&78JPhM&kVZgakPMEjVC1p#t6ITTN~)T7TZ^RjXNfP*q^9wQjZ`8 z;zwS(I@B52&xREjsckCV4N(o&?dsI{aniAuS26SC>Y@U0Nb*)j4R+k`hQnM&hln9{ zK}Bqq68b3RQo=KurLjwOW}5MM{;vU=pSBcBF%K& 
z>{{RDpD(PwYLv)fh))o;zz|?o&`Sobg(n=j1+{K=!rZQQ)(}C{-jIg7-A}Wl-^&l6 zYPRhbntVjW(eJCl`x(O2j?@U~(j!tZm~GBuP$W&JITjVkd7y3%9EqNZ&Db(?&_&$6 zAPU8$svi<#g3Vl;wl7`|khC-srTax*os{!#^2ddvj7!dUyIaC!`c)=PIPrD-4XO0_ z{wvS)QX%TBePPtr)MNaJdhYuv-gEmO*i@=+)A0U8LRMmR5|5RPV4Tg2qTH#lNnaMv z3jU?G>%)$@-fnDx`n0)xoUL* zTiUIK$P=NL^W8RT7Gm~^_Oxk{G{hhQEpNc%lp3#DxE_%H|~YByW^ z)BuJ15055KJPTX2Ak5T!bh9TeBB2}eK$%k@8O6MLFKG8T{A4$rT?2?iRqU#N$fv0X zGPvHQ@MOaqv0YVEU0Q?nWo~*cdiJy@%NS?P{W7jaX0OP&j#S|_K4>}pmPrYB1M=5mLWh_}24~NW z?5cm0Q~X0Fp(l7=P~83N#VqWZH-*jc&o_Z&M6iOwrpChTJMm`RTW4XPY~J-ZsWEbi za*2fXoi4GS*?{^UbD}+b%2iVx#^k=)o>k;HpC`H%!}=r`lFIlUebBV~p{|ln*1)%q zpE4eDb{cYb=-mdN@+4$&8W!{ODBSxHj&)s%<=|9e&ptah4fgkC#m?ex;a;fh(;o#% zyy-v5d`Gx`vhVm8CuiQ23g5?;58(rXemjT_T1`#>Ju?thsZRRp>kNc!S zyL>ySx;Z)8cWP`Q%5~1`ma>WZH4%Q>ymVUp$`q+|>^k{;WI|t~i_f;5y%7m(l3G7u z`&NHp?~oH}2-6NeXk9PoPkZ><@2J2vBU)sZQMKcp&o1GldMfv0oN%4?c1)d@25$&$ zn3r*lX@eukMLt&MR6we9$_b(tKK|NOJ3jQpbYvx}qIfFUwd5-8Si$Z>(Hu9+zQ-23 zCL++=sj+YMX`oP>F?k3flG<&dYdm94`XZ4dQLl0%Q5`eq#5z)kl?faV@JAk6dSH9RXb(Cpt#) zvJSgI3K-;I$NOZ5TPU(L*09vOZp~cQ8xM46Iw$tJonOlDwQJr=GCTbi5D@V07sbql zNzH$J(3;?JeV?PRwoxj=Dn9BtnA>r8<`;me)`pFR9>$+MpFP#E$cKrY@p$tbeJX3O zoRjTy&^@+yJR^XN`B0+|zZ?g?)y|!$xyF~Xm9xdW=FPVJ@nHX?sQl+Owb zebt%T+H!uB1kl)b2fg4>XjWy8zxS$DZ|XXx2~aF8X>5-s0{V$ar}mq1<^;Flstde# z!%}spzbQLZL-&P0n#quDqu7? z^h;be@im4}HaAX?Be-fZeil(r{3#Pgq`h?XZOom6#$xJ_#U6VVVXmVE4gOLKNYy*e zHE+QT&Xpt>{^jKJ69%DrR%APSc=JP zuV|S|`qC~J|LotDBmnmj`{<1Ggro8C*v~vO#^dV_jwM5!1o2X;Cyj4ELi?KI!#9DL z2PPd-J1X?P^(I*Gj7OvkDeb6_`^Fr!GnCPw=6KvYwv!tr_bw`^m3=zI$9P-QzlX7x^;jR)41zV+_X zaeVa}FK1!>=~dl|)8comj^(tBANJ4ipf{yd)hd2~fW6-T8{436|B&SC+d#&wz{UQpTiIT?BO{bAxe+RMo z=t79o;NSr>2=PZx{}1l#_wIgWAn|ek2z&BV5B`1ygX-qvj{B_T(KH=U1mMqzjh3$& z5YWwd8E2d6Zzbe1gET`ca!oC1Z+p`FWF^5WKYq4LrLwY;C(_35!V+)SY6lSgKZCsU z#Ia7^-q(nJ=WDps^?*VOGxY=k?avH*t+Se}yd73OVAY!&L>Z@Sni;+!t2G}8C8d_< z6|3`@>ds9*_=VsFuq_!;8bmJ#AVXm3MiqF>3CQ645pY0Mq67J|xv*4M3N{{whc&PpXG$p zbbo|sDr;-(E04;TPu&!8eNYMv7%&gpYXY3d4+;r&^At$Gu~*w@^a3jNK)lCdsh}+| z>dor!NP>=k(!=EGzw&T<+5^K$rOPAi#ISL%Lz@S!N z4SImH#D^-bUWvbtn|gPgRl&6ZLn|1kO><7oYD^vMWKvdr|lxaw2d zOImY2vX?p(!{rjcCaCZ{x2`2VFRMH}*6eu+Xgm~;L{-qUN9xC_or#u9rpm1%rXq?Q z?QvU0O$KF$%AOci0s8r!?T|P@@%HMpS$8ftu`^DPq19>CaWyLz;~(CS z+i_pre6xdj>jK9|0fZS3Hm%J)fUa#nj>UK z;jo}Tlkl507TC4N;cw|Yw;=Hk&otfmq&Nw-Oo;niNrv{}z1GzG=iO>30yOLK_PdXM z*!3~?a=q#+ESptQd;2PhJS46aOy&>?`c?MIYOvZNWpci&THzl(+`o7x^x4zw``FYg zdA=04sJkxv{aO8|{uT8hkmt9p0i?&|vdIV)bloQ{f;{5m*t?9_H6od4Va$kl6;c5X1dz0&Yc z-sbfAN9;a$;Bp;#DDX9Ux~*()J`q92?Mt|CuVm-Gz4_oDlGK0yW4#_DSYc96JTNBm zUzW`OPk3u0&8P>oR*K0Z-qPbAcDVn$I{W|mr+a74G61WeTCdjrlRf=68TUUw_v1L= zusJ@6*Z8Xz1fK#<6Xjd{=YMh1{^O_m$F74yx2?_OZ-=_%LjKeNUE^bJ1=1G}>H;ZD+DwV=tV zGt#7sr+U=LN!kBsU;p@Tj^`J8b>pHJ|1+!p55CN^41j+*}A5FFlu|Y}<-+%TseO+8k|36#+z$g63 zLB7A`M*FJ%%LDvRAO3GE_=xhF_|=NqUeD$Ir!PhCGKI~_E{a8|FqZH1`9bb)&%SEk z`tY9~u)psIHvBE)KDIzTFY+%sXO9$tW2WAuycPQw9Sf5~vhjOao4x<57QA)|IA+%* za>{P}(^vXG+clgDVEh|aOjdu@f&&qNk01D^Rde&NvD8Z-d0{Z$LzCC6fS-)P%D_=|2zz*guyal^NMs{Ta_W^w^O zKGQVaP3`~6NBq|*273X!!dfR$sQgzgcoFdNk{4Nr^#6bQm49ceJ{vs1uIzpf7XPaj zv{VI*&+^xoi4T8~2Sam`qwD|Y692Elg#YIf|J39C=?egt_)*GBr|4f9Gy1F#8OJ#S zaJN-H)_K{DiT_&C|8OMK_;6-LNyM#B0pEI5PaZ>90nl^&ITb)F{$;0NaaXx+h0{^8 z=k~~o`{aR7_r4o2uR}ZYrY`dKOktAod9Kkr=%YM0AK!~v+l67<3NqxiPe|;2OxcG zNV3DGN$<5d4tUCRLGO$|7YaEj_GCV{l`ViW7%f|Lk$Iy4@I`ow)`=Ea7q#DN>aJ}g z0R5s)8{pHLc_&(zw!iH9c)YB84DfyRcmdcZ<{eq*wcn9Q_LrFp=|IKRI-s_}pavK$ z-}-S3kb8$E}YlMySddLJGBTSmIkwZfb*twVK^3}69McRJ?zqZT%8T}KUv$e%nQ^CidwFex7?o@Iu7Meia)aSf6crNbjuNc12YZ+Uq@vAfg(#T2R#&sxcOxy(2)v)C=3)I@reERT1sZKlRbt4o_+ 
z+fLij(s1rt@*mS9pFVPge%IiK&OhCN-+nQb15aBl4^$dF3Aif-E=r{UQ4XTVa}9Nn z$sR~SK32D`COzT}gwnnSpSic2TAR{z6RTQ4r;4?+a; ze|=vAsPNpkS8ctko>F*TX^4EpvGtoPlk~MtrjmRRC-u;*YkI#Ax+;Tg>hsf^bBLGy7jSTFDXAprcT5gM6 zx(0Sic504}-|Q;99q#YbTYr!y9qkX-b@ygPk`=fNa%4WZJ^Ee?{g9~F953Oil3A(t zgp)PHl^Ky_9dVExVih<}6>sc)FgD4o0Ul?jPY5}9{esewZai>Lj&1)VD^fX!$3wRQ zs7E2J@(PucW;6L17cz~%-PS3&NcSuqsC7MQO|T(oH@OYxc4Xx z6ij>CnrizC_o^&$UXCaE95(xF*+iv&{|sb@F|?a39q6AggWf>fMH1=&HrsCh4g?p| z(jEX2x$ZtEU>v^+sG-w*-uNeX2sDLhPIA64ESa_ci)aST=i{*T41GZ~gr;F{!H-F! zjm#~BR<#VFSh`dnybhp{e`L&pmi(eP0~mDZtaJiKqwd6?oIbk_5OZarUIQh`kLI*& zytmVhlbh~nRIWGD(C`M{X2sn>N873Tp3A-6bP8Wv+$}92Lk7m-yHx8kYz9PcgEBnN z@xCQfjk)hsO(qFdtrSI?KP-b)HGjE?J#jv0x~-Jbt``He&L8=TlH0u zr)nk!=cjth0F(s4LbiT^ zW=m~g@UmG*>gz?ED2XkFtoq(9z{Q(6Eu;p%6l5+eZ~u@HcnI{`oDnw3YjmEW`tlgz zt?%JORu3FY-7upWMf5+j0&2C6=}sNJtzd6(J?S()ws+c{4%{&$<#=VQX>{22im+M5 zVvZ%cWRi01qIY7E3pme=I7LnkXUI=3!IB0Us#b;3FhuphdeUuzm?~@z9(j4wT$GKO zbEYQv_KxgRh_t{T_Xo^DjzhzpzA^vtFV}qvZ@%4F0Y=`%vb_#ZJ@P972DJovUQo+J zjR81wOUN)l6LgqS;uUdE@(h_5SRrdcHw$C`(C+Yifv0upxYL+Wbii3M{ps8W#AW$w z8MD^5OxFW>tYBB_G9i*Iz2UB9d(=>WJjZ5}9##_rf0pe>5^ zvbmUCepnej1wDdid=VU>03$6n=qcF0#3>rXW)xK(rWc-T;0c#4vQMRQsHG7u|J0j! zn+C3f*`O1ef-`M?9QVTssvB9 za$5`D#d6Y=#ISG&=aH|MZnGP(4gs~XjTP3(UFFtzdfxlBwg|>3ZLhsfV`dF+59fzB zl+)Or4RO_AJ^z>(e!O-@EHe|N_AjcUKq<8TIi2vHSxWu99iQk ztV{vEoCMHgYvGLcX|V+Wo0>}`U6wqltn!^NN7w}}m<-){&xjsd&t-iU;Ef@b!$6}q z`fKg}l7}Go-r)>O-Raj9ErwI;1;8%VoZsabPSh+bE^r&lmCiEbD(m(!=atjF5-IyY+OlleWt*6CQeC_- z-XTzcVFz3jjTABR@8KEHs92lrRIm^EW4?l>I?Y3RuOg#I^T)l2s5etPb+(Vywz^R7 zJoDN3yZp+(2gAKc>D>IS0Gsdf%3@YZ_cJ|Hf3usv05<-#;@3jMg>Dj?mBTf{`iLaQ zna%EPa;VV_5P8jrX_XrRCg9XP#Frjb-r5a(_GrMYtXn+4yY9vBki_4>jg| zs5owOGufkKYJ>Bf0FF#V-N88F0?uSDYRIu7#iqYx0un$br1p`w<35{&+>DOn^7qlEi7;=Y5|~S zHSGzKuv7-w8@R82zbYVd0!$6_qZghhxdPpD!kGM0iT4ZUJ#T{hWM|M9Vl$KGWlzd^ zn7P|lu5pqz5GGn~My|r6$z0wnbHuJ#|Cl`yU=D&7=do>f$zHp!=4(x)Qdz2fYLM=Q#? 
zIEi8|aJe;-Q3^9Wt_-F@P}o7g^TP1BU&(a+J!vczg#_s$n~|jq=39kxG74+9ugXM5 z0K_)Dvy&^wm24En)#Oc$7KF!j0zpC5oix41fVR-tb148NXHM?Z$5F>&>C|wRcc#3n zexg@U^u$XT(7=3ml75CMn|TZh`zwM>EK~)AkYeZzCjk2h!HAn`4MEkHfAYB;)4_!t zn^I#upP6N`BklQmKEsYOj_i!~LhE)z(=G;=bQT)*-J|*nPpX{^m(qH+h-!l#tVG4I zKPAH@$gL!r@AQimD%jKJlb$<5Yc;zOCsua|#UJEw97pcJ6?>a>E}8Eanb{`ki%Fz& zD#j4;8&)I`K}x8m7-Bi-MF65CEukbFrcqb!&S8{~i6vPp$IF0|0&3w@5l= zbKRvTIZ+O=TlU5!Le|f15=-H)p2~$IRsr;@O<&f6K-Y65o4JMsn&+L6G=KFTt?g)p zgn+zSpRUQkYLrRiz0lO-jVb3rKEbv^l8nN{vCMr-N zCUVjPn|Is9m>F{wCr#~`N3XE^6r%B`@l+(H(F(S4-~GGnOdUJD+hPPhG7z3|@5W|O z+XO5++X^(UjenI!q?0zrdOD6t3=Owub+i5cS`FcVLvXV)6B}WW-y>;NNdU>K|3vfQ z$0eVxsD;9XGW6Nm`ZG>&_;a->8OG}z?QH7qsp;fVASG?nwcwVYF3uXCZ4NSqGY%X{ zp?8k}2&Qgz52{Xe??_r*zwAWfs#X6XoHOaP(?UKvoy?2h5J?bsg(8s(-a+TS&s6@B z0I=CI!jkP`OMT{J4#$Fc!!3^1L%L$GMwuJZ7OIJrpTyQ%ckap-moQDXh-D7NRN`*$ ziL(z>4fzo0M0zI6BFMRR48exvj^*})j@(V~EZabD$We^f3|EJEVcEJ|8fBOW<%PQs zXQ%(EH=3DT|E_E?TV(9gzi26ZyhQ<~t^^qTL#i4U>C(mgI0Nc7V+tI+z6sOKpMbrl z!#OJ^0n`*Tr6(lDq&(ohc))cFUW)t{z2N0tyK7A}84s?uoHIN&1t}Ve+CMy(ehwYJ z*VN1No-!a+eCs~+V5CRm4st5c&5I3BpI&{I4r21*{+mQM_!obzi^|H`k5y=OrbeCt z0|11}*!zoHAWtI8R-jbh=i8i37ckdy*!CCBt}FEGgoK{uISdqIpbVuBNF~Q*H+Zud zna3po)ggv`;jlADes~{*uUGW7nXO7Xs<)dLI!llR%bz z_;&d#OPwg6D2g62?2UA0G>d5CeeOz)==08)x7oVuEm7fhDGI9d*ivm1|3a-2^+C?sNUAXruZ_?o_uG?d37iLu-@n~2 z=cJwJgj2POMoZ%{sE|!0IESZ11Y2M+Jt6wD#nMlw!48wh)5Qt0{;V(#o=QK0sK|Wo zFR5$d{`lm?!NOtFg6*>>_ziasZ@^+Cw8Mgy&UXazGg?XK;Or(HPMDD8$M2UoB=0&w`8lbeVO2F)`3V{fNGz3 zLs5eEMMdmEGc~iN7(q-pf}V1&tAadl70GhaZ}$m>Jf`cx;-NbgjS)$Tkm)#fY{xKd z8a$dfBL=_X7`{=238+XUTAaNXHa#01wu4p#@!XLx4N^6Hnw2uuyupCpZ4b4hZhf+E z#?O`|g1fHuSwF+F^ME(W;Rz2rS~SF;s>7O+6~bVbuqVJxTY6QoA~Tt!JBl&fe&-eMMCilD_TQtYjA}f5WA4}R!-Pt zrjnbN@zY($m%D*gNp6bhb^b`-w>5zG!6!-04x}7AIIR@6YS??d61xS9_qDk? z$tNGP`&->sZJ=mU8~=$tVWHZ$D{ihbR$g*wxT1EA%|JN!Pzt-gxv@)|^~!f}A?2L= zjNI-1J3sHpe^Whkl(zz#06d_FO8!{uXf?d~NZ0x*;%ct1X~y*P*@N@*)2sG@2HbgE zj@Xn;X|pJ@2Mpa$?-avl;jT(k;OMA^d&O9uT*_3@F4$#^uCfotBkw!Hmds2i`{bM=W3l} zYrY6lNkF_vkv>qz_vFapp#3|L{WC}9 zDfN=Iy02caU<=3>QA^c>7j(oafHwbmiE&PDre(*0V{4xT+^r&Sx2p;X>DXAq>ah`S zLfE7-p^`s_U-b@C}R?A;MXjlihbXzrwVa@-#>cb`6Tf33vhhv7dLiC_$> zx}o&GmuxUPW};#Dz!NuLpTRbOw#>>y?I#K3b_R-t3lUH9U8*7EmUry`x&_3 zG+!uE@3<|zwn-lti=1z=yvR!pT>HJ~FLyz=Oa>nf*5tHtfpXUsA<9mJ^i8L@8BHjB z8iWGQLxTNEvLzX-z!Ue82{+z>v~<9;d@8v?_DeQRw@*>YVrX~KOcw)$O3o?k0x{yj z1)kge4D9c=n}XBW8vL##G16vccBIn#-a=<8g70YZC^m282vKUU-3@+gRL5^oO(U+$WU-U{Mb)-g2;uc~`n- zmIljQrNcqjuPdGrg&*xM;JC2}+OMxfmhMMKZ|`SdqSSj-vLFtJi)hjirBwUWB0l2|K@)z!M7848|#N)!W55-zyv$MsBB zpX7+qbqR|!Exz6(?HF2kv6UCIx#0YpzIzK6mT_DnaFiGn0dnp?dD_7DwlLi5Vam-k zV?)a)t+zbuI8pP@)}oF>h!03qOYHln57s}~2KoxNE@)C8$(u*Grg$;ZTwQ1Dy#+?w ziFTND?uIA|t@=MERCX7)oZG9*lvXy7T!=+Tal1#d;OKF;Jp&Px-PwhRq|Nf!jkv-H zM8t70T+GwIG`&Z{alxu1kQx!);?WLLcDTQEpQM8o^|MBoSz~)M-8hb}@hX1g5q#5N zpujGA40E23x8eTHgHknU-F>?j#G#NZ+u0=#GS$8McBVY=Q%YbTpESp+#)M#6uYlQ` z!I)~kC^xW5^tp6h(&Y?wKjr-NNk3Tg7}5of357>yjxkRhibvYu8a|fWGWFfY`WZP2r0{GC5>KJTany|v1=cLM*IwoBZ)H3PhPS?B ztjz9iow&5DK0j3%3MSfcR*R?P5#%j%TbNfDFfo!s0l!mM}7VhRdN9-Kh z;5M(jI#m^rToU)hYHEj}o?H_a0}j=2qS4!RVK$oTl5CnIxrs$r+aQAfM}b~7xbTBc zLa8x=josl<0Y9subS=-+$i*Y6{!N>=pUO7rMzwRWVp#XJbBiqRWrgTHU;MH9;j_Ut zr9Zk`;xGQkzW{K}Qi{e3AJ!?VmlQmsL8V$@Zvv<9QmTLwOm=-tp;$u@68D+4vcF4Lp01`GP*xEkqaAPN9tJ%x|<9`Zx|mx@E2WJ>UtfGE0m z8)93$-1LviMM^=zfK$WXGSz#p^%F?Z^`e{PX%D{e(494%lfo4j2Mr`jUos*%^QL+osGOeDdm_}i2TvQVr+2u zc{pw-nC@BY<8lu}kFZ+SBc8C^L78Yp=!6am;aMe%%ib3#Da_SQH~m zIG;(G+S`q9^9s3MAG9K8e%pJ2K73d{*)dRzNr|7<()D*|PlZWlBYQxk$KeY^Q0$#j zzo}0QML-`$JtRP>W}x=H7)OAa^$jc^ruK_K7!9b%e@gH)z2C-M5Kmap)0W*vloOzs 
z#ZnFK8ncKql*W%tkH9^C1NCp_YtFZBu56qR6!$BAz#eKTvJ_g#{VrXppjP=Bc9{WN zJqZ4Qm<3uZA`m}chm5bIjkhr11w2{XgYjH+3EB#hi~%^?&#|VZnl?kw#~eNF_MJky z+MyfQ;0+512!oje8ZJobI^C%fv5p1-qiK^q$rHcngABpIuq`e6DEKQ!iRvc872`)kTLzFUb`u8Uy zbSL0V9ISI1wQy;LXB}blGPHW*y{+KB<3?&j-7RTl-!})hEW*Mo&MRsphcaM(fM4Aj zReE~g)?5(1t!Vs7PV_j#)+|C}wyv_@4=4awTt}D54s)RcG5BQU>kGlhI?lA#>G%x{ z)+3+^vGlD&fd=wox~mrkenN+U?jHlW{9py!e3uZ65%y86Q>zFPmhOG6&`)XxD68L| zpXyb6Fsz9TgBknE!QaTH7z8$%tAd`i;vGW!kV_64lQ;I6q&ZgudRa{TVI?78VPU$k zDdQoRd-{}PAhEH9myS7On)KOGDlv4s!I~av<8XE@F z+UV2r`#Ooz^atz}8l0@#GL$IFo~w-tOtARyJ;s{A4QU5$%<1E$g5w<3&bpk9Kpafp zDu+j6b@|9a0^`Jei3HR$o(i5)9A}vS03dU{NhMxz$ANwIXzioAi~rlzA>A6`AWH9bpqnx@)$d zV^#qbl2wo4PE51RoynF@DVsGE1nJ+M*9m*$*g3t+!P0d5au4^QV_hk8vTmOFZcZ+` zGY0|`a=f*O*H2k?RlE_xQ*xrUh_t!^VbV_Ts{2q3$`RXVie8*^TQ|Bo>A!iTv(S^_ ztG19YXmX_oB^92Dd%h3-shuSlw~F{<{ZY?>B0)&3VqwYhU$$aIeUM)1A%&a)VjxA6 zdLw39za~9axCn*s_gyVk+;8p1Uy|a3)=!la5GyJZ9#|>9HLmWbJ;?0_7YJQKhZZh+O_;`02FseJ`2OJ+ge;Rzk8pqm1bZD)9DPplR@$>>lpfR=FQ&my zP3a9wr>3G0n?Ce^%Hr+#j1(Yl9p3Z+^gNROov01u1UB{vlYJbXjv^UibivOa`4>6^ z=UH%`a{u-n^l{kz7kkeirSqG?bW{S`xjU?$l!2+?$9%E3`qS@| z_S({I*r`?z=Ju6G-Yj;KQX+yLGt$K^%Ch%dftz-rL^)BOW3DN7pNC~ze|h&m;C008 zA-(V;mOSbXXo)@Ns#}naysU1INT?%!5Ztkp^uwWgK0!M$+bZv>h4{2@J_AaR6xbH% z)k1uiLD{d~+AHB^M0&auCrwluM|#ymz6YDAhJd}BT_TLaC)jb&%F9G+{t~UB8R}_@ zt2k#j^|GT9f{H_kDl5%faM<+1fO;*UA;Hr3Ciqn+r&fht?{&;xIUT+Uc_FyMCStEx z->{>#6W0Kq&#}LnzK>>7*?T?}PoF-e9nL=V5uD?$Mzl^xAwAJ7)(#VuM2iGGnx$us!V@e~WQ9F-w`Z=x!C%6T+Ye~^GrKpuI<|D&wm%WtX7yuD>);uLmym=w zyb9J}-UPc+bo_9<14rLFx)9_`a_kUP>$;C#hM`1gLSwFXl@+=uOR4X>e%!KZl3hcv z&1++3r?te(vIsL9MzQuQrN+9FR_VA53p#SR6YlBovG?MyOB){vy1kMLWu~X2G8v0< zmqwMtA1&)U8=2Sz_Z9=S*|EI|?o9>Wy!|kE?{@HEqRld3{b6Y*KeI(pu)QjyeWmK2 zjU;u5ED%FoCJAw;fV z#~n;RkPOC-?zfV1?6VQo`Ws%CLN7$d62NfT9Jhk7!-OzIb~4ALt+J8LAFEL&8(d)1 z_uqF1hZFwABse&L;t8G#Js+WWGnO+?6Da?7ykOd+e(*jFnNtrAUlzR-#F??T!FnF< zxjh!!@I4!*eIJbq($l#K>NoV-n``0g5Mu3#PUlqOm~E9qiU4!lZd5!#uk}>UCG@K$ zP?L@!E1l9D3UY61-PcDc@0;c|w4k02CyyxleIo+|jf*vO8hOyC+p$M?AU0PxDxtv! zY2g|q06Y(CASY*mC{@n6ZNV(-*RT|$lxs$rJr5HCpid;qaU&1dFZ7cZy&y$MTHF`@ zhTAgxrmuH7EE7vloLsf}{k+r<;J+TqfuX^w+GK``rZ3{?j)4{MV3uYh&}QZ6Aan0> zhp=1DxSH1KH%skHFw!iDUkc0bodgrz=(?973^IM^DF03&a6UdPl=8PNnN129KT8;3t;u#o^k$8 z4WRdUFzwe4%Z`-5cJ9r5eaUCd^gluAqRMeM(f6df#Xd-POf9@{Iu%2t&y_L&5~Xu@i5 zcq@MA>@Ts&R7t<}JZz0~A{DoFnNX#u+nH{U>{>u0Li2oUt<5k!&c#(0gf$RHyA8Fc zk5$oEihqEH_dRRkU|JeP_1?91pzHW;b1JFqb)HqaZF7xe->`}(m5SVwO`kYOrPtc^ zXYe~%of*^|ia31W7$xBd$na_t+UDHfZ$hAX&S)1&97kmh>Y0>{5@7F|`s{WQwH%HQ z$~L^tilw#eT$-Quy3@7)-4v>UdM)^_zUmYTLU(t8qf4&Dn>G7dL5@SQgtW$s6w~Ul z%)PBq{PlV)+TP(wG~(%z;7$e)&p1I9@;`D(C?75OY`fG9j(XVi@P7MMLAbNleZu#*Q8quGxbMz$dmvW1EA4d%D5E) zeds5!JH&`NTL(PCmEX%5gxFi`YRl3=wCQ!jLb>`i1GpwLM%SNT^+)5t(IPVMl)Vat zc5-bfagnvdr8ewiq$49Da_~G{HF*qB^v!}?CempQx)?R~(j%U-9XG1VIx=72sOeai zB<#S0m>w-$T%a@F>(C9sgV)mAONW0M{S-}@9TO1RqW5j@J)6^yd)qshN`I6pIQ{7b z)CiAy3MH|pDft)Que*~FmbxgbuhOITJPS*lbK9!yv3~(=RXCj;X)R^AbgES>QJU0O z+H{%Lp`c2~Ny)>_1&d=BtuEaC9s- zQAs`;72m)2;ut((J#hG|b^5)X`LF)3`RZG3B}R@ZMD6f}%y6AWG+>o4hHo7>5<+EF z`n(T@z7MlSF12UpgWKG_Tn)nZF9Io})^}^*+I6GP=jRFAkTa1z5d84OXf{>du)pHF zuJLQ)#MLcfTPpT}&b<}{n>Fr$+H0Ubm7YDpnUkU>kT6;KU|>Vg(^>7nw(L-}6 zZfgnzAlB3{0l`~Gv%L^3GoOe9)Tko7V{I4!syRAXgyJdW#-k}wDFBW8R|8r zehp@`Iyya>lv|b@HTf5XEz-|oc!H>d1}FmDK(}E>W>;oP&|0fKNit=~`OrgA)!o4? 
zs1Ag+P@438*hn9C8BGi|3vJ+aaYBa{kXWX3CL_r4Murs3?sTe?STu#W%`~ZDL-}<|b)864i{LY+Khn|moO=~$J z=b_)m?SOE{q`k)VEdIJ2_+t4z^G+iLx$>+*23~RSvZHk!3KYX`*Uadj-Uyhq5kzsL zYtsX6;?Q(a@^XNZtY!@b0x2kLyiYC1I}z}emMtJBL65o$k{nbOXC zW3*;$PgSzb>7!u6Po7m4kf*|C%4QG)C1(DgO-Y4no zcP_en+N8lr9Y9X|v5x2B4vX@1&3ur{&zVZ4LhHsrJ9_*>zk?>~D@_mQ?_kPbr2~zA zo7AyacLF4ah-vbDy6#);MyuZ*`xDR3swEfRR@S+eoF2h{B+-2lt_EYqs#A<7SNQHx z-`9NDik_Kzd-6(*Es%j*b!R}`)F(f{A#~;mM(`+fTa7*oMc<>kFQv}Xqgh6@9Q~E0 z&f+hq0av(^0@{(%)y0E&lj7WVuAMbo`IHnva=kXsW?*i#QIP0Gy8}q2lp&|;Pp42P z!SaI(1Etu;Bln8blU$vY_reTnhe*%gWJS41Uqnm?)zeX

}kc7}bVtQ0ff^XJ8tc3Jk2g#dYLZ^PIXtR0Um;_(}q3}6x$X64;Clb*O zQ!G^M7=_)9#2_nGsim>&78#BWQ*lM$?zHN`DY1UJ_4>1gGVR%5y2iI{JQiGa;O$V( zjt(@#zjy&3II=uu7K*~<^g^Zb{(Pgo*>OZUAf?IlV{ zVw97F>2AMNFM?P6u?1~zNd7x5F)S3c6w{g_&FH_cof>5S@PKLjJe)6i%VomV$qu4` zOj`r+vP>dp#M1Jkm&ckxQr5g^p5W?d%%j@?hp6!Uhn;G?oZ=#3?#`$e$F0yzN7a#O zmnU;y-Zh=+k+{^otz)9ulC+ED(Yp~164qRBoRrHMMXi9pJp**cXf4Db#-;s zuX=M4=HqEo2D)wjEZF9WL&Jh?@(b1nl3?V2Tz=&KdSebVhu!V*GVUzgQM?)`EHwo(b&i8r)=1tf5|ez3BmWa&&@ zQ|YlCOQ8Uo-Fita#SebE4qtDz3eKds z0Qj~YPS(qBp;5(#x-oQ-3MAXACQXFIkq%E6YAf8t;tkx9lKapoWM&Xs*QFiM{_&!W zc!x*5#vAN5++HdEg|9}V{l|Z-P)g=FDdt9B29^fDbV_x2)U!c8i^aa~d%_2F!A+hAt+7eDLoE#B4a zG*SZyH3eppVv8HQuTCki1brq(Dd|UhAjO>RSd; zI{IbzH^swwHFE?%rtXA&c|R|n3#O4%X|W~%&X8;@%n`sS^nVAkj^4{UVHAMHUc!8vbB*A>lyQFJ6+U&1b|W?# z>>IR`cDD_EfD#ZBmB;zsy(|+ZO&hmpm^v3~z?mFvZqmZ4czHwI1Q<0QNV2$0GLVl)*?@=OqL$^><&Y3P+do6aWgFS$c1#UwzK z*1)0qtQ#}Oe)W^qQ2qmEsz{6Cw_r6MuB!vEMU7*%M|O+P`94C7cF^zKRiizlpnghh zkzpd+T;vaLiX9r~D;u~}MY+Eh89Wf}xI2FfYiGHL2f6fkB?4J`r53H-V%5pkD;k%Q+tlf~jfk zDZ!$14AlaU3?BA|_O{Ga z^xFsO1wD#4pcXBW2TO5$pUADIug{P*ph4|XNuSE61l_g*kz{Z2l87cCL>9h_cOgZ9 zd#(Fh+sD8gTwy_}$rLDG+Raqp+e^$E{Sn}(t>wf2ruw9EFHcs-Ie*GlS}=3_I53SS zOr;frWmEK#8X8-0Ps3+d1Jdb^T&$oZ4$%Yv8ELmf%m0M1fRH?^vo*u9Wc=g#dzDfL zy(XF*~gsj{zg4P?3-etYR>Oqpko1{~nY z5Q66A2KckS%$b#<2JeLvqgQFt0U%=)N zP~aqzV@n#cewHV0Psdz5bm8_oaV+I*F?{;a z>(5PS&RNAMdY=W8K0NmdK?r5s$6otzE_A;Py-B656M^QVbd0tjr!F`*n1djr8Mp9D zRj2sJ*udf&ZQzH+1bmbnS=$dEN>EArAeA8DWDbACC#*1JMnie<-LTiw#EFw9m=WR(_HDiaVzTi@Nv`|V~54s|p z>F~Mg%ZW?QI=6Pf=rHWdHjmKlPiHi1#NW4X=55hi& zgDEyTUJEsp2*H;KCJ?yHoLR{{E28I34+N3AJs`6JEh>lj$&Cro4+>(X9R^3@yC#{0 z)M>>I75nq#*nFvV7O?-IGav?7kX+aA`r!OEVd;29ubJo3@}3c<3Kx_u-1WnAO)=5Z77r7OOXo3w5=emIV#KFGH)V+NWC z)9)fikJzyh&sA*Y^f?^l&?a*|9VQ`xB&RYPjA`M_A>wh*^8g7*rVH*H<+mDC=RXPH zIhN1?-FXOpGbDlAUe6*UJrNlbglP9B2%bF*Y>m2k0Dy3(phtbs8b%Vna4{lw6RF2E ztcQXchM#u;tl3Z5OyC=1>F6{V;kTW`y~>7qGbsXX`@&Guys1de)=zKv3YjR=3@M{j zFeAxR4u`VHmmA(miX(43Ptr?~sD}he3KR^?H({j32L}dY;!590i=6mUDvLgY;&rl# zfL+ft#+>yF9!7V9PP%hb{I5 zuQ@BE#enKU3Z0K}vm0&h1aJ0PD+LcSl@!A8gLb0$8o$IAV@es_80=H*vZ{{yd8pJ}sgnK(S2$PTW}(FcP2|1r3FI2c5JzQ#Nf;G?#WbcZr56Cu}sulj0ap zzyb8a_3!|$`OlPX2T|Zv(D+$Lv~nt`#N{lzBmk_fpZ^ZE3{KMb>dI^#l{T&-LoB&u z3Oy$!_;-Vz=mq9*v&7hb9#JoMvB86y#7GzqGIwXO@C^3wgY5@q!?92hiSo#{p!De- z;R}v+5kL;R-Gw&$sDt>LOvf9&_tq$cG_;HKJsv06yhc}m9_UP0>*%R0l-ONEI6@Pr^_vylUJk?#OG*NS;HOoF=~?l{`B;~i!0uV$Ve>glHCpBq&3bv3j#ek z?esogANcRnwbo*ot6-w|oatJH9g4(8_Gp=^-c^bqcDm0}ZCf?Fjp>g!^IKT_k-GX@ zZ{t3F_iOT{7m#GHBCKlg_FkAb#l54)a2!NDI z+vmy#v_80DZ0O-Nw#NMwePgCGVGpJclJ`KOfronkh7zL3L)AOAGrWPv5S-oXkCK0#C$Gk+@AqNMeu zO6#0R`s1tK87NI57AM1v>qI0AKzV@cbMvymo#c=bRH18q-hRJOcxAJ@KYa|>OiN?E z?0i?e3Z<4^q$LxEmZ0_E{X6_vh?iH^7p9Tkt|U~P%h%gV($kL5(p?7bz7*N#5}}!y zr4x+Y>=Xu))SH)H^)kw@bUQx`76gnsKByxb^sY6u?QHv)j5L3QzA`R!2su(j48?!E#pG;T7&ImJ$9;%{9+bd~#u=KVg>TX_HmbGI*}zw14zoC6)_b_Pr8tVIlJhftTxqd6t5neJF0rweSIA zp~ciE(w}vKzN)JPh-yRHUd**e2F<>?vYZ!~dRgFtukeXT-`JK3E@=^-oTX_Dad$Hf zWVL&&Wi9#fvcesUj3hpb8tqUyH{w!D_1d6fV{ry4Sp^itzT?o-#Hsix5$qn~nY3W9 z_cOP+2lPI4Y$oB0;X~Jl$^j3(;0Zono=A@hBPF${Tu}XTbh00RuWdMUQzQ@N(=!~r zm^N5H|3IC`T4e!f5(!PVE)fEYN@D*&?02j|gTs>kw-);YTk2ht+SA z5li0b)X%9+UgPutadpn5!+vpuU|Wv=Lf|1>Gm!K?8XqpOt&@1>6`dikQGo7Xr{JP* zCY#UOC+}u9i*f~Gr|>9FL>?68hh?7J9Bp!Cq0IFek6V-lSG>4^KNBWjyR;ue6o%EHC>ofeT8+KKTpFQ ze$f%z1`%(=)QFx$x*|t^b3M zqFukk$T*qX!KwT}&d4r+Xt{RTbiS%W9ukC(e5!Z2Tg-=~w2xBL*`cHykddM+B57lT z+QN%Mc=|~ucXqSoI3R>s;!aVNl~q|)96Ka5R9beaDFn(5Htv;1Vlj>uMPgq(^>)bI zfYEDiArE_ z3z@JyzDTjCARr&`iSywXhBZ?6^huk;N0UbulRlmiyyoD%Ju1HeWWWQQ=Qa<-zt@($ zGfG4q=Q<36thd5X-j*nAhi}tZa{EZ@n~N3Gn*`EbH(#-u910MNoC)u_ViJ^00^`j< 
[GIT binary patch data and intervening diff content elided]

+[SynchronizedBeforeSuite] TOP-LEVEL
+  test/e2e/e2e.go:77
+Jan 14 03:34:54.820: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+Jan 14 03:34:54.821: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
+Jan 14 03:34:54.837: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
+Jan 14 03:34:54.877: INFO: 56 / 57 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
+Jan 14 03:34:54.877: INFO: expected 5 pod replicas in namespace 'kube-system', 5 are Running and Ready.
+Jan 14 03:34:54.877: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start +Jan 14 03:34:54.883: INFO: 6 / 6 pods ready in namespace 'kube-system' in daemonset 'csi-cbs-node' (0 seconds elapsed) +Jan 14 03:34:54.883: INFO: 6 / 6 pods ready in namespace 'kube-system' in daemonset 'ip-masq-agent' (0 seconds elapsed) +Jan 14 03:34:54.883: INFO: 6 / 6 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) +Jan 14 03:34:54.883: INFO: 6 / 6 pods ready in namespace 'kube-system' in daemonset 'tke-bridge-agent' (0 seconds elapsed) +Jan 14 03:34:54.883: INFO: 6 / 6 pods ready in namespace 'kube-system' in daemonset 'tke-cni-agent' (0 seconds elapsed) +Jan 14 03:34:54.883: INFO: 6 / 6 pods ready in namespace 'kube-system' in daemonset 'tke-monitor-agent' (0 seconds elapsed) +Jan 14 03:34:54.883: INFO: e2e test version: v1.26.0 +Jan 14 03:34:54.884: INFO: kube-apiserver version: v1.26.1-tke.1-rc1 +[SynchronizedBeforeSuite] TOP-LEVEL + test/e2e/e2e.go:77 +Jan 14 03:34:54.884: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +Jan 14 03:34:54.887: INFO: Cluster IP family: ipv4 +------------------------------ +[SynchronizedBeforeSuite] PASSED [0.067 seconds] +[SynchronizedBeforeSuite] +test/e2e/e2e.go:77 + + Begin Captured GinkgoWriter Output >> + [SynchronizedBeforeSuite] TOP-LEVEL + test/e2e/e2e.go:77 + Jan 14 03:34:54.820: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + Jan 14 03:34:54.821: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable + Jan 14 03:34:54.837: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready + Jan 14 03:34:54.877: INFO: 56 / 57 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) + Jan 14 03:34:54.877: INFO: expected 5 pod replicas in namespace 'kube-system', 5 are Running and Ready. 
+ Jan 14 03:34:54.877: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start + Jan 14 03:34:54.883: INFO: 6 / 6 pods ready in namespace 'kube-system' in daemonset 'csi-cbs-node' (0 seconds elapsed) + Jan 14 03:34:54.883: INFO: 6 / 6 pods ready in namespace 'kube-system' in daemonset 'ip-masq-agent' (0 seconds elapsed) + Jan 14 03:34:54.883: INFO: 6 / 6 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) + Jan 14 03:34:54.883: INFO: 6 / 6 pods ready in namespace 'kube-system' in daemonset 'tke-bridge-agent' (0 seconds elapsed) + Jan 14 03:34:54.883: INFO: 6 / 6 pods ready in namespace 'kube-system' in daemonset 'tke-cni-agent' (0 seconds elapsed) + Jan 14 03:34:54.883: INFO: 6 / 6 pods ready in namespace 'kube-system' in daemonset 'tke-monitor-agent' (0 seconds elapsed) + Jan 14 03:34:54.883: INFO: e2e test version: v1.26.0 + Jan 14 03:34:54.884: INFO: kube-apiserver version: v1.26.1-tke.1-rc1 + [SynchronizedBeforeSuite] TOP-LEVEL + test/e2e/e2e.go:77 + Jan 14 03:34:54.884: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + Jan 14 03:34:54.887: INFO: Cluster IP family: ipv4 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Pods Extended Pods Set QOS Class + should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] + test/e2e/node/pods.go:161 +[BeforeEach] [sig-node] Pods Extended + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:34:54.913 +Jan 14 03:34:54.913: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename pods 01/14/23 03:34:54.913 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:34:54.93 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:34:54.932 +[BeforeEach] [sig-node] Pods Extended + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] Pods Set QOS Class + test/e2e/node/pods.go:152 +[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] + test/e2e/node/pods.go:161 +STEP: creating the pod 01/14/23 03:34:54.934 +STEP: submitting the pod to kubernetes 01/14/23 03:34:54.934 +STEP: verifying QOS class is set on the pod 01/14/23 03:34:54.943 +[AfterEach] [sig-node] Pods Extended + test/e2e/framework/node/init/init.go:32 +Jan 14 03:34:54.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Pods Extended + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Pods Extended + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Pods Extended + tear down framework | framework.go:193 +STEP: Destroying namespace "pods-7872" for this suite. 
01/14/23 03:34:54.951 +------------------------------ +• [0.046 seconds] +[sig-node] Pods Extended +test/e2e/node/framework.go:23 + Pods Set QOS Class + test/e2e/node/pods.go:150 + should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] + test/e2e/node/pods.go:161 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Pods Extended + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:34:54.913 + Jan 14 03:34:54.913: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename pods 01/14/23 03:34:54.913 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:34:54.93 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:34:54.932 + [BeforeEach] [sig-node] Pods Extended + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] Pods Set QOS Class + test/e2e/node/pods.go:152 + [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] + test/e2e/node/pods.go:161 + STEP: creating the pod 01/14/23 03:34:54.934 + STEP: submitting the pod to kubernetes 01/14/23 03:34:54.934 + STEP: verifying QOS class is set on the pod 01/14/23 03:34:54.943 + [AfterEach] [sig-node] Pods Extended + test/e2e/framework/node/init/init.go:32 + Jan 14 03:34:54.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Pods Extended + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Pods Extended + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Pods Extended + tear down framework | framework.go:193 + STEP: Destroying namespace "pods-7872" for this suite. 01/14/23 03:34:54.951 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD without validation schema [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:153 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:34:54.959 +Jan 14 03:34:54.959: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename crd-publish-openapi 01/14/23 03:34:54.96 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:34:54.975 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:34:54.977 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[It] works for CRD without validation schema [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:153 +Jan 14 03:34:54.980: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: kubectl validation (kubectl create and apply) allows request with any unknown properties 01/14/23 03:34:56.811 +Jan 14 03:34:56.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-7089 --namespace=crd-publish-openapi-7089 create -f -' +Jan 14 03:34:57.376: INFO: stderr: "" +Jan 14 03:34:57.376: INFO: stdout: "e2e-test-crd-publish-openapi-7699-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" +Jan 14 03:34:57.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 
--namespace=crd-publish-openapi-7089 --namespace=crd-publish-openapi-7089 delete e2e-test-crd-publish-openapi-7699-crds test-cr' +Jan 14 03:34:57.470: INFO: stderr: "" +Jan 14 03:34:57.470: INFO: stdout: "e2e-test-crd-publish-openapi-7699-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" +Jan 14 03:34:57.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-7089 --namespace=crd-publish-openapi-7089 apply -f -' +Jan 14 03:34:57.649: INFO: stderr: "" +Jan 14 03:34:57.649: INFO: stdout: "e2e-test-crd-publish-openapi-7699-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" +Jan 14 03:34:57.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-7089 --namespace=crd-publish-openapi-7089 delete e2e-test-crd-publish-openapi-7699-crds test-cr' +Jan 14 03:34:57.718: INFO: stderr: "" +Jan 14 03:34:57.718: INFO: stdout: "e2e-test-crd-publish-openapi-7699-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR without validation schema 01/14/23 03:34:57.718 +Jan 14 03:34:57.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-7089 explain e2e-test-crd-publish-openapi-7699-crds' +Jan 14 03:34:57.894: INFO: stderr: "" +Jan 14 03:34:57.894: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-7699-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Jan 14 03:34:59.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "crd-publish-openapi-7089" for this suite. 
01/14/23 03:34:59.682 +------------------------------ +• [4.730 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + works for CRD without validation schema [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:153 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:34:54.959 + Jan 14 03:34:54.959: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename crd-publish-openapi 01/14/23 03:34:54.96 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:34:54.975 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:34:54.977 + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [It] works for CRD without validation schema [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:153 + Jan 14 03:34:54.980: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: kubectl validation (kubectl create and apply) allows request with any unknown properties 01/14/23 03:34:56.811 + Jan 14 03:34:56.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-7089 --namespace=crd-publish-openapi-7089 create -f -' + Jan 14 03:34:57.376: INFO: stderr: "" + Jan 14 03:34:57.376: INFO: stdout: "e2e-test-crd-publish-openapi-7699-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" + Jan 14 03:34:57.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-7089 --namespace=crd-publish-openapi-7089 delete e2e-test-crd-publish-openapi-7699-crds test-cr' + Jan 14 03:34:57.470: INFO: stderr: "" + Jan 14 03:34:57.470: INFO: stdout: "e2e-test-crd-publish-openapi-7699-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" + Jan 14 03:34:57.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-7089 --namespace=crd-publish-openapi-7089 apply -f -' + Jan 14 03:34:57.649: INFO: stderr: "" + Jan 14 03:34:57.649: INFO: stdout: "e2e-test-crd-publish-openapi-7699-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" + Jan 14 03:34:57.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-7089 --namespace=crd-publish-openapi-7089 delete e2e-test-crd-publish-openapi-7699-crds test-cr' + Jan 14 03:34:57.718: INFO: stderr: "" + Jan 14 03:34:57.718: INFO: stdout: "e2e-test-crd-publish-openapi-7699-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" + STEP: kubectl explain works to explain CR without validation schema 01/14/23 03:34:57.718 + Jan 14 03:34:57.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-7089 explain e2e-test-crd-publish-openapi-7699-crds' + Jan 14 03:34:57.894: INFO: stderr: "" + Jan 14 03:34:57.894: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-7699-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" + [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Jan 14 03:34:59.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "crd-publish-openapi-7089" for this suite. 01/14/23 03:34:59.682 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-apps] CronJob + should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + test/e2e/apps/cronjob.go:124 +[BeforeEach] [sig-apps] CronJob + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:34:59.689 +Jan 14 03:34:59.689: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename cronjob 01/14/23 03:34:59.69 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:34:59.706 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:34:59.708 +[BeforeEach] [sig-apps] CronJob + test/e2e/framework/metrics/init/init.go:31 +[It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + test/e2e/apps/cronjob.go:124 +STEP: Creating a ForbidConcurrent cronjob 01/14/23 03:34:59.71 +STEP: Ensuring a job is scheduled 01/14/23 03:34:59.715 +STEP: Ensuring exactly one is scheduled 01/14/23 03:35:01.721 +STEP: Ensuring exactly one running job exists by listing jobs explicitly 01/14/23 03:35:01.725 +STEP: Ensuring no more jobs are scheduled 01/14/23 03:35:01.728 +STEP: Removing cronjob 01/14/23 03:40:01.737 +[AfterEach] [sig-apps] CronJob + test/e2e/framework/node/init/init.go:32 +Jan 14 03:40:01.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] CronJob + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] CronJob + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] CronJob + tear down framework | framework.go:193 +STEP: Destroying namespace "cronjob-8290" for this suite. 
01/14/23 03:40:01.749 +------------------------------ +• [SLOW TEST] [302.066 seconds] +[sig-apps] CronJob +test/e2e/apps/framework.go:23 + should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + test/e2e/apps/cronjob.go:124 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] CronJob + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:34:59.689 + Jan 14 03:34:59.689: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename cronjob 01/14/23 03:34:59.69 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:34:59.706 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:34:59.708 + [BeforeEach] [sig-apps] CronJob + test/e2e/framework/metrics/init/init.go:31 + [It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] + test/e2e/apps/cronjob.go:124 + STEP: Creating a ForbidConcurrent cronjob 01/14/23 03:34:59.71 + STEP: Ensuring a job is scheduled 01/14/23 03:34:59.715 + STEP: Ensuring exactly one is scheduled 01/14/23 03:35:01.721 + STEP: Ensuring exactly one running job exists by listing jobs explicitly 01/14/23 03:35:01.725 + STEP: Ensuring no more jobs are scheduled 01/14/23 03:35:01.728 + STEP: Removing cronjob 01/14/23 03:40:01.737 + [AfterEach] [sig-apps] CronJob + test/e2e/framework/node/init/init.go:32 + Jan 14 03:40:01.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] CronJob + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] CronJob + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] CronJob + tear down framework | framework.go:193 + STEP: Destroying namespace "cronjob-8290" for this suite. 
01/14/23 03:40:01.749 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to change the type from NodePort to ExternalName [Conformance] + test/e2e/network/service.go:1557 +[BeforeEach] [sig-network] Services + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:40:01.756 +Jan 14 03:40:01.756: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename services 01/14/23 03:40:01.757 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:40:01.779 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:40:01.782 +[BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should be able to change the type from NodePort to ExternalName [Conformance] + test/e2e/network/service.go:1557 +STEP: creating a service nodeport-service with the type=NodePort in namespace services-8044 01/14/23 03:40:01.785 +STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service 01/14/23 03:40:01.801 +STEP: creating service externalsvc in namespace services-8044 01/14/23 03:40:01.801 +STEP: creating replication controller externalsvc in namespace services-8044 01/14/23 03:40:01.812 +I0114 03:40:01.820717 25 runners.go:193] Created replication controller with name: externalsvc, namespace: services-8044, replica count: 2 +I0114 03:40:04.871781 25 runners.go:193] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +STEP: changing the NodePort service to type=ExternalName 01/14/23 03:40:04.875 +Jan 14 03:40:04.892: INFO: Creating new exec pod +Jan 14 03:40:04.900: INFO: Waiting up to 5m0s for pod "execpodnd94h" in namespace "services-8044" to be "running" +Jan 14 03:40:04.903: INFO: Pod "execpodnd94h": Phase="Pending", Reason="", readiness=false. Elapsed: 3.123365ms +Jan 14 03:40:06.907: INFO: Pod "execpodnd94h": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.006789912s +Jan 14 03:40:06.907: INFO: Pod "execpodnd94h" satisfied condition "running" +Jan 14 03:40:06.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-8044 exec execpodnd94h -- /bin/sh -x -c nslookup nodeport-service.services-8044.svc.cluster.local' +Jan 14 03:40:07.075: INFO: stderr: "+ nslookup nodeport-service.services-8044.svc.cluster.local\n" +Jan 14 03:40:07.075: INFO: stdout: "Server:\t\t10.55.255.254\nAddress:\t10.55.255.254#53\n\nnodeport-service.services-8044.svc.cluster.local\tcanonical name = externalsvc.services-8044.svc.cluster.local.\nName:\texternalsvc.services-8044.svc.cluster.local\nAddress: 10.55.254.187\n\n" +STEP: deleting ReplicationController externalsvc in namespace services-8044, will wait for the garbage collector to delete the pods 01/14/23 03:40:07.075 +Jan 14 03:40:07.138: INFO: Deleting ReplicationController externalsvc took: 6.06811ms +Jan 14 03:40:07.238: INFO: Terminating ReplicationController externalsvc pods took: 100.457811ms +Jan 14 03:40:09.356: INFO: Cleaning up the NodePort to ExternalName test service +[AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 +Jan 14 03:40:09.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 +STEP: Destroying namespace "services-8044" for this suite. 01/14/23 03:40:09.372 +------------------------------ +• [SLOW TEST] [7.622 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should be able to change the type from NodePort to ExternalName [Conformance] + test/e2e/network/service.go:1557 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:40:01.756 + Jan 14 03:40:01.756: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename services 01/14/23 03:40:01.757 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:40:01.779 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:40:01.782 + [BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should be able to change the type from NodePort to ExternalName [Conformance] + test/e2e/network/service.go:1557 + STEP: creating a service nodeport-service with the type=NodePort in namespace services-8044 01/14/23 03:40:01.785 + STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service 01/14/23 03:40:01.801 + STEP: creating service externalsvc in namespace services-8044 01/14/23 03:40:01.801 + STEP: creating replication controller externalsvc in namespace services-8044 01/14/23 03:40:01.812 + I0114 03:40:01.820717 25 runners.go:193] Created replication controller with name: externalsvc, namespace: services-8044, replica count: 2 + I0114 03:40:04.871781 25 runners.go:193] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + STEP: changing the NodePort service to type=ExternalName 01/14/23 03:40:04.875 + Jan 14 03:40:04.892: INFO: Creating new exec pod + 
Jan 14 03:40:04.900: INFO: Waiting up to 5m0s for pod "execpodnd94h" in namespace "services-8044" to be "running" + Jan 14 03:40:04.903: INFO: Pod "execpodnd94h": Phase="Pending", Reason="", readiness=false. Elapsed: 3.123365ms + Jan 14 03:40:06.907: INFO: Pod "execpodnd94h": Phase="Running", Reason="", readiness=true. Elapsed: 2.006789912s + Jan 14 03:40:06.907: INFO: Pod "execpodnd94h" satisfied condition "running" + Jan 14 03:40:06.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-8044 exec execpodnd94h -- /bin/sh -x -c nslookup nodeport-service.services-8044.svc.cluster.local' + Jan 14 03:40:07.075: INFO: stderr: "+ nslookup nodeport-service.services-8044.svc.cluster.local\n" + Jan 14 03:40:07.075: INFO: stdout: "Server:\t\t10.55.255.254\nAddress:\t10.55.255.254#53\n\nnodeport-service.services-8044.svc.cluster.local\tcanonical name = externalsvc.services-8044.svc.cluster.local.\nName:\texternalsvc.services-8044.svc.cluster.local\nAddress: 10.55.254.187\n\n" + STEP: deleting ReplicationController externalsvc in namespace services-8044, will wait for the garbage collector to delete the pods 01/14/23 03:40:07.075 + Jan 14 03:40:07.138: INFO: Deleting ReplicationController externalsvc took: 6.06811ms + Jan 14 03:40:07.238: INFO: Terminating ReplicationController externalsvc pods took: 100.457811ms + Jan 14 03:40:09.356: INFO: Cleaning up the NodePort to ExternalName test service + [AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 + Jan 14 03:40:09.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 + STEP: Destroying namespace "services-8044" for this suite. 
01/14/23 03:40:09.372 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should have a working scale subresource [Conformance] + test/e2e/apps/statefulset.go:848 +[BeforeEach] [sig-apps] StatefulSet + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:40:09.378 +Jan 14 03:40:09.378: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename statefulset 01/14/23 03:40:09.379 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:40:09.392 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:40:09.394 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:98 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:113 +STEP: Creating service test in namespace statefulset-5041 01/14/23 03:40:09.396 +[It] should have a working scale subresource [Conformance] + test/e2e/apps/statefulset.go:848 +STEP: Creating statefulset ss in namespace statefulset-5041 01/14/23 03:40:09.401 +Jan 14 03:40:09.411: INFO: Found 0 stateful pods, waiting for 1 +Jan 14 03:40:19.416: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: getting scale subresource 01/14/23 03:40:19.422 +STEP: updating a scale subresource 01/14/23 03:40:19.424 +STEP: verifying the statefulset Spec.Replicas was modified 01/14/23 03:40:19.431 +STEP: Patch a scale subresource 01/14/23 03:40:19.433 +STEP: verifying the statefulset Spec.Replicas was modified 01/14/23 03:40:19.44 +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:124 +Jan 14 03:40:19.445: INFO: Deleting all statefulset in ns statefulset-5041 +Jan 14 03:40:19.448: INFO: Scaling statefulset ss to 0 +Jan 14 03:40:29.469: INFO: Waiting for statefulset status.replicas updated to 0 +Jan 14 03:40:29.472: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + test/e2e/framework/node/init/init.go:32 +Jan 14 03:40:29.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] StatefulSet + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] StatefulSet + tear down framework | framework.go:193 +STEP: Destroying namespace "statefulset-5041" for this suite. 
01/14/23 03:40:29.488 +------------------------------ +• [SLOW TEST] [20.116 seconds] +[sig-apps] StatefulSet +test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:103 + should have a working scale subresource [Conformance] + test/e2e/apps/statefulset.go:848 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] StatefulSet + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:40:09.378 + Jan 14 03:40:09.378: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename statefulset 01/14/23 03:40:09.379 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:40:09.392 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:40:09.394 + [BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:98 + [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:113 + STEP: Creating service test in namespace statefulset-5041 01/14/23 03:40:09.396 + [It] should have a working scale subresource [Conformance] + test/e2e/apps/statefulset.go:848 + STEP: Creating statefulset ss in namespace statefulset-5041 01/14/23 03:40:09.401 + Jan 14 03:40:09.411: INFO: Found 0 stateful pods, waiting for 1 + Jan 14 03:40:19.416: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true + STEP: getting scale subresource 01/14/23 03:40:19.422 + STEP: updating a scale subresource 01/14/23 03:40:19.424 + STEP: verifying the statefulset Spec.Replicas was modified 01/14/23 03:40:19.431 + STEP: Patch a scale subresource 01/14/23 03:40:19.433 + STEP: verifying the statefulset Spec.Replicas was modified 01/14/23 03:40:19.44 + [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:124 + Jan 14 03:40:19.445: INFO: Deleting all statefulset in ns statefulset-5041 + Jan 14 03:40:19.448: INFO: Scaling statefulset ss to 0 + Jan 14 03:40:29.469: INFO: Waiting for statefulset status.replicas updated to 0 + Jan 14 03:40:29.472: INFO: Deleting statefulset ss + [AfterEach] [sig-apps] StatefulSet + test/e2e/framework/node/init/init.go:32 + Jan 14 03:40:29.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] StatefulSet + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] StatefulSet + tear down framework | framework.go:193 + STEP: Destroying namespace "statefulset-5041" for this suite. 
01/14/23 03:40:29.488 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + Should recreate evicted statefulset [Conformance] + test/e2e/apps/statefulset.go:739 +[BeforeEach] [sig-apps] StatefulSet + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:40:29.495 +Jan 14 03:40:29.495: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename statefulset 01/14/23 03:40:29.495 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:40:29.508 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:40:29.51 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:98 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:113 +STEP: Creating service test in namespace statefulset-7036 01/14/23 03:40:29.513 +[It] Should recreate evicted statefulset [Conformance] + test/e2e/apps/statefulset.go:739 +STEP: Looking for a node to schedule stateful set and pod 01/14/23 03:40:29.517 +STEP: Creating pod with conflicting port in namespace statefulset-7036 01/14/23 03:40:29.523 +STEP: Waiting until pod test-pod will start running in namespace statefulset-7036 01/14/23 03:40:29.532 +Jan 14 03:40:29.532: INFO: Waiting up to 5m0s for pod "test-pod" in namespace "statefulset-7036" to be "running" +Jan 14 03:40:29.535: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.972737ms +Jan 14 03:40:31.539: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.007357715s +Jan 14 03:40:31.539: INFO: Pod "test-pod" satisfied condition "running" +STEP: Creating statefulset with conflicting port in namespace statefulset-7036 01/14/23 03:40:31.539 +STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-7036 01/14/23 03:40:31.545 +Jan 14 03:40:31.558: INFO: Observed stateful pod in namespace: statefulset-7036, name: ss-0, uid: 28238295-8717-4bdd-a86a-85cb10812388, status phase: Pending. Waiting for statefulset controller to delete. +Jan 14 03:40:31.572: INFO: Observed stateful pod in namespace: statefulset-7036, name: ss-0, uid: 28238295-8717-4bdd-a86a-85cb10812388, status phase: Failed. Waiting for statefulset controller to delete. +Jan 14 03:40:31.579: INFO: Observed stateful pod in namespace: statefulset-7036, name: ss-0, uid: 28238295-8717-4bdd-a86a-85cb10812388, status phase: Failed. Waiting for statefulset controller to delete. 
+Jan 14 03:40:31.587: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7036 +STEP: Removing pod with conflicting port in namespace statefulset-7036 01/14/23 03:40:31.587 +STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-7036 and will be in running state 01/14/23 03:40:31.606 +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:124 +Jan 14 03:40:33.614: INFO: Deleting all statefulset in ns statefulset-7036 +Jan 14 03:40:33.618: INFO: Scaling statefulset ss to 0 +Jan 14 03:40:43.639: INFO: Waiting for statefulset status.replicas updated to 0 +Jan 14 03:40:43.642: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + test/e2e/framework/node/init/init.go:32 +Jan 14 03:40:43.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] StatefulSet + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] StatefulSet + tear down framework | framework.go:193 +STEP: Destroying namespace "statefulset-7036" for this suite. 01/14/23 03:40:43.659 +------------------------------ +• [SLOW TEST] [14.171 seconds] +[sig-apps] StatefulSet +test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:103 + Should recreate evicted statefulset [Conformance] + test/e2e/apps/statefulset.go:739 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] StatefulSet + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:40:29.495 + Jan 14 03:40:29.495: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename statefulset 01/14/23 03:40:29.495 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:40:29.508 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:40:29.51 + [BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:98 + [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:113 + STEP: Creating service test in namespace statefulset-7036 01/14/23 03:40:29.513 + [It] Should recreate evicted statefulset [Conformance] + test/e2e/apps/statefulset.go:739 + STEP: Looking for a node to schedule stateful set and pod 01/14/23 03:40:29.517 + STEP: Creating pod with conflicting port in namespace statefulset-7036 01/14/23 03:40:29.523 + STEP: Waiting until pod test-pod will start running in namespace statefulset-7036 01/14/23 03:40:29.532 + Jan 14 03:40:29.532: INFO: Waiting up to 5m0s for pod "test-pod" in namespace "statefulset-7036" to be "running" + Jan 14 03:40:29.535: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.972737ms + Jan 14 03:40:31.539: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.007357715s + Jan 14 03:40:31.539: INFO: Pod "test-pod" satisfied condition "running" + STEP: Creating statefulset with conflicting port in namespace statefulset-7036 01/14/23 03:40:31.539 + STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-7036 01/14/23 03:40:31.545 + Jan 14 03:40:31.558: INFO: Observed stateful pod in namespace: statefulset-7036, name: ss-0, uid: 28238295-8717-4bdd-a86a-85cb10812388, status phase: Pending. 
Waiting for statefulset controller to delete. + Jan 14 03:40:31.572: INFO: Observed stateful pod in namespace: statefulset-7036, name: ss-0, uid: 28238295-8717-4bdd-a86a-85cb10812388, status phase: Failed. Waiting for statefulset controller to delete. + Jan 14 03:40:31.579: INFO: Observed stateful pod in namespace: statefulset-7036, name: ss-0, uid: 28238295-8717-4bdd-a86a-85cb10812388, status phase: Failed. Waiting for statefulset controller to delete. + Jan 14 03:40:31.587: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7036 + STEP: Removing pod with conflicting port in namespace statefulset-7036 01/14/23 03:40:31.587 + STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-7036 and will be in running state 01/14/23 03:40:31.606 + [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:124 + Jan 14 03:40:33.614: INFO: Deleting all statefulset in ns statefulset-7036 + Jan 14 03:40:33.618: INFO: Scaling statefulset ss to 0 + Jan 14 03:40:43.639: INFO: Waiting for statefulset status.replicas updated to 0 + Jan 14 03:40:43.642: INFO: Deleting statefulset ss + [AfterEach] [sig-apps] StatefulSet + test/e2e/framework/node/init/init.go:32 + Jan 14 03:40:43.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] StatefulSet + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] StatefulSet + tear down framework | framework.go:193 + STEP: Destroying namespace "statefulset-7036" for this suite. 01/14/23 03:40:43.659 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + test/e2e/apimachinery/garbage_collector.go:550 +[BeforeEach] [sig-api-machinery] Garbage collector + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:40:43.667 +Jan 14 03:40:43.667: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename gc 01/14/23 03:40:43.668 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:40:43.683 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:40:43.685 +[BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:31 +[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + test/e2e/apimachinery/garbage_collector.go:550 +STEP: create the deployment 01/14/23 03:40:43.687 +STEP: Wait for the Deployment to create new ReplicaSet 01/14/23 03:40:43.692 +STEP: delete the deployment 01/14/23 03:40:44.203 +STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs 01/14/23 03:40:44.211 +STEP: Gathering metrics 01/14/23 03:40:44.729 +Jan 14 03:40:44.758: INFO: Waiting up to 5m0s for pod "kube-controller-manager-10.0.1.231" in namespace "kube-system" to be "running and ready" +Jan 14 03:40:44.761: INFO: Pod "kube-controller-manager-10.0.1.231": Phase="Running", Reason="", readiness=true. 
Elapsed: 3.106483ms +Jan 14 03:40:44.761: INFO: The phase of Pod kube-controller-manager-10.0.1.231 is Running (Ready = true) +Jan 14 03:40:44.761: INFO: Pod "kube-controller-manager-10.0.1.231" satisfied condition "running and ready" +Jan 14 03:40:44.814: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/node/init/init.go:32 +Jan 14 03:40:44.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector + tear down framework | framework.go:193 +STEP: Destroying namespace "gc-6749" for this suite. 01/14/23 03:40:44.82 +------------------------------ +• [1.160 seconds] +[sig-api-machinery] Garbage collector +test/e2e/apimachinery/framework.go:23 + should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + test/e2e/apimachinery/garbage_collector.go:550 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Garbage collector + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:40:43.667 + Jan 14 03:40:43.667: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename gc 01/14/23 03:40:43.668 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:40:43.683 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:40:43.685 + [BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:31 + [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + test/e2e/apimachinery/garbage_collector.go:550 + STEP: create the deployment 01/14/23 03:40:43.687 + STEP: Wait for the Deployment to create new ReplicaSet 01/14/23 03:40:43.692 + STEP: delete the deployment 01/14/23 03:40:44.203 + STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs 01/14/23 03:40:44.211 + STEP: Gathering metrics 01/14/23 03:40:44.729 + Jan 14 03:40:44.758: INFO: Waiting up to 5m0s for pod "kube-controller-manager-10.0.1.231" in namespace "kube-system" to be "running and ready" + Jan 14 03:40:44.761: INFO: Pod "kube-controller-manager-10.0.1.231": Phase="Running", Reason="", readiness=true. 
Elapsed: 3.106483ms + Jan 14 03:40:44.761: INFO: The phase of Pod kube-controller-manager-10.0.1.231 is Running (Ready = true) + Jan 14 03:40:44.761: INFO: Pod "kube-controller-manager-10.0.1.231" satisfied condition "running and ready" + Jan 14 03:40:44.814: INFO: For apiserver_request_total: + For apiserver_request_latency_seconds: + For apiserver_init_events_total: + For garbage_collector_attempt_to_delete_queue_latency: + For garbage_collector_attempt_to_delete_work_duration: + For garbage_collector_attempt_to_orphan_queue_latency: + For garbage_collector_attempt_to_orphan_work_duration: + For garbage_collector_dirty_processing_latency_microseconds: + For garbage_collector_event_processing_latency_microseconds: + For garbage_collector_graph_changes_queue_latency: + For garbage_collector_graph_changes_work_duration: + For garbage_collector_orphan_processing_latency_microseconds: + For namespace_queue_latency: + For namespace_queue_latency_sum: + For namespace_queue_latency_count: + For namespace_retries: + For namespace_work_duration: + For namespace_work_duration_sum: + For namespace_work_duration_count: + For function_duration_seconds: + For errors_total: + For evicted_pods_total: + + [AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/node/init/init.go:32 + Jan 14 03:40:44.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + tear down framework | framework.go:193 + STEP: Destroying namespace "gc-6749" for this suite. 01/14/23 03:40:44.82 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:46 +[BeforeEach] [sig-storage] Projected secret + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:40:44.827 +Jan 14 03:40:44.827: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename projected 01/14/23 03:40:44.828 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:40:44.841 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:40:44.844 +[BeforeEach] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:46 +STEP: Creating projection with secret that has name projected-secret-test-43632bb8-7888-4735-844b-10a8550ba112 01/14/23 03:40:44.846 +STEP: Creating a pod to test consume secrets 01/14/23 03:40:44.852 +Jan 14 03:40:44.862: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4ea94bdf-35b2-4719-9104-e92e214ca8ba" in namespace "projected-3463" to be "Succeeded or Failed" +Jan 14 03:40:44.865: INFO: Pod "pod-projected-secrets-4ea94bdf-35b2-4719-9104-e92e214ca8ba": Phase="Pending", Reason="", readiness=false. Elapsed: 3.088848ms +Jan 14 03:40:46.870: INFO: Pod "pod-projected-secrets-4ea94bdf-35b2-4719-9104-e92e214ca8ba": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008302473s +Jan 14 03:40:48.869: INFO: Pod "pod-projected-secrets-4ea94bdf-35b2-4719-9104-e92e214ca8ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007389021s +STEP: Saw pod success 01/14/23 03:40:48.869 +Jan 14 03:40:48.869: INFO: Pod "pod-projected-secrets-4ea94bdf-35b2-4719-9104-e92e214ca8ba" satisfied condition "Succeeded or Failed" +Jan 14 03:40:48.873: INFO: Trying to get logs from node 10.0.1.106 pod pod-projected-secrets-4ea94bdf-35b2-4719-9104-e92e214ca8ba container projected-secret-volume-test: +STEP: delete the pod 01/14/23 03:40:48.886 +Jan 14 03:40:48.900: INFO: Waiting for pod pod-projected-secrets-4ea94bdf-35b2-4719-9104-e92e214ca8ba to disappear +Jan 14 03:40:48.902: INFO: Pod pod-projected-secrets-4ea94bdf-35b2-4719-9104-e92e214ca8ba no longer exists +[AfterEach] [sig-storage] Projected secret + test/e2e/framework/node/init/init.go:32 +Jan 14 03:40:48.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected secret + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected secret + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-3463" for this suite. 01/14/23 03:40:48.907 +------------------------------ +• [4.087 seconds] +[sig-storage] Projected secret +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:46 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected secret + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:40:44.827 + Jan 14 03:40:44.827: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename projected 01/14/23 03:40:44.828 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:40:44.841 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:40:44.844 + [BeforeEach] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:46 + STEP: Creating projection with secret that has name projected-secret-test-43632bb8-7888-4735-844b-10a8550ba112 01/14/23 03:40:44.846 + STEP: Creating a pod to test consume secrets 01/14/23 03:40:44.852 + Jan 14 03:40:44.862: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4ea94bdf-35b2-4719-9104-e92e214ca8ba" in namespace "projected-3463" to be "Succeeded or Failed" + Jan 14 03:40:44.865: INFO: Pod "pod-projected-secrets-4ea94bdf-35b2-4719-9104-e92e214ca8ba": Phase="Pending", Reason="", readiness=false. Elapsed: 3.088848ms + Jan 14 03:40:46.870: INFO: Pod "pod-projected-secrets-4ea94bdf-35b2-4719-9104-e92e214ca8ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008302473s + Jan 14 03:40:48.869: INFO: Pod "pod-projected-secrets-4ea94bdf-35b2-4719-9104-e92e214ca8ba": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.007389021s + STEP: Saw pod success 01/14/23 03:40:48.869 + Jan 14 03:40:48.869: INFO: Pod "pod-projected-secrets-4ea94bdf-35b2-4719-9104-e92e214ca8ba" satisfied condition "Succeeded or Failed" + Jan 14 03:40:48.873: INFO: Trying to get logs from node 10.0.1.106 pod pod-projected-secrets-4ea94bdf-35b2-4719-9104-e92e214ca8ba container projected-secret-volume-test: + STEP: delete the pod 01/14/23 03:40:48.886 + Jan 14 03:40:48.900: INFO: Waiting for pod pod-projected-secrets-4ea94bdf-35b2-4719-9104-e92e214ca8ba to disappear + Jan 14 03:40:48.902: INFO: Pod pod-projected-secrets-4ea94bdf-35b2-4719-9104-e92e214ca8ba no longer exists + [AfterEach] [sig-storage] Projected secret + test/e2e/framework/node/init/init.go:32 + Jan 14 03:40:48.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected secret + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected secret + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-3463" for this suite. 01/14/23 03:40:48.907 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-node] Kubelet when scheduling a busybox command in a pod + should print the output to logs [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:52 +[BeforeEach] [sig-node] Kubelet + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:40:48.914 +Jan 14 03:40:48.915: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename kubelet-test 01/14/23 03:40:48.915 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:40:48.928 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:40:48.931 +[BeforeEach] [sig-node] Kubelet + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:41 +[It] should print the output to logs [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:52 +Jan 14 03:40:48.945: INFO: Waiting up to 5m0s for pod "busybox-scheduling-bd632d34-3187-4edb-8396-dd78b0c65647" in namespace "kubelet-test-4640" to be "running and ready" +Jan 14 03:40:48.949: INFO: Pod "busybox-scheduling-bd632d34-3187-4edb-8396-dd78b0c65647": Phase="Pending", Reason="", readiness=false. Elapsed: 3.608887ms +Jan 14 03:40:48.949: INFO: The phase of Pod busybox-scheduling-bd632d34-3187-4edb-8396-dd78b0c65647 is Pending, waiting for it to be Running (with Ready = true) +Jan 14 03:40:50.954: INFO: Pod "busybox-scheduling-bd632d34-3187-4edb-8396-dd78b0c65647": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008885696s +Jan 14 03:40:50.954: INFO: The phase of Pod busybox-scheduling-bd632d34-3187-4edb-8396-dd78b0c65647 is Running (Ready = true) +Jan 14 03:40:50.954: INFO: Pod "busybox-scheduling-bd632d34-3187-4edb-8396-dd78b0c65647" satisfied condition "running and ready" +[AfterEach] [sig-node] Kubelet + test/e2e/framework/node/init/init.go:32 +Jan 14 03:40:50.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Kubelet + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Kubelet + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Kubelet + tear down framework | framework.go:193 +STEP: Destroying namespace "kubelet-test-4640" for this suite. 01/14/23 03:40:50.976 +------------------------------ +• [2.068 seconds] +[sig-node] Kubelet +test/e2e/common/node/framework.go:23 + when scheduling a busybox command in a pod + test/e2e/common/node/kubelet.go:44 + should print the output to logs [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:52 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Kubelet + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:40:48.914 + Jan 14 03:40:48.915: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename kubelet-test 01/14/23 03:40:48.915 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:40:48.928 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:40:48.931 + [BeforeEach] [sig-node] Kubelet + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:41 + [It] should print the output to logs [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:52 + Jan 14 03:40:48.945: INFO: Waiting up to 5m0s for pod "busybox-scheduling-bd632d34-3187-4edb-8396-dd78b0c65647" in namespace "kubelet-test-4640" to be "running and ready" + Jan 14 03:40:48.949: INFO: Pod "busybox-scheduling-bd632d34-3187-4edb-8396-dd78b0c65647": Phase="Pending", Reason="", readiness=false. Elapsed: 3.608887ms + Jan 14 03:40:48.949: INFO: The phase of Pod busybox-scheduling-bd632d34-3187-4edb-8396-dd78b0c65647 is Pending, waiting for it to be Running (with Ready = true) + Jan 14 03:40:50.954: INFO: Pod "busybox-scheduling-bd632d34-3187-4edb-8396-dd78b0c65647": Phase="Running", Reason="", readiness=true. Elapsed: 2.008885696s + Jan 14 03:40:50.954: INFO: The phase of Pod busybox-scheduling-bd632d34-3187-4edb-8396-dd78b0c65647 is Running (Ready = true) + Jan 14 03:40:50.954: INFO: Pod "busybox-scheduling-bd632d34-3187-4edb-8396-dd78b0c65647" satisfied condition "running and ready" + [AfterEach] [sig-node] Kubelet + test/e2e/framework/node/init/init.go:32 + Jan 14 03:40:50.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Kubelet + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Kubelet + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Kubelet + tear down framework | framework.go:193 + STEP: Destroying namespace "kubelet-test-4640" for this suite. 
01/14/23 03:40:50.976 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should manage the lifecycle of a ResourceQuota [Conformance] + test/e2e/apimachinery/resource_quota.go:943 +[BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:40:50.984 +Jan 14 03:40:50.984: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename resourcequota 01/14/23 03:40:50.985 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:40:51.002 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:40:51.004 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 +[It] should manage the lifecycle of a ResourceQuota [Conformance] + test/e2e/apimachinery/resource_quota.go:943 +STEP: Creating a ResourceQuota 01/14/23 03:40:51.007 +STEP: Getting a ResourceQuota 01/14/23 03:40:51.013 +STEP: Listing all ResourceQuotas with LabelSelector 01/14/23 03:40:51.017 +STEP: Patching the ResourceQuota 01/14/23 03:40:51.02 +STEP: Deleting a Collection of ResourceQuotas 01/14/23 03:40:51.025 +STEP: Verifying the deleted ResourceQuota 01/14/23 03:40:51.037 +[AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/node/init/init.go:32 +Jan 14 03:40:51.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + tear down framework | framework.go:193 +STEP: Destroying namespace "resourcequota-610" for this suite. 
01/14/23 03:40:51.044 +------------------------------ +• [0.066 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should manage the lifecycle of a ResourceQuota [Conformance] + test/e2e/apimachinery/resource_quota.go:943 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:40:50.984 + Jan 14 03:40:50.984: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename resourcequota 01/14/23 03:40:50.985 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:40:51.002 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:40:51.004 + [BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 + [It] should manage the lifecycle of a ResourceQuota [Conformance] + test/e2e/apimachinery/resource_quota.go:943 + STEP: Creating a ResourceQuota 01/14/23 03:40:51.007 + STEP: Getting a ResourceQuota 01/14/23 03:40:51.013 + STEP: Listing all ResourceQuotas with LabelSelector 01/14/23 03:40:51.017 + STEP: Patching the ResourceQuota 01/14/23 03:40:51.02 + STEP: Deleting a Collection of ResourceQuotas 01/14/23 03:40:51.025 + STEP: Verifying the deleted ResourceQuota 01/14/23 03:40:51.037 + [AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/node/init/init.go:32 + Jan 14 03:40:51.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + tear down framework | framework.go:193 + STEP: Destroying namespace "resourcequota-610" for this suite. 
01/14/23 03:40:51.044 + << End Captured GinkgoWriter Output +------------------------------ +[sig-network] DNS + should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + test/e2e/network/dns.go:193 +[BeforeEach] [sig-network] DNS + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:40:51.05 +Jan 14 03:40:51.050: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename dns 01/14/23 03:40:51.051 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:40:51.065 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:40:51.067 +[BeforeEach] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:31 +[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + test/e2e/network/dns.go:193 +STEP: Creating a test headless service 01/14/23 03:40:51.069 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5463 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5463;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5463 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5463;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5463.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5463.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5463.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5463.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5463.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5463.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5463.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5463.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5463.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5463.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5463.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5463.svc;check="$$(dig +notcp +noall +answer +search 25.255.55.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.55.255.25_udp@PTR;check="$$(dig +tcp +noall +answer +search 25.255.55.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.55.255.25_tcp@PTR;sleep 1; done + 01/14/23 03:40:51.085 +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5463 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5463;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5463 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5463;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5463.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5463.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5463.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5463.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5463.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5463.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5463.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5463.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5463.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5463.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5463.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5463.svc;check="$$(dig +notcp +noall +answer +search 25.255.55.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.55.255.25_udp@PTR;check="$$(dig +tcp +noall +answer +search 25.255.55.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.55.255.25_tcp@PTR;sleep 1; done + 01/14/23 03:40:51.085 +STEP: creating a pod to probe DNS 01/14/23 03:40:51.085 +STEP: submitting the pod to kubernetes 01/14/23 03:40:51.085 +Jan 14 03:40:51.105: INFO: Waiting up to 15m0s for pod "dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d" in namespace "dns-5463" to be "running" +Jan 14 03:40:51.109: INFO: Pod "dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.490171ms +Jan 14 03:40:53.113: INFO: Pod "dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.00808251s +Jan 14 03:40:53.113: INFO: Pod "dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d" satisfied condition "running" +STEP: retrieving the pod 01/14/23 03:40:53.113 +STEP: looking for the results for each expected name from probers 01/14/23 03:40:53.117 +Jan 14 03:40:53.121: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5463/dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d: the server could not find the requested resource (get pods dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d) +Jan 14 03:40:53.125: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5463/dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d: the server could not find the requested resource (get pods dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d) +Jan 14 03:40:53.128: INFO: Unable to read wheezy_udp@dns-test-service.dns-5463 from pod dns-5463/dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d: the server could not find the requested resource (get pods dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d) +Jan 14 03:40:53.131: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5463 from pod dns-5463/dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d: the server could not find the requested resource (get pods dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d) +Jan 14 03:40:53.135: INFO: Unable to read wheezy_udp@dns-test-service.dns-5463.svc from pod dns-5463/dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d: the server could not find the requested resource (get pods dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d) +Jan 14 03:40:53.138: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5463.svc from pod dns-5463/dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d: the server could not find the requested resource (get pods dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d) +Jan 14 03:40:53.142: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5463.svc from pod dns-5463/dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d: the server could not find the requested resource (get pods dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d) +Jan 14 03:40:53.145: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5463.svc from pod dns-5463/dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d: the server could not find the requested resource (get pods dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d) +Jan 14 03:40:53.161: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5463/dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d: the server could not find the requested resource (get pods dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d) +Jan 14 03:40:53.164: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5463/dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d: the server could not find the requested resource (get pods dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d) +Jan 14 03:40:53.167: INFO: Unable to read jessie_udp@dns-test-service.dns-5463 from pod dns-5463/dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d: the server could not find the requested resource (get pods dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d) +Jan 14 03:40:53.170: INFO: Unable to read jessie_tcp@dns-test-service.dns-5463 from pod dns-5463/dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d: the server could not find the requested resource (get pods dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d) +Jan 14 03:40:53.173: INFO: Unable to read jessie_udp@dns-test-service.dns-5463.svc from pod dns-5463/dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d: the server could not find the requested resource (get pods dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d) +Jan 14 
03:40:53.176: INFO: Unable to read jessie_tcp@dns-test-service.dns-5463.svc from pod dns-5463/dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d: the server could not find the requested resource (get pods dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d) +Jan 14 03:40:53.179: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5463.svc from pod dns-5463/dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d: the server could not find the requested resource (get pods dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d) +Jan 14 03:40:53.181: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5463.svc from pod dns-5463/dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d: the server could not find the requested resource (get pods dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d) +Jan 14 03:40:53.193: INFO: Lookups using dns-5463/dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5463 wheezy_tcp@dns-test-service.dns-5463 wheezy_udp@dns-test-service.dns-5463.svc wheezy_tcp@dns-test-service.dns-5463.svc wheezy_udp@_http._tcp.dns-test-service.dns-5463.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5463.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5463 jessie_tcp@dns-test-service.dns-5463 jessie_udp@dns-test-service.dns-5463.svc jessie_tcp@dns-test-service.dns-5463.svc jessie_udp@_http._tcp.dns-test-service.dns-5463.svc jessie_tcp@_http._tcp.dns-test-service.dns-5463.svc] + +Jan 14 03:40:58.265: INFO: DNS probes using dns-5463/dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d succeeded + +STEP: deleting the pod 01/14/23 03:40:58.265 +STEP: deleting the test service 01/14/23 03:40:58.292 +STEP: deleting the test headless service 01/14/23 03:40:58.396 +[AfterEach] [sig-network] DNS + test/e2e/framework/node/init/init.go:32 +Jan 14 03:40:58.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] DNS + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] DNS + tear down framework | framework.go:193 +STEP: Destroying namespace "dns-5463" for this suite. 
01/14/23 03:40:58.484 +------------------------------ +• [SLOW TEST] [7.452 seconds] +[sig-network] DNS +test/e2e/network/common/framework.go:23 + should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + test/e2e/network/dns.go:193 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] DNS + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:40:51.05 + Jan 14 03:40:51.050: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename dns 01/14/23 03:40:51.051 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:40:51.065 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:40:51.067 + [BeforeEach] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:31 + [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] + test/e2e/network/dns.go:193 + STEP: Creating a test headless service 01/14/23 03:40:51.069 + STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5463 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5463;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5463 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5463;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5463.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5463.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5463.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5463.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5463.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5463.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5463.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5463.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5463.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5463.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5463.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5463.svc;check="$$(dig +notcp +noall +answer +search 25.255.55.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.55.255.25_udp@PTR;check="$$(dig +tcp +noall +answer +search 25.255.55.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.55.255.25_tcp@PTR;sleep 1; done + 01/14/23 03:40:51.085 + STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5463 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5463;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5463 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5463;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5463.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5463.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5463.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5463.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5463.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5463.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5463.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5463.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5463.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5463.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5463.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5463.svc;check="$$(dig +notcp +noall +answer +search 25.255.55.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.55.255.25_udp@PTR;check="$$(dig +tcp +noall +answer +search 25.255.55.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.55.255.25_tcp@PTR;sleep 1; done + 01/14/23 03:40:51.085 + STEP: creating a pod to probe DNS 01/14/23 03:40:51.085 + STEP: submitting the pod to kubernetes 01/14/23 03:40:51.085 + Jan 14 03:40:51.105: INFO: Waiting up to 15m0s for pod "dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d" in namespace "dns-5463" to be "running" + Jan 14 03:40:51.109: INFO: Pod "dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.490171ms + Jan 14 03:40:53.113: INFO: Pod "dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.00808251s + Jan 14 03:40:53.113: INFO: Pod "dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d" satisfied condition "running" + STEP: retrieving the pod 01/14/23 03:40:53.113 + STEP: looking for the results for each expected name from probers 01/14/23 03:40:53.117 + Jan 14 03:40:53.121: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5463/dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d: the server could not find the requested resource (get pods dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d) + Jan 14 03:40:53.125: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5463/dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d: the server could not find the requested resource (get pods dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d) + Jan 14 03:40:53.128: INFO: Unable to read wheezy_udp@dns-test-service.dns-5463 from pod dns-5463/dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d: the server could not find the requested resource (get pods dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d) + Jan 14 03:40:53.131: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5463 from pod dns-5463/dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d: the server could not find the requested resource (get pods dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d) + Jan 14 03:40:53.135: INFO: Unable to read wheezy_udp@dns-test-service.dns-5463.svc from pod dns-5463/dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d: the server could not find the requested resource (get pods dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d) + Jan 14 03:40:53.138: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5463.svc from pod dns-5463/dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d: the server could not find the requested resource (get pods dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d) + Jan 14 03:40:53.142: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5463.svc from pod dns-5463/dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d: the server could not find the requested resource (get pods dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d) + Jan 14 03:40:53.145: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5463.svc from pod dns-5463/dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d: the server could not find the requested resource (get pods dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d) + Jan 14 03:40:53.161: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5463/dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d: the server could not find the requested resource (get pods dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d) + Jan 14 03:40:53.164: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5463/dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d: the server could not find the requested resource (get pods dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d) + Jan 14 03:40:53.167: INFO: Unable to read jessie_udp@dns-test-service.dns-5463 from pod dns-5463/dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d: the server could not find the requested resource (get pods dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d) + Jan 14 03:40:53.170: INFO: Unable to read jessie_tcp@dns-test-service.dns-5463 from pod dns-5463/dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d: the server could not find the requested resource (get pods dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d) + Jan 14 03:40:53.173: INFO: Unable to read jessie_udp@dns-test-service.dns-5463.svc from pod dns-5463/dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d: the server could not find the requested resource (get pods dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d) + 
Jan 14 03:40:53.176: INFO: Unable to read jessie_tcp@dns-test-service.dns-5463.svc from pod dns-5463/dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d: the server could not find the requested resource (get pods dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d) + Jan 14 03:40:53.179: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5463.svc from pod dns-5463/dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d: the server could not find the requested resource (get pods dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d) + Jan 14 03:40:53.181: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5463.svc from pod dns-5463/dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d: the server could not find the requested resource (get pods dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d) + Jan 14 03:40:53.193: INFO: Lookups using dns-5463/dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5463 wheezy_tcp@dns-test-service.dns-5463 wheezy_udp@dns-test-service.dns-5463.svc wheezy_tcp@dns-test-service.dns-5463.svc wheezy_udp@_http._tcp.dns-test-service.dns-5463.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5463.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5463 jessie_tcp@dns-test-service.dns-5463 jessie_udp@dns-test-service.dns-5463.svc jessie_tcp@dns-test-service.dns-5463.svc jessie_udp@_http._tcp.dns-test-service.dns-5463.svc jessie_tcp@_http._tcp.dns-test-service.dns-5463.svc] + + Jan 14 03:40:58.265: INFO: DNS probes using dns-5463/dns-test-7116dfd6-ddb4-433b-88ce-c2ca9a64d55d succeeded + + STEP: deleting the pod 01/14/23 03:40:58.265 + STEP: deleting the test service 01/14/23 03:40:58.292 + STEP: deleting the test headless service 01/14/23 03:40:58.396 + [AfterEach] [sig-network] DNS + test/e2e/framework/node/init/init.go:32 + Jan 14 03:40:58.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] DNS + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] DNS + tear down framework | framework.go:193 + STEP: Destroying namespace "dns-5463" for this suite. 
01/14/23 03:40:58.484 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl version + should check is all data is printed [Conformance] + test/e2e/kubectl/kubectl.go:1685 +[BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:40:58.505 +Jan 14 03:40:58.505: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename kubectl 01/14/23 03:40:58.506 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:40:58.536 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:40:58.538 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[It] should check is all data is printed [Conformance] + test/e2e/kubectl/kubectl.go:1685 +Jan 14 03:40:58.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-853 version' +Jan 14 03:40:58.598: INFO: stderr: "WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.\n" +Jan 14 03:40:58.598: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"26\", GitVersion:\"v1.26.0\", GitCommit:\"b46a3f887ca979b1a5d14fd39cb1af43e7e5d12d\", GitTreeState:\"clean\", BuildDate:\"2022-12-08T19:58:30Z\", GoVersion:\"go1.19.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nKustomize Version: v4.5.7\nServer Version: version.Info{Major:\"1\", Minor:\"26+\", GitVersion:\"v1.26.1-tke.1-rc1\", GitCommit:\"3ad1340aa1dc964e00d8a3ea8ce31275a5f782bb\", GitTreeState:\"clean\", BuildDate:\"2023-01-06T14:43:33Z\", GoVersion:\"go1.19.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 +Jan 14 03:40:58.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 +STEP: Destroying namespace "kubectl-853" for this suite. 
01/14/23 03:40:58.604 +------------------------------ +• [0.107 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl version + test/e2e/kubectl/kubectl.go:1679 + should check is all data is printed [Conformance] + test/e2e/kubectl/kubectl.go:1685 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:40:58.505 + Jan 14 03:40:58.505: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename kubectl 01/14/23 03:40:58.506 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:40:58.536 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:40:58.538 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [It] should check is all data is printed [Conformance] + test/e2e/kubectl/kubectl.go:1685 + Jan 14 03:40:58.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-853 version' + Jan 14 03:40:58.598: INFO: stderr: "WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.\n" + Jan 14 03:40:58.598: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"26\", GitVersion:\"v1.26.0\", GitCommit:\"b46a3f887ca979b1a5d14fd39cb1af43e7e5d12d\", GitTreeState:\"clean\", BuildDate:\"2022-12-08T19:58:30Z\", GoVersion:\"go1.19.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nKustomize Version: v4.5.7\nServer Version: version.Info{Major:\"1\", Minor:\"26+\", GitVersion:\"v1.26.1-tke.1-rc1\", GitCommit:\"3ad1340aa1dc964e00d8a3ea8ce31275a5f782bb\", GitTreeState:\"clean\", BuildDate:\"2023-01-06T14:43:33Z\", GoVersion:\"go1.19.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 + Jan 14 03:40:58.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 + STEP: Destroying namespace "kubectl-853" for this suite. 
01/14/23 03:40:58.604 + << End Captured GinkgoWriter Output +------------------------------ +[sig-apps] Deployment + should validate Deployment Status endpoints [Conformance] + test/e2e/apps/deployment.go:479 +[BeforeEach] [sig-apps] Deployment + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:40:58.612 +Jan 14 03:40:58.612: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename deployment 01/14/23 03:40:58.613 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:40:58.624 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:40:58.626 +[BeforeEach] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 +[It] should validate Deployment Status endpoints [Conformance] + test/e2e/apps/deployment.go:479 +STEP: creating a Deployment 01/14/23 03:40:58.632 +Jan 14 03:40:58.632: INFO: Creating simple deployment test-deployment-7hpsq +Jan 14 03:40:58.646: INFO: deployment "test-deployment-7hpsq" doesn't have the required revision set +STEP: Getting /status 01/14/23 03:41:00.659 +Jan 14 03:41:00.664: INFO: Deployment test-deployment-7hpsq has Conditions: [{Available True 2023-01-14 03:40:59 +0000 UTC 2023-01-14 03:40:59 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2023-01-14 03:40:59 +0000 UTC 2023-01-14 03:40:58 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-7hpsq-54bc444df" has successfully progressed.}] +STEP: updating Deployment Status 01/14/23 03:41:00.664 +Jan 14 03:41:00.676: INFO: updatedStatus.Conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 3, 40, 59, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 3, 40, 59, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 3, 40, 59, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 3, 40, 58, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"test-deployment-7hpsq-54bc444df\" has successfully progressed."}, v1.DeploymentCondition{Type:"StatusUpdate", Status:"True", LastUpdateTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the Deployment status to be updated 01/14/23 03:41:00.676 +Jan 14 03:41:00.678: INFO: Observed &Deployment event: ADDED +Jan 14 03:41:00.678: INFO: Observed Deployment test-deployment-7hpsq in namespace deployment-7362 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-14 03:40:58 +0000 UTC 2023-01-14 03:40:58 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-7hpsq-54bc444df"} +Jan 14 03:41:00.678: INFO: Observed &Deployment event: MODIFIED +Jan 14 03:41:00.678: INFO: Observed Deployment test-deployment-7hpsq in namespace deployment-7362 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-14 03:40:58 +0000 UTC 2023-01-14 03:40:58 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-7hpsq-54bc444df"} +Jan 14 03:41:00.678: INFO: Observed Deployment test-deployment-7hpsq in namespace 
deployment-7362 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-01-14 03:40:58 +0000 UTC 2023-01-14 03:40:58 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Jan 14 03:41:00.678: INFO: Observed &Deployment event: MODIFIED +Jan 14 03:41:00.678: INFO: Observed Deployment test-deployment-7hpsq in namespace deployment-7362 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-01-14 03:40:58 +0000 UTC 2023-01-14 03:40:58 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Jan 14 03:41:00.678: INFO: Observed Deployment test-deployment-7hpsq in namespace deployment-7362 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-14 03:40:58 +0000 UTC 2023-01-14 03:40:58 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-7hpsq-54bc444df" is progressing.} +Jan 14 03:41:00.678: INFO: Observed &Deployment event: MODIFIED +Jan 14 03:41:00.678: INFO: Observed Deployment test-deployment-7hpsq in namespace deployment-7362 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-01-14 03:40:59 +0000 UTC 2023-01-14 03:40:59 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Jan 14 03:41:00.678: INFO: Observed Deployment test-deployment-7hpsq in namespace deployment-7362 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-14 03:40:59 +0000 UTC 2023-01-14 03:40:58 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-7hpsq-54bc444df" has successfully progressed.} +Jan 14 03:41:00.678: INFO: Observed &Deployment event: MODIFIED +Jan 14 03:41:00.678: INFO: Observed Deployment test-deployment-7hpsq in namespace deployment-7362 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-01-14 03:40:59 +0000 UTC 2023-01-14 03:40:59 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Jan 14 03:41:00.678: INFO: Observed Deployment test-deployment-7hpsq in namespace deployment-7362 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-14 03:40:59 +0000 UTC 2023-01-14 03:40:58 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-7hpsq-54bc444df" has successfully progressed.} +Jan 14 03:41:00.678: INFO: Found Deployment test-deployment-7hpsq in namespace deployment-7362 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Jan 14 03:41:00.678: INFO: Deployment test-deployment-7hpsq has an updated status +STEP: patching the Statefulset Status 01/14/23 03:41:00.678 +Jan 14 03:41:00.679: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} +Jan 14 03:41:00.686: INFO: Patched status conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"StatusPatched", Status:"True", LastUpdateTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} +STEP: watching for the Deployment status to be patched 01/14/23 03:41:00.686 +Jan 14 03:41:00.688: INFO: Observed &Deployment event: ADDED +Jan 14 03:41:00.688: INFO: Observed deployment test-deployment-7hpsq in namespace deployment-7362 with annotations: 
map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-14 03:40:58 +0000 UTC 2023-01-14 03:40:58 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-7hpsq-54bc444df"} +Jan 14 03:41:00.688: INFO: Observed &Deployment event: MODIFIED +Jan 14 03:41:00.688: INFO: Observed deployment test-deployment-7hpsq in namespace deployment-7362 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-14 03:40:58 +0000 UTC 2023-01-14 03:40:58 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-7hpsq-54bc444df"} +Jan 14 03:41:00.688: INFO: Observed deployment test-deployment-7hpsq in namespace deployment-7362 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-01-14 03:40:58 +0000 UTC 2023-01-14 03:40:58 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Jan 14 03:41:00.688: INFO: Observed &Deployment event: MODIFIED +Jan 14 03:41:00.688: INFO: Observed deployment test-deployment-7hpsq in namespace deployment-7362 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-01-14 03:40:58 +0000 UTC 2023-01-14 03:40:58 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} +Jan 14 03:41:00.688: INFO: Observed deployment test-deployment-7hpsq in namespace deployment-7362 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-14 03:40:58 +0000 UTC 2023-01-14 03:40:58 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-7hpsq-54bc444df" is progressing.} +Jan 14 03:41:00.688: INFO: Observed &Deployment event: MODIFIED +Jan 14 03:41:00.688: INFO: Observed deployment test-deployment-7hpsq in namespace deployment-7362 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-01-14 03:40:59 +0000 UTC 2023-01-14 03:40:59 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Jan 14 03:41:00.688: INFO: Observed deployment test-deployment-7hpsq in namespace deployment-7362 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-14 03:40:59 +0000 UTC 2023-01-14 03:40:58 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-7hpsq-54bc444df" has successfully progressed.} +Jan 14 03:41:00.689: INFO: Observed &Deployment event: MODIFIED +Jan 14 03:41:00.689: INFO: Observed deployment test-deployment-7hpsq in namespace deployment-7362 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-01-14 03:40:59 +0000 UTC 2023-01-14 03:40:59 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} +Jan 14 03:41:00.689: INFO: Observed deployment test-deployment-7hpsq in namespace deployment-7362 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-14 03:40:59 +0000 UTC 2023-01-14 03:40:58 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-7hpsq-54bc444df" has successfully progressed.} +Jan 14 03:41:00.689: INFO: Observed deployment test-deployment-7hpsq in namespace deployment-7362 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Jan 14 03:41:00.689: INFO: Observed &Deployment event: MODIFIED +Jan 14 03:41:00.689: INFO: Found deployment test-deployment-7hpsq in namespace deployment-7362 with labels: 
map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } +Jan 14 03:41:00.689: INFO: Deployment test-deployment-7hpsq has a patched status +[AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 +Jan 14 03:41:00.693: INFO: Deployment "test-deployment-7hpsq": +&Deployment{ObjectMeta:{test-deployment-7hpsq deployment-7362 9680531c-db7c-45f6-bccd-df2ae43ef804 416132 1 2023-01-14 03:40:58 +0000 UTC map[e2e:testing name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2023-01-14 03:40:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {e2e.test Update apps/v1 2023-01-14 03:41:00 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"StatusPatched\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update apps/v1 2023-01-14 03:41:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[e2e:testing name:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00458fa98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:StatusPatched,Status:True,Reason:,Message:,LastUpdateTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 
UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:FoundNewReplicaSet,Message:Found new replica set "test-deployment-7hpsq-54bc444df",LastUpdateTime:2023-01-14 03:41:00 +0000 UTC,LastTransitionTime:2023-01-14 03:41:00 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Jan 14 03:41:00.697: INFO: New ReplicaSet "test-deployment-7hpsq-54bc444df" of Deployment "test-deployment-7hpsq": +&ReplicaSet{ObjectMeta:{test-deployment-7hpsq-54bc444df deployment-7362 d63221ae-f660-4e5e-ad48-632f09035cb0 416121 1 2023-01-14 03:40:58 +0000 UTC map[e2e:testing name:httpd pod-template-hash:54bc444df] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment-7hpsq 9680531c-db7c-45f6-bccd-df2ae43ef804 0xc00458fea0 0xc00458fea1}] [] [{kube-controller-manager Update apps/v1 2023-01-14 03:40:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9680531c-db7c-45f6-bccd-df2ae43ef804\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 03:40:59 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,pod-template-hash: 54bc444df,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[e2e:testing name:httpd pod-template-hash:54bc444df] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00458ff48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Jan 14 03:41:00.700: INFO: Pod "test-deployment-7hpsq-54bc444df-grr85" is available: +&Pod{ObjectMeta:{test-deployment-7hpsq-54bc444df-grr85 test-deployment-7hpsq-54bc444df- deployment-7362 769d6be7-e7c8-4ba9-a47a-373bcf3b57d9 416120 0 2023-01-14 03:40:58 +0000 UTC map[e2e:testing name:httpd pod-template-hash:54bc444df] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + 
"interface": "eth0", + "ips": [ + "10.52.1.20" + ], + "mac": "7e:48:cf:5f:d4:2b", + "default": true, + "dns": {} +}]] [{apps/v1 ReplicaSet test-deployment-7hpsq-54bc444df d63221ae-f660-4e5e-ad48-632f09035cb0 0xc00335f770 0xc00335f771}] [] [{kube-controller-manager Update v1 2023-01-14 03:40:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d63221ae-f660-4e5e-ad48-632f09035cb0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 03:40:59 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.52.1.20\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status} {multus Update v1 2023-01-14 03:40:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-952hj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-952hj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil
,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.106,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 03:40:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 03:40:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 03:40:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 03:40:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.106,PodIP:10.52.1.20,StartTime:2023-01-14 03:40:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 03:40:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:containerd://a89f53147f74eaeb8222af7ee736622517a410e5dd79805ccf99f9b9096ca2da,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.52.1.20,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + test/e2e/framework/node/init/init.go:32 +Jan 14 03:41:00.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] Deployment + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] Deployment + tear down framework | framework.go:193 
+STEP: Destroying namespace "deployment-7362" for this suite. 01/14/23 03:41:00.707 +------------------------------ +• [2.102 seconds] +[sig-apps] Deployment +test/e2e/apps/framework.go:23 + should validate Deployment Status endpoints [Conformance] + test/e2e/apps/deployment.go:479 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Deployment + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:40:58.612 + Jan 14 03:40:58.612: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename deployment 01/14/23 03:40:58.613 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:40:58.624 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:40:58.626 + [BeforeEach] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 + [It] should validate Deployment Status endpoints [Conformance] + test/e2e/apps/deployment.go:479 + STEP: creating a Deployment 01/14/23 03:40:58.632 + Jan 14 03:40:58.632: INFO: Creating simple deployment test-deployment-7hpsq + Jan 14 03:40:58.646: INFO: deployment "test-deployment-7hpsq" doesn't have the required revision set + STEP: Getting /status 01/14/23 03:41:00.659 + Jan 14 03:41:00.664: INFO: Deployment test-deployment-7hpsq has Conditions: [{Available True 2023-01-14 03:40:59 +0000 UTC 2023-01-14 03:40:59 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2023-01-14 03:40:59 +0000 UTC 2023-01-14 03:40:58 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-7hpsq-54bc444df" has successfully progressed.}] + STEP: updating Deployment Status 01/14/23 03:41:00.664 + Jan 14 03:41:00.676: INFO: updatedStatus.Conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 3, 40, 59, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 3, 40, 59, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 3, 40, 59, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 3, 40, 58, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"test-deployment-7hpsq-54bc444df\" has successfully progressed."}, v1.DeploymentCondition{Type:"StatusUpdate", Status:"True", LastUpdateTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} + STEP: watching for the Deployment status to be updated 01/14/23 03:41:00.676 + Jan 14 03:41:00.678: INFO: Observed &Deployment event: ADDED + Jan 14 03:41:00.678: INFO: Observed Deployment test-deployment-7hpsq in namespace deployment-7362 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-14 03:40:58 +0000 UTC 2023-01-14 03:40:58 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-7hpsq-54bc444df"} + Jan 14 03:41:00.678: INFO: Observed &Deployment event: MODIFIED + Jan 14 03:41:00.678: INFO: Observed Deployment test-deployment-7hpsq in namespace deployment-7362 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-14 03:40:58 +0000 UTC 2023-01-14 03:40:58 +0000 UTC NewReplicaSetCreated 
Created new replica set "test-deployment-7hpsq-54bc444df"} + Jan 14 03:41:00.678: INFO: Observed Deployment test-deployment-7hpsq in namespace deployment-7362 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-01-14 03:40:58 +0000 UTC 2023-01-14 03:40:58 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} + Jan 14 03:41:00.678: INFO: Observed &Deployment event: MODIFIED + Jan 14 03:41:00.678: INFO: Observed Deployment test-deployment-7hpsq in namespace deployment-7362 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-01-14 03:40:58 +0000 UTC 2023-01-14 03:40:58 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} + Jan 14 03:41:00.678: INFO: Observed Deployment test-deployment-7hpsq in namespace deployment-7362 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-14 03:40:58 +0000 UTC 2023-01-14 03:40:58 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-7hpsq-54bc444df" is progressing.} + Jan 14 03:41:00.678: INFO: Observed &Deployment event: MODIFIED + Jan 14 03:41:00.678: INFO: Observed Deployment test-deployment-7hpsq in namespace deployment-7362 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-01-14 03:40:59 +0000 UTC 2023-01-14 03:40:59 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} + Jan 14 03:41:00.678: INFO: Observed Deployment test-deployment-7hpsq in namespace deployment-7362 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-14 03:40:59 +0000 UTC 2023-01-14 03:40:58 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-7hpsq-54bc444df" has successfully progressed.} + Jan 14 03:41:00.678: INFO: Observed &Deployment event: MODIFIED + Jan 14 03:41:00.678: INFO: Observed Deployment test-deployment-7hpsq in namespace deployment-7362 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-01-14 03:40:59 +0000 UTC 2023-01-14 03:40:59 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} + Jan 14 03:41:00.678: INFO: Observed Deployment test-deployment-7hpsq in namespace deployment-7362 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-14 03:40:59 +0000 UTC 2023-01-14 03:40:58 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-7hpsq-54bc444df" has successfully progressed.} + Jan 14 03:41:00.678: INFO: Found Deployment test-deployment-7hpsq in namespace deployment-7362 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} + Jan 14 03:41:00.678: INFO: Deployment test-deployment-7hpsq has an updated status + STEP: patching the Deployment Status 01/14/23 03:41:00.678 + Jan 14 03:41:00.679: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} + Jan 14 03:41:00.686: INFO: Patched status conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"StatusPatched", Status:"True", LastUpdateTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} + STEP: watching for the Deployment status to be patched 01/14/23 03:41:00.686 + Jan 14 03:41:00.688: INFO: Observed 
&Deployment event: ADDED + Jan 14 03:41:00.688: INFO: Observed deployment test-deployment-7hpsq in namespace deployment-7362 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-14 03:40:58 +0000 UTC 2023-01-14 03:40:58 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-7hpsq-54bc444df"} + Jan 14 03:41:00.688: INFO: Observed &Deployment event: MODIFIED + Jan 14 03:41:00.688: INFO: Observed deployment test-deployment-7hpsq in namespace deployment-7362 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-14 03:40:58 +0000 UTC 2023-01-14 03:40:58 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-7hpsq-54bc444df"} + Jan 14 03:41:00.688: INFO: Observed deployment test-deployment-7hpsq in namespace deployment-7362 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-01-14 03:40:58 +0000 UTC 2023-01-14 03:40:58 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} + Jan 14 03:41:00.688: INFO: Observed &Deployment event: MODIFIED + Jan 14 03:41:00.688: INFO: Observed deployment test-deployment-7hpsq in namespace deployment-7362 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-01-14 03:40:58 +0000 UTC 2023-01-14 03:40:58 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} + Jan 14 03:41:00.688: INFO: Observed deployment test-deployment-7hpsq in namespace deployment-7362 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-14 03:40:58 +0000 UTC 2023-01-14 03:40:58 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-7hpsq-54bc444df" is progressing.} + Jan 14 03:41:00.688: INFO: Observed &Deployment event: MODIFIED + Jan 14 03:41:00.688: INFO: Observed deployment test-deployment-7hpsq in namespace deployment-7362 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-01-14 03:40:59 +0000 UTC 2023-01-14 03:40:59 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} + Jan 14 03:41:00.688: INFO: Observed deployment test-deployment-7hpsq in namespace deployment-7362 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-14 03:40:59 +0000 UTC 2023-01-14 03:40:58 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-7hpsq-54bc444df" has successfully progressed.} + Jan 14 03:41:00.689: INFO: Observed &Deployment event: MODIFIED + Jan 14 03:41:00.689: INFO: Observed deployment test-deployment-7hpsq in namespace deployment-7362 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-01-14 03:40:59 +0000 UTC 2023-01-14 03:40:59 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} + Jan 14 03:41:00.689: INFO: Observed deployment test-deployment-7hpsq in namespace deployment-7362 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-14 03:40:59 +0000 UTC 2023-01-14 03:40:58 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-7hpsq-54bc444df" has successfully progressed.} + Jan 14 03:41:00.689: INFO: Observed deployment test-deployment-7hpsq in namespace deployment-7362 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} + Jan 14 03:41:00.689: 
INFO: Observed &Deployment event: MODIFIED + Jan 14 03:41:00.689: INFO: Found deployment test-deployment-7hpsq in namespace deployment-7362 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } + Jan 14 03:41:00.689: INFO: Deployment test-deployment-7hpsq has a patched status + [AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 + Jan 14 03:41:00.693: INFO: Deployment "test-deployment-7hpsq": + &Deployment{ObjectMeta:{test-deployment-7hpsq deployment-7362 9680531c-db7c-45f6-bccd-df2ae43ef804 416132 1 2023-01-14 03:40:58 +0000 UTC map[e2e:testing name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2023-01-14 03:40:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {e2e.test Update apps/v1 2023-01-14 03:41:00 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"StatusPatched\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update apps/v1 2023-01-14 03:41:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[e2e:testing name:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00458fa98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:StatusPatched,Status:True,Reason:,Message:,LastUpdateTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:FoundNewReplicaSet,Message:Found new replica set "test-deployment-7hpsq-54bc444df",LastUpdateTime:2023-01-14 03:41:00 +0000 UTC,LastTransitionTime:2023-01-14 03:41:00 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + + Jan 14 03:41:00.697: INFO: New ReplicaSet "test-deployment-7hpsq-54bc444df" of Deployment "test-deployment-7hpsq": + &ReplicaSet{ObjectMeta:{test-deployment-7hpsq-54bc444df deployment-7362 d63221ae-f660-4e5e-ad48-632f09035cb0 416121 1 2023-01-14 03:40:58 +0000 UTC map[e2e:testing name:httpd pod-template-hash:54bc444df] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment-7hpsq 9680531c-db7c-45f6-bccd-df2ae43ef804 0xc00458fea0 0xc00458fea1}] [] [{kube-controller-manager Update apps/v1 2023-01-14 03:40:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9680531c-db7c-45f6-bccd-df2ae43ef804\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 03:40:59 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,pod-template-hash: 54bc444df,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[e2e:testing name:httpd pod-template-hash:54bc444df] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00458ff48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} + Jan 14 03:41:00.700: INFO: Pod "test-deployment-7hpsq-54bc444df-grr85" is available: + &Pod{ObjectMeta:{test-deployment-7hpsq-54bc444df-grr85 test-deployment-7hpsq-54bc444df- deployment-7362 769d6be7-e7c8-4ba9-a47a-373bcf3b57d9 416120 0 2023-01-14 03:40:58 +0000 UTC map[e2e:testing name:httpd pod-template-hash:54bc444df] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.1.20" + ], + "mac": "7e:48:cf:5f:d4:2b", + "default": true, + "dns": {} + }]] [{apps/v1 ReplicaSet test-deployment-7hpsq-54bc444df d63221ae-f660-4e5e-ad48-632f09035cb0 0xc00335f770 0xc00335f771}] [] [{kube-controller-manager Update v1 2023-01-14 03:40:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d63221ae-f660-4e5e-ad48-632f09035cb0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 03:40:59 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.52.1.20\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status} {multus Update v1 2023-01-14 03:40:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-952hj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-952hj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.106,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},Res
ourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 03:40:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 03:40:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 03:40:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 03:40:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.106,PodIP:10.52.1.20,StartTime:2023-01-14 03:40:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 03:40:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:containerd://a89f53147f74eaeb8222af7ee736622517a410e5dd79805ccf99f9b9096ca2da,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.52.1.20,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + [AfterEach] [sig-apps] Deployment + test/e2e/framework/node/init/init.go:32 + Jan 14 03:41:00.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] Deployment + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] Deployment + tear down framework | framework.go:193 + STEP: Destroying namespace "deployment-7362" for this suite. 
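+
+Editorial annotation: the Deployment test above exercises the /status subresource three ways, a read, an UpdateStatus write, and a merge patch, each confirmed through a watch. A minimal client-go sketch of the update-then-patch flow, assuming only the names visible in this run; error handling is simplified and the program is illustrative, not part of the suite:
+
+package main
+
+import (
+    "context"
+
+    appsv1 "k8s.io/api/apps/v1"
+    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+    "k8s.io/apimachinery/pkg/types"
+    "k8s.io/client-go/kubernetes"
+    "k8s.io/client-go/tools/clientcmd"
+)
+
+func main() {
+    cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig-1841037317")
+    if err != nil {
+        panic(err)
+    }
+    cs := kubernetes.NewForConfigOrDie(cfg)
+    ctx := context.Background()
+    deployments := cs.AppsV1().Deployments("deployment-7362")
+
+    // Read the object, append a custom condition, and write it back
+    // through the status subresource only (spec is untouched).
+    d, err := deployments.Get(ctx, "test-deployment-7hpsq", metav1.GetOptions{})
+    if err != nil {
+        panic(err)
+    }
+    d.Status.Conditions = append(d.Status.Conditions, appsv1.DeploymentCondition{
+        Type: "StatusUpdate", Status: "True", Reason: "E2E", Message: "Set from e2e test",
+    })
+    if _, err := deployments.UpdateStatus(ctx, d, metav1.UpdateOptions{}); err != nil {
+        panic(err)
+    }
+
+    // The same subresource also accepts a merge patch, matching the
+    // "Patch payload" step in the log above.
+    payload := []byte(`{"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}}`)
+    if _, err := deployments.Patch(ctx, "test-deployment-7hpsq", types.MergePatchType,
+        payload, metav1.PatchOptions{}, "status"); err != nil {
+        panic(err)
+    }
+}
+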
01/14/23 03:41:00.707 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-node] RuntimeClass + should schedule a Pod requesting a RuntimeClass and initialize its Overhead [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:129 +[BeforeEach] [sig-node] RuntimeClass + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:41:00.715 +Jan 14 03:41:00.715: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename runtimeclass 01/14/23 03:41:00.716 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:41:00.729 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:41:00.731 +[BeforeEach] [sig-node] RuntimeClass + test/e2e/framework/metrics/init/init.go:31 +[It] should schedule a Pod requesting a RuntimeClass and initialize its Overhead [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:129 +Jan 14 03:41:00.747: INFO: Waiting up to 1m20s for at least 1 pods in namespace runtimeclass-9506 to be scheduled +Jan 14 03:41:00.750: INFO: 1 pods are not scheduled: [runtimeclass-9506/test-runtimeclass-runtimeclass-9506-preconfigured-handler-c7fdf(10076cc7-f6df-48a5-a974-6d0f3d40e442)] +[AfterEach] [sig-node] RuntimeClass + test/e2e/framework/node/init/init.go:32 +Jan 14 03:41:02.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] RuntimeClass + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] RuntimeClass + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] RuntimeClass + tear down framework | framework.go:193 +STEP: Destroying namespace "runtimeclass-9506" for this suite. 
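+
+Editorial annotation: the RuntimeClass test just logged schedules a pod against a preconfigured handler and checks that the class's Overhead is folded into the pod. A sketch of such an object in client-go terms; the handler name and quantities are illustrative, not read from this run:
+
+package main
+
+import (
+    "context"
+
+    corev1 "k8s.io/api/core/v1"
+    nodev1 "k8s.io/api/node/v1"
+    "k8s.io/apimachinery/pkg/api/resource"
+    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+    "k8s.io/client-go/kubernetes"
+    "k8s.io/client-go/tools/clientcmd"
+)
+
+func main() {
+    cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig-1841037317")
+    if err != nil {
+        panic(err)
+    }
+    cs := kubernetes.NewForConfigOrDie(cfg)
+
+    rc := &nodev1.RuntimeClass{
+        ObjectMeta: metav1.ObjectMeta{Name: "demo-preconfigured-handler"},
+        Handler:    "runc", // must already exist on the node, hence "preconfigured"
+        Overhead: &nodev1.Overhead{
+            PodFixed: corev1.ResourceList{
+                corev1.ResourceCPU:    resource.MustParse("10m"),
+                corev1.ResourceMemory: resource.MustParse("1Mi"),
+            },
+        },
+    }
+    if _, err := cs.NodeV1().RuntimeClasses().Create(
+        context.Background(), rc, metav1.CreateOptions{}); err != nil {
+        panic(err)
+    }
+    // A pod opts in via spec.runtimeClassName; admission then copies PodFixed
+    // into pod.spec.overhead, which scheduling and kubelet accounting include.
+}
+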
01/14/23 03:41:02.766 +------------------------------ +• [2.058 seconds] +[sig-node] RuntimeClass +test/e2e/common/node/framework.go:23 + should schedule a Pod requesting a RuntimeClass and initialize its Overhead [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:129 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] RuntimeClass + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:41:00.715 + Jan 14 03:41:00.715: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename runtimeclass 01/14/23 03:41:00.716 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:41:00.729 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:41:00.731 + [BeforeEach] [sig-node] RuntimeClass + test/e2e/framework/metrics/init/init.go:31 + [It] should schedule a Pod requesting a RuntimeClass and initialize its Overhead [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:129 + Jan 14 03:41:00.747: INFO: Waiting up to 1m20s for at least 1 pods in namespace runtimeclass-9506 to be scheduled + Jan 14 03:41:00.750: INFO: 1 pods are not scheduled: [runtimeclass-9506/test-runtimeclass-runtimeclass-9506-preconfigured-handler-c7fdf(10076cc7-f6df-48a5-a974-6d0f3d40e442)] + [AfterEach] [sig-node] RuntimeClass + test/e2e/framework/node/init/init.go:32 + Jan 14 03:41:02.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] RuntimeClass + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] RuntimeClass + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] RuntimeClass + tear down framework | framework.go:193 + STEP: Destroying namespace "runtimeclass-9506" for this suite. 01/14/23 03:41:02.766 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:105 +[BeforeEach] [sig-network] Networking + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:41:02.774 +Jan 14 03:41:02.774: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename pod-network-test 01/14/23 03:41:02.775 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:41:02.789 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:41:02.791 +[BeforeEach] [sig-network] Networking + test/e2e/framework/metrics/init/init.go:31 +[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:105 +STEP: Performing setup for networking test in namespace pod-network-test-2216 01/14/23 03:41:02.793 +STEP: creating a selector 01/14/23 03:41:02.793 +STEP: Creating the service pods in kubernetes 01/14/23 03:41:02.793 +Jan 14 03:41:02.793: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Jan 14 03:41:02.832: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "pod-network-test-2216" to be "running and ready" +Jan 14 03:41:02.838: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.745045ms +Jan 14 03:41:02.838: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Jan 14 03:41:04.843: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.010479633s +Jan 14 03:41:04.843: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jan 14 03:41:06.844: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.011567904s +Jan 14 03:41:06.844: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jan 14 03:41:08.842: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.00975302s +Jan 14 03:41:08.842: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jan 14 03:41:10.842: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.010110476s +Jan 14 03:41:10.842: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jan 14 03:41:12.843: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.010624762s +Jan 14 03:41:12.843: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jan 14 03:41:14.843: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 12.010673603s +Jan 14 03:41:14.843: INFO: The phase of Pod netserver-0 is Running (Ready = true) +Jan 14 03:41:14.843: INFO: Pod "netserver-0" satisfied condition "running and ready" +Jan 14 03:41:14.846: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "pod-network-test-2216" to be "running and ready" +Jan 14 03:41:14.849: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 2.864309ms +Jan 14 03:41:14.849: INFO: The phase of Pod netserver-1 is Running (Ready = true) +Jan 14 03:41:14.849: INFO: Pod "netserver-1" satisfied condition "running and ready" +Jan 14 03:41:14.851: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "pod-network-test-2216" to be "running and ready" +Jan 14 03:41:14.854: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 2.820067ms +Jan 14 03:41:14.854: INFO: The phase of Pod netserver-2 is Running (Ready = true) +Jan 14 03:41:14.854: INFO: Pod "netserver-2" satisfied condition "running and ready" +STEP: Creating test pods 01/14/23 03:41:14.857 +Jan 14 03:41:14.872: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "pod-network-test-2216" to be "running" +Jan 14 03:41:14.874: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.746595ms +Jan 14 03:41:16.880: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.007925578s +Jan 14 03:41:16.880: INFO: Pod "test-container-pod" satisfied condition "running" +Jan 14 03:41:16.883: INFO: Waiting up to 5m0s for pod "host-test-container-pod" in namespace "pod-network-test-2216" to be "running" +Jan 14 03:41:16.886: INFO: Pod "host-test-container-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.962922ms +Jan 14 03:41:16.886: INFO: Pod "host-test-container-pod" satisfied condition "running" +Jan 14 03:41:16.888: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 +Jan 14 03:41:16.888: INFO: Going to poll 10.52.1.21 on port 8083 at least 0 times, with a maximum of 39 tries before failing +Jan 14 03:41:16.891: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.52.1.21:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2216 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jan 14 03:41:16.891: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +Jan 14 03:41:16.891: INFO: ExecWithOptions: Clientset creation +Jan 14 03:41:16.892: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/pod-network-test-2216/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F10.52.1.21%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) +Jan 14 03:41:16.940: INFO: Found all 1 expected endpoints: [netserver-0] +Jan 14 03:41:16.940: INFO: Going to poll 10.52.0.229 on port 8083 at least 0 times, with a maximum of 39 tries before failing +Jan 14 03:41:16.944: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.52.0.229:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2216 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jan 14 03:41:16.944: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +Jan 14 03:41:16.944: INFO: ExecWithOptions: Clientset creation +Jan 14 03:41:16.944: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/pod-network-test-2216/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F10.52.0.229%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) +Jan 14 03:41:16.995: INFO: Found all 1 expected endpoints: [netserver-1] +Jan 14 03:41:16.995: INFO: Going to poll 10.52.1.112 on port 8083 at least 0 times, with a maximum of 39 tries before failing +Jan 14 03:41:16.999: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.52.1.112:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2216 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jan 14 03:41:16.999: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +Jan 14 03:41:16.999: INFO: ExecWithOptions: Clientset creation +Jan 14 03:41:16.999: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/pod-network-test-2216/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F10.52.1.112%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) +Jan 14 03:41:17.047: INFO: Found all 1 expected endpoints: [netserver-2] +[AfterEach] [sig-network] Networking + test/e2e/framework/node/init/init.go:32 +Jan 14 
03:41:17.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Networking + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] Networking + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] Networking + tear down framework | framework.go:193 +STEP: Destroying namespace "pod-network-test-2216" for this suite. 01/14/23 03:41:17.052 +------------------------------ +• [SLOW TEST] [14.285 seconds] +[sig-network] Networking +test/e2e/common/network/framework.go:23 + Granular Checks: Pods + test/e2e/common/network/networking.go:32 + should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:105 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Networking + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:41:02.774 + Jan 14 03:41:02.774: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename pod-network-test 01/14/23 03:41:02.775 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:41:02.789 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:41:02.791 + [BeforeEach] [sig-network] Networking + test/e2e/framework/metrics/init/init.go:31 + [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:105 + STEP: Performing setup for networking test in namespace pod-network-test-2216 01/14/23 03:41:02.793 + STEP: creating a selector 01/14/23 03:41:02.793 + STEP: Creating the service pods in kubernetes 01/14/23 03:41:02.793 + Jan 14 03:41:02.793: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable + Jan 14 03:41:02.832: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "pod-network-test-2216" to be "running and ready" + Jan 14 03:41:02.838: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 5.745045ms + Jan 14 03:41:02.838: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) + Jan 14 03:41:04.843: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.010479633s + Jan 14 03:41:04.843: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jan 14 03:41:06.844: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.011567904s + Jan 14 03:41:06.844: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jan 14 03:41:08.842: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.00975302s + Jan 14 03:41:08.842: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jan 14 03:41:10.842: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.010110476s + Jan 14 03:41:10.842: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jan 14 03:41:12.843: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.010624762s + Jan 14 03:41:12.843: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jan 14 03:41:14.843: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.010673603s + Jan 14 03:41:14.843: INFO: The phase of Pod netserver-0 is Running (Ready = true) + Jan 14 03:41:14.843: INFO: Pod "netserver-0" satisfied condition "running and ready" + Jan 14 03:41:14.846: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "pod-network-test-2216" to be "running and ready" + Jan 14 03:41:14.849: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 2.864309ms + Jan 14 03:41:14.849: INFO: The phase of Pod netserver-1 is Running (Ready = true) + Jan 14 03:41:14.849: INFO: Pod "netserver-1" satisfied condition "running and ready" + Jan 14 03:41:14.851: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "pod-network-test-2216" to be "running and ready" + Jan 14 03:41:14.854: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 2.820067ms + Jan 14 03:41:14.854: INFO: The phase of Pod netserver-2 is Running (Ready = true) + Jan 14 03:41:14.854: INFO: Pod "netserver-2" satisfied condition "running and ready" + STEP: Creating test pods 01/14/23 03:41:14.857 + Jan 14 03:41:14.872: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "pod-network-test-2216" to be "running" + Jan 14 03:41:14.874: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.746595ms + Jan 14 03:41:16.880: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.007925578s + Jan 14 03:41:16.880: INFO: Pod "test-container-pod" satisfied condition "running" + Jan 14 03:41:16.883: INFO: Waiting up to 5m0s for pod "host-test-container-pod" in namespace "pod-network-test-2216" to be "running" + Jan 14 03:41:16.886: INFO: Pod "host-test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.962922ms + Jan 14 03:41:16.886: INFO: Pod "host-test-container-pod" satisfied condition "running" + Jan 14 03:41:16.888: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 + Jan 14 03:41:16.888: INFO: Going to poll 10.52.1.21 on port 8083 at least 0 times, with a maximum of 39 tries before failing + Jan 14 03:41:16.891: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.52.1.21:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2216 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jan 14 03:41:16.891: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + Jan 14 03:41:16.891: INFO: ExecWithOptions: Clientset creation + Jan 14 03:41:16.892: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/pod-network-test-2216/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F10.52.1.21%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) + Jan 14 03:41:16.940: INFO: Found all 1 expected endpoints: [netserver-0] + Jan 14 03:41:16.940: INFO: Going to poll 10.52.0.229 on port 8083 at least 0 times, with a maximum of 39 tries before failing + Jan 14 03:41:16.944: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.52.0.229:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2216 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jan 
14 03:41:16.944: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + Jan 14 03:41:16.944: INFO: ExecWithOptions: Clientset creation + Jan 14 03:41:16.944: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/pod-network-test-2216/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F10.52.0.229%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) + Jan 14 03:41:16.995: INFO: Found all 1 expected endpoints: [netserver-1] + Jan 14 03:41:16.995: INFO: Going to poll 10.52.1.112 on port 8083 at least 0 times, with a maximum of 39 tries before failing + Jan 14 03:41:16.999: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.52.1.112:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2216 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jan 14 03:41:16.999: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + Jan 14 03:41:16.999: INFO: ExecWithOptions: Clientset creation + Jan 14 03:41:16.999: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/pod-network-test-2216/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+--max-time+15+--connect-timeout+1+http%3A%2F%2F10.52.1.112%3A8083%2FhostName+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) + Jan 14 03:41:17.047: INFO: Found all 1 expected endpoints: [netserver-2] + [AfterEach] [sig-network] Networking + test/e2e/framework/node/init/init.go:32 + Jan 14 03:41:17.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Networking + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] Networking + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] Networking + tear down framework | framework.go:193 + STEP: Destroying namespace "pod-network-test-2216" for this suite. 
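+
+Editorial annotation: the connectivity probe above is an exec into the host-network pod that curls each netserver's /hostName endpoint on port 8083. A sketch of the same exec call through the API server using client-go's remotecommand package (target names copied from the run, wiring simplified; StreamWithContext needs client-go v0.26+):
+
+package main
+
+import (
+    "bytes"
+    "context"
+    "fmt"
+
+    corev1 "k8s.io/api/core/v1"
+    "k8s.io/client-go/kubernetes"
+    "k8s.io/client-go/kubernetes/scheme"
+    "k8s.io/client-go/tools/clientcmd"
+    "k8s.io/client-go/tools/remotecommand"
+)
+
+func main() {
+    cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig-1841037317")
+    if err != nil {
+        panic(err)
+    }
+    cs := kubernetes.NewForConfigOrDie(cfg)
+
+    // Build the POST .../pods/host-test-container-pod/exec request seen in the log.
+    req := cs.CoreV1().RESTClient().Post().
+        Resource("pods").Namespace("pod-network-test-2216").
+        Name("host-test-container-pod").SubResource("exec").
+        VersionedParams(&corev1.PodExecOptions{
+            Container: "agnhost-container",
+            Command: []string{"/bin/sh", "-c",
+                "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.52.1.21:8083/hostName"},
+            Stdout: true,
+            Stderr: true,
+        }, scheme.ParameterCodec)
+
+    exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
+    if err != nil {
+        panic(err)
+    }
+    var stdout, stderr bytes.Buffer
+    if err := exec.StreamWithContext(context.Background(),
+        remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
+        panic(err)
+    }
+    fmt.Println(stdout.String()) // expected: the netserver pod's hostname
+}
+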
01/14/23 03:41:17.052 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:68 +[BeforeEach] [sig-storage] Secrets + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:41:17.059 +Jan 14 03:41:17.059: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename secrets 01/14/23 03:41:17.06 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:41:17.075 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:41:17.077 +[BeforeEach] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:68 +STEP: Creating secret with name secret-test-c760d42b-f135-42eb-a033-f2dba1ba911a 01/14/23 03:41:17.079 +STEP: Creating a pod to test consume secrets 01/14/23 03:41:17.083 +Jan 14 03:41:17.093: INFO: Waiting up to 5m0s for pod "pod-secrets-ccc02b7f-629e-4537-9e05-f63409e5f5b0" in namespace "secrets-7333" to be "Succeeded or Failed" +Jan 14 03:41:17.095: INFO: Pod "pod-secrets-ccc02b7f-629e-4537-9e05-f63409e5f5b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.693316ms +Jan 14 03:41:19.099: INFO: Pod "pod-secrets-ccc02b7f-629e-4537-9e05-f63409e5f5b0": Phase="Running", Reason="", readiness=false. Elapsed: 2.006586569s +Jan 14 03:41:21.101: INFO: Pod "pod-secrets-ccc02b7f-629e-4537-9e05-f63409e5f5b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008100017s +STEP: Saw pod success 01/14/23 03:41:21.101 +Jan 14 03:41:21.101: INFO: Pod "pod-secrets-ccc02b7f-629e-4537-9e05-f63409e5f5b0" satisfied condition "Succeeded or Failed" +Jan 14 03:41:21.104: INFO: Trying to get logs from node 10.0.1.106 pod pod-secrets-ccc02b7f-629e-4537-9e05-f63409e5f5b0 container secret-volume-test: +STEP: delete the pod 01/14/23 03:41:21.111 +Jan 14 03:41:21.124: INFO: Waiting for pod pod-secrets-ccc02b7f-629e-4537-9e05-f63409e5f5b0 to disappear +Jan 14 03:41:21.127: INFO: Pod pod-secrets-ccc02b7f-629e-4537-9e05-f63409e5f5b0 no longer exists +[AfterEach] [sig-storage] Secrets + test/e2e/framework/node/init/init.go:32 +Jan 14 03:41:21.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Secrets + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Secrets + tear down framework | framework.go:193 +STEP: Destroying namespace "secrets-7333" for this suite. 
01/14/23 03:41:21.132 +------------------------------ +• [4.079 seconds] +[sig-storage] Secrets +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:68 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Secrets + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:41:17.059 + Jan 14 03:41:17.059: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename secrets 01/14/23 03:41:17.06 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:41:17.075 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:41:17.077 + [BeforeEach] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:68 + STEP: Creating secret with name secret-test-c760d42b-f135-42eb-a033-f2dba1ba911a 01/14/23 03:41:17.079 + STEP: Creating a pod to test consume secrets 01/14/23 03:41:17.083 + Jan 14 03:41:17.093: INFO: Waiting up to 5m0s for pod "pod-secrets-ccc02b7f-629e-4537-9e05-f63409e5f5b0" in namespace "secrets-7333" to be "Succeeded or Failed" + Jan 14 03:41:17.095: INFO: Pod "pod-secrets-ccc02b7f-629e-4537-9e05-f63409e5f5b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.693316ms + Jan 14 03:41:19.099: INFO: Pod "pod-secrets-ccc02b7f-629e-4537-9e05-f63409e5f5b0": Phase="Running", Reason="", readiness=false. Elapsed: 2.006586569s + Jan 14 03:41:21.101: INFO: Pod "pod-secrets-ccc02b7f-629e-4537-9e05-f63409e5f5b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008100017s + STEP: Saw pod success 01/14/23 03:41:21.101 + Jan 14 03:41:21.101: INFO: Pod "pod-secrets-ccc02b7f-629e-4537-9e05-f63409e5f5b0" satisfied condition "Succeeded or Failed" + Jan 14 03:41:21.104: INFO: Trying to get logs from node 10.0.1.106 pod pod-secrets-ccc02b7f-629e-4537-9e05-f63409e5f5b0 container secret-volume-test: + STEP: delete the pod 01/14/23 03:41:21.111 + Jan 14 03:41:21.124: INFO: Waiting for pod pod-secrets-ccc02b7f-629e-4537-9e05-f63409e5f5b0 to disappear + Jan 14 03:41:21.127: INFO: Pod pod-secrets-ccc02b7f-629e-4537-9e05-f63409e5f5b0 no longer exists + [AfterEach] [sig-storage] Secrets + test/e2e/framework/node/init/init.go:32 + Jan 14 03:41:21.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Secrets + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Secrets + tear down framework | framework.go:193 + STEP: Destroying namespace "secrets-7333" for this suite. 
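+
+Editorial annotation: the secrets test above consumes a secret volume as a non-root user with an explicit defaultMode and fsGroup; the log shows the secret and pod names but not the spec. A plausible shape of that pod, with the mode, UID, GID, mount path, and command chosen for illustration:
+
+package main
+
+import (
+    "context"
+
+    corev1 "k8s.io/api/core/v1"
+    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+    "k8s.io/client-go/kubernetes"
+    "k8s.io/client-go/tools/clientcmd"
+)
+
+func main() {
+    cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig-1841037317")
+    if err != nil {
+        panic(err)
+    }
+    cs := kubernetes.NewForConfigOrDie(cfg)
+
+    mode := int32(0440)    // defaultMode applied to every projected key (illustrative)
+    uid := int64(1000)     // run as non-root
+    fsGroup := int64(1001) // mounted files become group-owned by this GID
+    pod := &corev1.Pod{
+        ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
+        Spec: corev1.PodSpec{
+            RestartPolicy: corev1.RestartPolicyNever,
+            SecurityContext: &corev1.PodSecurityContext{
+                RunAsUser: &uid,
+                FSGroup:   &fsGroup,
+            },
+            Volumes: []corev1.Volume{{
+                Name: "secret-volume",
+                VolumeSource: corev1.VolumeSource{
+                    Secret: &corev1.SecretVolumeSource{
+                        SecretName:  "secret-test-c760d42b-f135-42eb-a033-f2dba1ba911a",
+                        DefaultMode: &mode,
+                    },
+                },
+            }},
+            Containers: []corev1.Container{{
+                Name:    "secret-volume-test",
+                Image:   "registry.k8s.io/e2e-test-images/httpd:2.4.38-4", // any image with a shell works
+                Command: []string{"sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/*"},
+                VolumeMounts: []corev1.VolumeMount{{
+                    Name: "secret-volume", MountPath: "/etc/secret-volume",
+                }},
+            }},
+        },
+    }
+    if _, err := cs.CoreV1().Pods("secrets-7333").Create(
+        context.Background(), pod, metav1.CreateOptions{}); err != nil {
+        panic(err)
+    }
+}
+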
01/14/23 03:41:21.132 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should update a ServiceAccount [Conformance] + test/e2e/auth/service_accounts.go:810 +[BeforeEach] [sig-auth] ServiceAccounts + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:41:21.139 +Jan 14 03:41:21.140: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename svcaccounts 01/14/23 03:41:21.14 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:41:21.154 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:41:21.156 +[BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:31 +[It] should update a ServiceAccount [Conformance] + test/e2e/auth/service_accounts.go:810 +STEP: Creating ServiceAccount "e2e-sa-9nxbh" 01/14/23 03:41:21.158 +Jan 14 03:41:21.162: INFO: AutomountServiceAccountToken: false +STEP: Updating ServiceAccount "e2e-sa-9nxbh" 01/14/23 03:41:21.162 +Jan 14 03:41:21.171: INFO: AutomountServiceAccountToken: true +[AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/node/init/init.go:32 +Jan 14 03:41:21.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-auth] ServiceAccounts + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-auth] ServiceAccounts + tear down framework | framework.go:193 +STEP: Destroying namespace "svcaccounts-1008" for this suite. 01/14/23 03:41:21.176 +------------------------------ +• [0.042 seconds] +[sig-auth] ServiceAccounts +test/e2e/auth/framework.go:23 + should update a ServiceAccount [Conformance] + test/e2e/auth/service_accounts.go:810 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-auth] ServiceAccounts + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:41:21.139 + Jan 14 03:41:21.140: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename svcaccounts 01/14/23 03:41:21.14 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:41:21.154 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:41:21.156 + [BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:31 + [It] should update a ServiceAccount [Conformance] + test/e2e/auth/service_accounts.go:810 + STEP: Creating ServiceAccount "e2e-sa-9nxbh" 01/14/23 03:41:21.158 + Jan 14 03:41:21.162: INFO: AutomountServiceAccountToken: false + STEP: Updating ServiceAccount "e2e-sa-9nxbh" 01/14/23 03:41:21.162 + Jan 14 03:41:21.171: INFO: AutomountServiceAccountToken: true + [AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/node/init/init.go:32 + Jan 14 03:41:21.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-auth] ServiceAccounts + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-auth] ServiceAccounts + tear down framework | framework.go:193 + STEP: Destroying namespace "svcaccounts-1008" for this suite. 
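+
+Editorial annotation: the ServiceAccount test above flips automountServiceAccountToken from false to true, which is a plain read-modify-write against the object. A minimal sketch with the names from this run; kubeconfig plumbing is simplified:
+
+package main
+
+import (
+    "context"
+    "fmt"
+
+    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+    "k8s.io/client-go/kubernetes"
+    "k8s.io/client-go/tools/clientcmd"
+)
+
+func main() {
+    cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig-1841037317")
+    if err != nil {
+        panic(err)
+    }
+    cs := kubernetes.NewForConfigOrDie(cfg)
+    ctx := context.Background()
+    sas := cs.CoreV1().ServiceAccounts("svcaccounts-1008")
+
+    sa, err := sas.Get(ctx, "e2e-sa-9nxbh", metav1.GetOptions{})
+    if err != nil {
+        panic(err)
+    }
+    automount := true
+    sa.AutomountServiceAccountToken = &automount // was false at creation, per the log
+    updated, err := sas.Update(ctx, sa, metav1.UpdateOptions{})
+    if err != nil {
+        panic(err)
+    }
+    fmt.Println("AutomountServiceAccountToken:", *updated.AutomountServiceAccountToken)
+}
+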
01/14/23 03:41:21.176 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny custom resource creation, update and deletion [Conformance] + test/e2e/apimachinery/webhook.go:221 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:41:21.182 +Jan 14 03:41:21.182: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename webhook 01/14/23 03:41:21.183 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:41:21.196 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:41:21.198 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 01/14/23 03:41:21.213 +STEP: Create role binding to let webhook read extension-apiserver-authentication 01/14/23 03:41:21.598 +STEP: Deploying the webhook pod 01/14/23 03:41:21.608 +STEP: Wait for the deployment to be ready 01/14/23 03:41:21.623 +Jan 14 03:41:21.631: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 01/14/23 03:41:23.642 +STEP: Verifying the service has paired with the endpoint 01/14/23 03:41:23.651 +Jan 14 03:41:24.651: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny custom resource creation, update and deletion [Conformance] + test/e2e/apimachinery/webhook.go:221 +Jan 14 03:41:24.655: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Registering the custom resource webhook via the AdmissionRegistration API 01/14/23 03:41:25.165 +STEP: Creating a custom resource that should be denied by the webhook 01/14/23 03:41:25.18 +STEP: Creating a custom resource whose deletion would be denied by the webhook 01/14/23 03:41:27.21 +STEP: Updating the custom resource with disallowed data should be denied 01/14/23 03:41:27.216 +STEP: Deleting the custom resource should be denied 01/14/23 03:41:27.224 +STEP: Remove the offending key and value from the custom resource data 01/14/23 03:41:27.231 +STEP: Deleting the updated custom resource should be successful 01/14/23 03:41:27.24 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Jan 14 03:41:27.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "webhook-970" for this suite. 01/14/23 03:41:27.812 +STEP: Destroying namespace "webhook-970-markers" for this suite. 
01/14/23 03:41:27.821 +------------------------------ +• [SLOW TEST] [6.647 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should be able to deny custom resource creation, update and deletion [Conformance] + test/e2e/apimachinery/webhook.go:221 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:41:21.182 + Jan 14 03:41:21.182: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename webhook 01/14/23 03:41:21.183 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:41:21.196 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:41:21.198 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 01/14/23 03:41:21.213 + STEP: Create role binding to let webhook read extension-apiserver-authentication 01/14/23 03:41:21.598 + STEP: Deploying the webhook pod 01/14/23 03:41:21.608 + STEP: Wait for the deployment to be ready 01/14/23 03:41:21.623 + Jan 14 03:41:21.631: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 01/14/23 03:41:23.642 + STEP: Verifying the service has paired with the endpoint 01/14/23 03:41:23.651 + Jan 14 03:41:24.651: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should be able to deny custom resource creation, update and deletion [Conformance] + test/e2e/apimachinery/webhook.go:221 + Jan 14 03:41:24.655: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Registering the custom resource webhook via the AdmissionRegistration API 01/14/23 03:41:25.165 + STEP: Creating a custom resource that should be denied by the webhook 01/14/23 03:41:25.18 + STEP: Creating a custom resource whose deletion would be denied by the webhook 01/14/23 03:41:27.21 + STEP: Updating the custom resource with disallowed data should be denied 01/14/23 03:41:27.216 + STEP: Deleting the custom resource should be denied 01/14/23 03:41:27.224 + STEP: Remove the offending key and value from the custom resource data 01/14/23 03:41:27.231 + STEP: Deleting the updated custom resource should be successful 01/14/23 03:41:27.24 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Jan 14 03:41:27.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "webhook-970" for this suite. 01/14/23 03:41:27.812 + STEP: Destroying namespace "webhook-970-markers" for this suite. 
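+
+Editorial annotation: the webhook test registers a validating webhook against a custom resource and then shows that create, update, and delete are all denied until the offending data is removed. The registration object looks roughly like this; the CRD group, resource name, service path, and CA bundle are placeholders, since the suite's actual values do not appear in the log:
+
+package main
+
+import (
+    "context"
+
+    admv1 "k8s.io/api/admissionregistration/v1"
+    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+    "k8s.io/client-go/kubernetes"
+    "k8s.io/client-go/tools/clientcmd"
+)
+
+func main() {
+    cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig-1841037317")
+    if err != nil {
+        panic(err)
+    }
+    cs := kubernetes.NewForConfigOrDie(cfg)
+
+    path := "/custom-resource"         // placeholder path served by the webhook pod
+    fail := admv1.Fail                 // reject on webhook failure, as the test expects
+    none := admv1.SideEffectClassNone
+    caBundle := []byte("<PEM CA>")     // placeholder; must be the CA of the server cert set up above
+
+    hook := &admv1.ValidatingWebhookConfiguration{
+        ObjectMeta: metav1.ObjectMeta{Name: "deny-custom-resource"},
+        Webhooks: []admv1.ValidatingWebhook{{
+            Name: "deny-custom-resource.example.com",
+            Rules: []admv1.RuleWithOperations{{
+                Operations: []admv1.OperationType{admv1.Create, admv1.Update, admv1.Delete},
+                Rule: admv1.Rule{
+                    APIGroups:   []string{"example.com"}, // placeholder CRD group
+                    APIVersions: []string{"v1"},
+                    Resources:   []string{"testcrds"},
+                },
+            }},
+            ClientConfig: admv1.WebhookClientConfig{
+                Service: &admv1.ServiceReference{
+                    Namespace: "webhook-970", Name: "e2e-test-webhook", Path: &path,
+                },
+                CABundle: caBundle,
+            },
+            FailurePolicy:           &fail,
+            SideEffects:             &none,
+            AdmissionReviewVersions: []string{"v1"},
+        }},
+    }
+    if _, err := cs.AdmissionregistrationV1().ValidatingWebhookConfigurations().Create(
+        context.Background(), hook, metav1.CreateOptions{}); err != nil {
+        panic(err)
+    }
+}
+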
01/14/23 03:41:27.821 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl replace + should update a single-container pod's image [Conformance] + test/e2e/kubectl/kubectl.go:1747 +[BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:41:27.83 +Jan 14 03:41:27.830: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename kubectl 01/14/23 03:41:27.831 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:41:27.848 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:41:27.85 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[BeforeEach] Kubectl replace + test/e2e/kubectl/kubectl.go:1734 +[It] should update a single-container pod's image [Conformance] + test/e2e/kubectl/kubectl.go:1747 +STEP: running the image registry.k8s.io/e2e-test-images/httpd:2.4.38-4 01/14/23 03:41:27.853 +Jan 14 03:41:27.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-349 run e2e-test-httpd-pod --image=registry.k8s.io/e2e-test-images/httpd:2.4.38-4 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' +Jan 14 03:41:27.922: INFO: stderr: "" +Jan 14 03:41:27.922: INFO: stdout: "pod/e2e-test-httpd-pod created\n" +STEP: verifying the pod e2e-test-httpd-pod is running 01/14/23 03:41:27.922 +STEP: verifying the pod e2e-test-httpd-pod was created 01/14/23 03:41:32.974 +Jan 14 03:41:32.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-349 get pod e2e-test-httpd-pod -o json' +Jan 14 03:41:33.035: INFO: stderr: "" +Jan 14 03:41:33.035: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"annotations\": {\n \"tke.cloud.tencent.com/networks-status\": \"[{\\n \\\"name\\\": \\\"tke-bridge\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.52.1.25\\\"\\n ],\\n \\\"mac\\\": \\\"c6:0f:23:ca:f0:20\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\"\n },\n \"creationTimestamp\": \"2023-01-14T03:41:27Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-349\",\n \"resourceVersion\": \"416538\",\n \"uid\": \"ff9df621-ef6f-437c-8654-368e2d424395\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"registry.k8s.io/e2e-test-images/httpd:2.4.38-4\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-9zszv\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"10.0.1.106\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n 
\"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"kube-api-access-9zszv\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-01-14T03:41:27Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-01-14T03:41:28Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-01-14T03:41:28Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-01-14T03:41:27Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://519c7c5bff1c1818aa220578e036628953a6dfffffef74353a8c026425d98a78\",\n \"image\": \"registry.k8s.io/e2e-test-images/httpd:2.4.38-4\",\n \"imageID\": \"registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2023-01-14T03:41:28Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.0.1.106\",\n \"phase\": \"Running\",\n \"podIP\": \"10.52.1.25\",\n \"podIPs\": [\n {\n \"ip\": \"10.52.1.25\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2023-01-14T03:41:27Z\"\n }\n}\n" +STEP: replace the image in the pod 01/14/23 03:41:33.035 +Jan 14 03:41:33.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-349 replace -f -' +Jan 14 03:41:33.747: INFO: stderr: "" +Jan 14 03:41:33.747: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" +STEP: verifying the pod e2e-test-httpd-pod has the right image registry.k8s.io/e2e-test-images/busybox:1.29-4 01/14/23 03:41:33.747 +[AfterEach] Kubectl replace + test/e2e/kubectl/kubectl.go:1738 +Jan 14 03:41:33.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-349 delete pods e2e-test-httpd-pod' +Jan 14 03:41:35.701: INFO: stderr: "" +Jan 14 03:41:35.701: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 +Jan 14 03:41:35.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 +STEP: Destroying namespace "kubectl-349" for this suite. 
01/14/23 03:41:35.707 +------------------------------ +• [SLOW TEST] [7.882 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl replace + test/e2e/kubectl/kubectl.go:1731 + should update a single-container pod's image [Conformance] + test/e2e/kubectl/kubectl.go:1747 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:41:27.83 + Jan 14 03:41:27.830: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename kubectl 01/14/23 03:41:27.831 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:41:27.848 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:41:27.85 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [BeforeEach] Kubectl replace + test/e2e/kubectl/kubectl.go:1734 + [It] should update a single-container pod's image [Conformance] + test/e2e/kubectl/kubectl.go:1747 + STEP: running the image registry.k8s.io/e2e-test-images/httpd:2.4.38-4 01/14/23 03:41:27.853 + Jan 14 03:41:27.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-349 run e2e-test-httpd-pod --image=registry.k8s.io/e2e-test-images/httpd:2.4.38-4 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' + Jan 14 03:41:27.922: INFO: stderr: "" + Jan 14 03:41:27.922: INFO: stdout: "pod/e2e-test-httpd-pod created\n" + STEP: verifying the pod e2e-test-httpd-pod is running 01/14/23 03:41:27.922 + STEP: verifying the pod e2e-test-httpd-pod was created 01/14/23 03:41:32.974 + Jan 14 03:41:32.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-349 get pod e2e-test-httpd-pod -o json' + Jan 14 03:41:33.035: INFO: stderr: "" + Jan 14 03:41:33.035: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"annotations\": {\n \"tke.cloud.tencent.com/networks-status\": \"[{\\n \\\"name\\\": \\\"tke-bridge\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.52.1.25\\\"\\n ],\\n \\\"mac\\\": \\\"c6:0f:23:ca:f0:20\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\"\n },\n \"creationTimestamp\": \"2023-01-14T03:41:27Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-349\",\n \"resourceVersion\": \"416538\",\n \"uid\": \"ff9df621-ef6f-437c-8654-368e2d424395\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"registry.k8s.io/e2e-test-images/httpd:2.4.38-4\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-9zszv\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"10.0.1.106\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": 
\"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"kube-api-access-9zszv\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-01-14T03:41:27Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-01-14T03:41:28Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-01-14T03:41:28Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-01-14T03:41:27Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://519c7c5bff1c1818aa220578e036628953a6dfffffef74353a8c026425d98a78\",\n \"image\": \"registry.k8s.io/e2e-test-images/httpd:2.4.38-4\",\n \"imageID\": \"registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2023-01-14T03:41:28Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.0.1.106\",\n \"phase\": \"Running\",\n \"podIP\": \"10.52.1.25\",\n \"podIPs\": [\n {\n \"ip\": \"10.52.1.25\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2023-01-14T03:41:27Z\"\n }\n}\n" + STEP: replace the image in the pod 01/14/23 03:41:33.035 + Jan 14 03:41:33.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-349 replace -f -' + Jan 14 03:41:33.747: INFO: stderr: "" + Jan 14 03:41:33.747: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" + STEP: verifying the pod e2e-test-httpd-pod has the right image registry.k8s.io/e2e-test-images/busybox:1.29-4 01/14/23 03:41:33.747 + [AfterEach] Kubectl replace + test/e2e/kubectl/kubectl.go:1738 + Jan 14 03:41:33.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-349 delete pods e2e-test-httpd-pod' + Jan 14 03:41:35.701: INFO: stderr: "" + Jan 14 03:41:35.701: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 + Jan 14 03:41:35.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 + STEP: Destroying namespace "kubectl-349" for this suite. 
01/14/23 03:41:35.707 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] + validates lower priority pod preemption by critical pod [Conformance] + test/e2e/scheduling/preemption.go:222 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:41:35.713 +Jan 14 03:41:35.713: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename sched-preemption 01/14/23 03:41:35.713 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:41:35.732 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:41:35.734 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:96 +Jan 14 03:41:35.750: INFO: Waiting up to 1m0s for all nodes to be ready +Jan 14 03:42:35.790: INFO: Waiting for terminating namespaces to be deleted... +[It] validates lower priority pod preemption by critical pod [Conformance] + test/e2e/scheduling/preemption.go:222 +STEP: Create pods that use 4/5 of node resources. 01/14/23 03:42:35.793 +Jan 14 03:42:35.812: INFO: Created pod: pod0-0-sched-preemption-low-priority +Jan 14 03:42:35.820: INFO: Created pod: pod0-1-sched-preemption-medium-priority +Jan 14 03:42:35.837: INFO: Created pod: pod1-0-sched-preemption-medium-priority +Jan 14 03:42:35.845: INFO: Created pod: pod1-1-sched-preemption-medium-priority +Jan 14 03:42:35.858: INFO: Created pod: pod2-0-sched-preemption-medium-priority +Jan 14 03:42:35.864: INFO: Created pod: pod2-1-sched-preemption-medium-priority +STEP: Wait for pods to be scheduled. 01/14/23 03:42:35.864 +Jan 14 03:42:35.864: INFO: Waiting up to 5m0s for pod "pod0-0-sched-preemption-low-priority" in namespace "sched-preemption-6887" to be "running" +Jan 14 03:42:35.870: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 5.902539ms +Jan 14 03:42:37.874: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.009704977s +Jan 14 03:42:37.874: INFO: Pod "pod0-0-sched-preemption-low-priority" satisfied condition "running" +Jan 14 03:42:37.874: INFO: Waiting up to 5m0s for pod "pod0-1-sched-preemption-medium-priority" in namespace "sched-preemption-6887" to be "running" +Jan 14 03:42:37.877: INFO: Pod "pod0-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.754521ms +Jan 14 03:42:37.877: INFO: Pod "pod0-1-sched-preemption-medium-priority" satisfied condition "running" +Jan 14 03:42:37.877: INFO: Waiting up to 5m0s for pod "pod1-0-sched-preemption-medium-priority" in namespace "sched-preemption-6887" to be "running" +Jan 14 03:42:37.879: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.574334ms +Jan 14 03:42:37.879: INFO: Pod "pod1-0-sched-preemption-medium-priority" satisfied condition "running" +Jan 14 03:42:37.879: INFO: Waiting up to 5m0s for pod "pod1-1-sched-preemption-medium-priority" in namespace "sched-preemption-6887" to be "running" +Jan 14 03:42:37.882: INFO: Pod "pod1-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.534385ms +Jan 14 03:42:37.882: INFO: Pod "pod1-1-sched-preemption-medium-priority" satisfied condition "running" +Jan 14 03:42:37.882: INFO: Waiting up to 5m0s for pod "pod2-0-sched-preemption-medium-priority" in namespace "sched-preemption-6887" to be "running" +Jan 14 03:42:37.884: INFO: Pod "pod2-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.418698ms +Jan 14 03:42:37.884: INFO: Pod "pod2-0-sched-preemption-medium-priority" satisfied condition "running" +Jan 14 03:42:37.884: INFO: Waiting up to 5m0s for pod "pod2-1-sched-preemption-medium-priority" in namespace "sched-preemption-6887" to be "running" +Jan 14 03:42:37.887: INFO: Pod "pod2-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.33395ms +Jan 14 03:42:37.887: INFO: Pod "pod2-1-sched-preemption-medium-priority" satisfied condition "running" +STEP: Run a critical pod that use same resources as that of a lower priority pod 01/14/23 03:42:37.887 +Jan 14 03:42:37.898: INFO: Waiting up to 2m0s for pod "critical-pod" in namespace "kube-system" to be "running" +Jan 14 03:42:37.903: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.261426ms +Jan 14 03:42:39.907: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008348363s +Jan 14 03:42:41.908: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009177056s +Jan 14 03:42:43.908: INFO: Pod "critical-pod": Phase="Running", Reason="", readiness=true. Elapsed: 6.00933582s +Jan 14 03:42:43.908: INFO: Pod "critical-pod" satisfied condition "running" +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/node/init/init.go:32 +Jan 14 03:42:43.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:84 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "sched-preemption-6887" for this suite. 01/14/23 03:42:43.991 +------------------------------ +• [SLOW TEST] [68.291 seconds] +[sig-scheduling] SchedulerPreemption [Serial] +test/e2e/scheduling/framework.go:40 + validates lower priority pod preemption by critical pod [Conformance] + test/e2e/scheduling/preemption.go:222 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:41:35.713 + Jan 14 03:41:35.713: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename sched-preemption 01/14/23 03:41:35.713 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:41:35.732 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:41:35.734 + [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:96 + Jan 14 03:41:35.750: INFO: Waiting up to 1m0s for all nodes to be ready + Jan 14 03:42:35.790: INFO: Waiting for terminating namespaces to be deleted... 
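+
+For context, the preemption validated here hinges on pod priority. A minimal sketch of the
+two moving parts, assuming illustrative names, an arbitrary priority value, and one of the
+built-in critical classes (the suite creates its own low/medium classes):
+
+package main
+
+import (
+	"context"
+
+	corev1 "k8s.io/api/core/v1"
+	schedulingv1 "k8s.io/api/scheduling/v1"
+	"k8s.io/apimachinery/pkg/api/resource"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"k8s.io/client-go/kubernetes"
+	"k8s.io/client-go/tools/clientcmd"
+)
+
+func main() {
+	cfg, _ := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig-1841037317")
+	client := kubernetes.NewForConfigOrDie(cfg)
+	ctx := context.Background()
+
+	// A low-priority class for the filler pods.
+	low := &schedulingv1.PriorityClass{
+		ObjectMeta: metav1.ObjectMeta{Name: "sched-preemption-low-priority"},
+		Value:      1,
+	}
+	_, _ = client.SchedulingV1().PriorityClasses().Create(ctx, low, metav1.CreateOptions{})
+
+	// A critical pod in kube-system that requests resources already taken by a
+	// low-priority pod; the scheduler preempts the latter to place it.
+	critical := &corev1.Pod{
+		ObjectMeta: metav1.ObjectMeta{Name: "critical-pod", Namespace: "kube-system"},
+		Spec: corev1.PodSpec{
+			PriorityClassName: "system-cluster-critical", // built-in critical class
+			Containers: []corev1.Container{{
+				Name:  "pause",
+				Image: "registry.k8s.io/pause:3.9",
+				Resources: corev1.ResourceRequirements{
+					Requests: corev1.ResourceList{
+						corev1.ResourceMemory: resource.MustParse("100Mi"),
+					},
+				},
+			}},
+		},
+	}
+	_, _ = client.CoreV1().Pods("kube-system").Create(ctx, critical, metav1.CreateOptions{})
+}
+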
+ [It] validates lower priority pod preemption by critical pod [Conformance] + test/e2e/scheduling/preemption.go:222 + STEP: Create pods that use 4/5 of node resources. 01/14/23 03:42:35.793 + Jan 14 03:42:35.812: INFO: Created pod: pod0-0-sched-preemption-low-priority + Jan 14 03:42:35.820: INFO: Created pod: pod0-1-sched-preemption-medium-priority + Jan 14 03:42:35.837: INFO: Created pod: pod1-0-sched-preemption-medium-priority + Jan 14 03:42:35.845: INFO: Created pod: pod1-1-sched-preemption-medium-priority + Jan 14 03:42:35.858: INFO: Created pod: pod2-0-sched-preemption-medium-priority + Jan 14 03:42:35.864: INFO: Created pod: pod2-1-sched-preemption-medium-priority + STEP: Wait for pods to be scheduled. 01/14/23 03:42:35.864 + Jan 14 03:42:35.864: INFO: Waiting up to 5m0s for pod "pod0-0-sched-preemption-low-priority" in namespace "sched-preemption-6887" to be "running" + Jan 14 03:42:35.870: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 5.902539ms + Jan 14 03:42:37.874: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.009704977s + Jan 14 03:42:37.874: INFO: Pod "pod0-0-sched-preemption-low-priority" satisfied condition "running" + Jan 14 03:42:37.874: INFO: Waiting up to 5m0s for pod "pod0-1-sched-preemption-medium-priority" in namespace "sched-preemption-6887" to be "running" + Jan 14 03:42:37.877: INFO: Pod "pod0-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.754521ms + Jan 14 03:42:37.877: INFO: Pod "pod0-1-sched-preemption-medium-priority" satisfied condition "running" + Jan 14 03:42:37.877: INFO: Waiting up to 5m0s for pod "pod1-0-sched-preemption-medium-priority" in namespace "sched-preemption-6887" to be "running" + Jan 14 03:42:37.879: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.574334ms + Jan 14 03:42:37.879: INFO: Pod "pod1-0-sched-preemption-medium-priority" satisfied condition "running" + Jan 14 03:42:37.879: INFO: Waiting up to 5m0s for pod "pod1-1-sched-preemption-medium-priority" in namespace "sched-preemption-6887" to be "running" + Jan 14 03:42:37.882: INFO: Pod "pod1-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.534385ms + Jan 14 03:42:37.882: INFO: Pod "pod1-1-sched-preemption-medium-priority" satisfied condition "running" + Jan 14 03:42:37.882: INFO: Waiting up to 5m0s for pod "pod2-0-sched-preemption-medium-priority" in namespace "sched-preemption-6887" to be "running" + Jan 14 03:42:37.884: INFO: Pod "pod2-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.418698ms + Jan 14 03:42:37.884: INFO: Pod "pod2-0-sched-preemption-medium-priority" satisfied condition "running" + Jan 14 03:42:37.884: INFO: Waiting up to 5m0s for pod "pod2-1-sched-preemption-medium-priority" in namespace "sched-preemption-6887" to be "running" + Jan 14 03:42:37.887: INFO: Pod "pod2-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.33395ms + Jan 14 03:42:37.887: INFO: Pod "pod2-1-sched-preemption-medium-priority" satisfied condition "running" + STEP: Run a critical pod that use same resources as that of a lower priority pod 01/14/23 03:42:37.887 + Jan 14 03:42:37.898: INFO: Waiting up to 2m0s for pod "critical-pod" in namespace "kube-system" to be "running" + Jan 14 03:42:37.903: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.261426ms + Jan 14 03:42:39.907: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008348363s + Jan 14 03:42:41.908: INFO: Pod "critical-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009177056s + Jan 14 03:42:43.908: INFO: Pod "critical-pod": Phase="Running", Reason="", readiness=true. Elapsed: 6.00933582s + Jan 14 03:42:43.908: INFO: Pod "critical-pod" satisfied condition "running" + [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/node/init/init.go:32 + Jan 14 03:42:43.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:84 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "sched-preemption-6887" for this suite. 01/14/23 03:42:43.991 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-storage] Downward API volume + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:84 +[BeforeEach] [sig-storage] Downward API volume + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:42:44.004 +Jan 14 03:42:44.004: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename downward-api 01/14/23 03:42:44.005 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:42:44.017 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:42:44.02 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 +[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:84 +STEP: Creating a pod to test downward API volume plugin 01/14/23 03:42:44.022 +Jan 14 03:42:44.032: INFO: Waiting up to 5m0s for pod "downwardapi-volume-578eb116-2e5d-46de-9215-a9a089a8a01b" in namespace "downward-api-2839" to be "Succeeded or Failed" +Jan 14 03:42:44.036: INFO: Pod "downwardapi-volume-578eb116-2e5d-46de-9215-a9a089a8a01b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.293819ms +Jan 14 03:42:46.041: INFO: Pod "downwardapi-volume-578eb116-2e5d-46de-9215-a9a089a8a01b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008666864s +Jan 14 03:42:48.041: INFO: Pod "downwardapi-volume-578eb116-2e5d-46de-9215-a9a089a8a01b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008721105s +STEP: Saw pod success 01/14/23 03:42:48.041 +Jan 14 03:42:48.041: INFO: Pod "downwardapi-volume-578eb116-2e5d-46de-9215-a9a089a8a01b" satisfied condition "Succeeded or Failed" +Jan 14 03:42:48.044: INFO: Trying to get logs from node 10.0.1.106 pod downwardapi-volume-578eb116-2e5d-46de-9215-a9a089a8a01b container client-container: +STEP: delete the pod 01/14/23 03:42:48.05 +Jan 14 03:42:48.067: INFO: Waiting for pod downwardapi-volume-578eb116-2e5d-46de-9215-a9a089a8a01b to disappear +Jan 14 03:42:48.070: INFO: Pod downwardapi-volume-578eb116-2e5d-46de-9215-a9a089a8a01b no longer exists +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 +Jan 14 03:42:48.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 +STEP: Destroying namespace "downward-api-2839" for this suite. 01/14/23 03:42:48.075 +------------------------------ +• [4.079 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:84 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Downward API volume + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:42:44.004 + Jan 14 03:42:44.004: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename downward-api 01/14/23 03:42:44.005 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:42:44.017 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:42:44.02 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 + [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:84 + STEP: Creating a pod to test downward API volume plugin 01/14/23 03:42:44.022 + Jan 14 03:42:44.032: INFO: Waiting up to 5m0s for pod "downwardapi-volume-578eb116-2e5d-46de-9215-a9a089a8a01b" in namespace "downward-api-2839" to be "Succeeded or Failed" + Jan 14 03:42:44.036: INFO: Pod "downwardapi-volume-578eb116-2e5d-46de-9215-a9a089a8a01b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.293819ms + Jan 14 03:42:46.041: INFO: Pod "downwardapi-volume-578eb116-2e5d-46de-9215-a9a089a8a01b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008666864s + Jan 14 03:42:48.041: INFO: Pod "downwardapi-volume-578eb116-2e5d-46de-9215-a9a089a8a01b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008721105s + STEP: Saw pod success 01/14/23 03:42:48.041 + Jan 14 03:42:48.041: INFO: Pod "downwardapi-volume-578eb116-2e5d-46de-9215-a9a089a8a01b" satisfied condition "Succeeded or Failed" + Jan 14 03:42:48.044: INFO: Trying to get logs from node 10.0.1.106 pod downwardapi-volume-578eb116-2e5d-46de-9215-a9a089a8a01b container client-container: + STEP: delete the pod 01/14/23 03:42:48.05 + Jan 14 03:42:48.067: INFO: Waiting for pod downwardapi-volume-578eb116-2e5d-46de-9215-a9a089a8a01b to disappear + Jan 14 03:42:48.070: INFO: Pod downwardapi-volume-578eb116-2e5d-46de-9215-a9a089a8a01b no longer exists + [AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 + Jan 14 03:42:48.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 + STEP: Destroying namespace "downward-api-2839" for this suite. 01/14/23 03:42:48.075 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should ensure that all services are removed when a namespace is deleted [Conformance] + test/e2e/apimachinery/namespace.go:251 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:42:48.085 +Jan 14 03:42:48.085: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename namespaces 01/14/23 03:42:48.085 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:42:48.096 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:42:48.098 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/metrics/init/init.go:31 +[It] should ensure that all services are removed when a namespace is deleted [Conformance] + test/e2e/apimachinery/namespace.go:251 +STEP: Creating a test namespace 01/14/23 03:42:48.1 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:42:48.124 +STEP: Creating a service in the namespace 01/14/23 03:42:48.126 +STEP: Deleting the namespace 01/14/23 03:42:48.138 +STEP: Waiting for the namespace to be removed. 01/14/23 03:42:48.145 +STEP: Recreating the namespace 01/14/23 03:42:54.15 +STEP: Verifying there is no service in the namespace 01/14/23 03:42:54.163 +[AfterEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/node/init/init.go:32 +Jan 14 03:42:54.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "namespaces-6306" for this suite. 01/14/23 03:42:54.17 +STEP: Destroying namespace "nsdeletetest-9520" for this suite. 01/14/23 03:42:54.175 +Jan 14 03:42:54.177: INFO: Namespace nsdeletetest-9520 was already deleted +STEP: Destroying namespace "nsdeletetest-5375" for this suite. 
01/14/23 03:42:54.177 +------------------------------ +• [SLOW TEST] [6.099 seconds] +[sig-api-machinery] Namespaces [Serial] +test/e2e/apimachinery/framework.go:23 + should ensure that all services are removed when a namespace is deleted [Conformance] + test/e2e/apimachinery/namespace.go:251 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Namespaces [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:42:48.085 + Jan 14 03:42:48.085: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename namespaces 01/14/23 03:42:48.085 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:42:48.096 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:42:48.098 + [BeforeEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/metrics/init/init.go:31 + [It] should ensure that all services are removed when a namespace is deleted [Conformance] + test/e2e/apimachinery/namespace.go:251 + STEP: Creating a test namespace 01/14/23 03:42:48.1 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:42:48.124 + STEP: Creating a service in the namespace 01/14/23 03:42:48.126 + STEP: Deleting the namespace 01/14/23 03:42:48.138 + STEP: Waiting for the namespace to be removed. 01/14/23 03:42:48.145 + STEP: Recreating the namespace 01/14/23 03:42:54.15 + STEP: Verifying there is no service in the namespace 01/14/23 03:42:54.163 + [AfterEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/node/init/init.go:32 + Jan 14 03:42:54.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "namespaces-6306" for this suite. 01/14/23 03:42:54.17 + STEP: Destroying namespace "nsdeletetest-9520" for this suite. 01/14/23 03:42:54.175 + Jan 14 03:42:54.177: INFO: Namespace nsdeletetest-9520 was already deleted + STEP: Destroying namespace "nsdeletetest-5375" for this suite. 
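+
+The property checked above (deleting a namespace removes the services inside it) reduces to
+a short client-go round trip; the namespace and service names below are illustrative, and
+the wait between delete and recreate is elided:
+
+package main
+
+import (
+	"context"
+	"fmt"
+
+	corev1 "k8s.io/api/core/v1"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"k8s.io/apimachinery/pkg/util/intstr"
+	"k8s.io/client-go/kubernetes"
+	"k8s.io/client-go/tools/clientcmd"
+)
+
+func main() {
+	cfg, _ := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig-1841037317")
+	client := kubernetes.NewForConfigOrDie(cfg)
+	ctx := context.Background()
+
+	ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: "nsdeletetest"}}
+	_, _ = client.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{})
+
+	svc := &corev1.Service{
+		ObjectMeta: metav1.ObjectMeta{Name: "test-service", Namespace: "nsdeletetest"},
+		Spec: corev1.ServiceSpec{
+			Ports: []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(80)}},
+		},
+	}
+	_, _ = client.CoreV1().Services("nsdeletetest").Create(ctx, svc, metav1.CreateOptions{})
+
+	// Namespace deletion garbage-collects every object scoped to it.
+	_ = client.CoreV1().Namespaces().Delete(ctx, "nsdeletetest", metav1.DeleteOptions{})
+	// ... wait for the namespace to disappear, then recreate it ...
+	_, _ = client.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{})
+	svcs, _ := client.CoreV1().Services("nsdeletetest").List(ctx, metav1.ListOptions{})
+	fmt.Printf("services after recreate: %d\n", len(svcs.Items)) // expect 0
+}
+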
01/14/23 03:42:54.177 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[sig-storage] CSIInlineVolumes + should support ephemeral VolumeLifecycleMode in CSIDriver API [Conformance] + test/e2e/storage/csi_inline.go:46 +[BeforeEach] [sig-storage] CSIInlineVolumes + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:42:54.184 +Jan 14 03:42:54.184: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename csiinlinevolumes 01/14/23 03:42:54.185 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:42:54.198 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:42:54.2 +[BeforeEach] [sig-storage] CSIInlineVolumes + test/e2e/framework/metrics/init/init.go:31 +[It] should support ephemeral VolumeLifecycleMode in CSIDriver API [Conformance] + test/e2e/storage/csi_inline.go:46 +STEP: creating 01/14/23 03:42:54.202 +STEP: getting 01/14/23 03:42:54.219 +STEP: listing 01/14/23 03:42:54.224 +STEP: deleting 01/14/23 03:42:54.227 +[AfterEach] [sig-storage] CSIInlineVolumes + test/e2e/framework/node/init/init.go:32 +Jan 14 03:42:54.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] CSIInlineVolumes + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] CSIInlineVolumes + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] CSIInlineVolumes + tear down framework | framework.go:193 +STEP: Destroying namespace "csiinlinevolumes-1749" for this suite. 01/14/23 03:42:54.245 +------------------------------ +• [0.066 seconds] +[sig-storage] CSIInlineVolumes +test/e2e/storage/utils/framework.go:23 + should support ephemeral VolumeLifecycleMode in CSIDriver API [Conformance] + test/e2e/storage/csi_inline.go:46 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] CSIInlineVolumes + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:42:54.184 + Jan 14 03:42:54.184: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename csiinlinevolumes 01/14/23 03:42:54.185 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:42:54.198 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:42:54.2 + [BeforeEach] [sig-storage] CSIInlineVolumes + test/e2e/framework/metrics/init/init.go:31 + [It] should support ephemeral VolumeLifecycleMode in CSIDriver API [Conformance] + test/e2e/storage/csi_inline.go:46 + STEP: creating 01/14/23 03:42:54.202 + STEP: getting 01/14/23 03:42:54.219 + STEP: listing 01/14/23 03:42:54.224 + STEP: deleting 01/14/23 03:42:54.227 + [AfterEach] [sig-storage] CSIInlineVolumes + test/e2e/framework/node/init/init.go:32 + Jan 14 03:42:54.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] CSIInlineVolumes + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] CSIInlineVolumes + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] CSIInlineVolumes + tear down framework | framework.go:193 + STEP: Destroying namespace "csiinlinevolumes-1749" for this suite. 
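+
+The create/get/list/delete steps above drive the CSIDriver API, whose ephemeral support
+lives in spec.volumeLifecycleModes. A minimal sketch with an illustrative driver name:
+
+package main
+
+import (
+	"context"
+
+	storagev1 "k8s.io/api/storage/v1"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"k8s.io/client-go/kubernetes"
+	"k8s.io/client-go/tools/clientcmd"
+)
+
+func main() {
+	cfg, _ := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig-1841037317")
+	client := kubernetes.NewForConfigOrDie(cfg)
+	ctx := context.Background()
+
+	driver := &storagev1.CSIDriver{
+		ObjectMeta: metav1.ObjectMeta{Name: "inline.example.com"},
+		Spec: storagev1.CSIDriverSpec{
+			// Ephemeral lets the driver back inline volumes declared in pod specs.
+			VolumeLifecycleModes: []storagev1.VolumeLifecycleMode{
+				storagev1.VolumeLifecycleEphemeral,
+			},
+		},
+	}
+	_, _ = client.StorageV1().CSIDrivers().Create(ctx, driver, metav1.CreateOptions{})
+	_, _ = client.StorageV1().CSIDrivers().Get(ctx, driver.Name, metav1.GetOptions{})
+	_, _ = client.StorageV1().CSIDrivers().List(ctx, metav1.ListOptions{})
+	_ = client.StorageV1().CSIDrivers().Delete(ctx, driver.Name, metav1.DeleteOptions{})
+}
+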
01/14/23 03:42:54.245 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + test/e2e/scheduling/predicates.go:704 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:42:54.251 +Jan 14 03:42:54.251: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename sched-pred 01/14/23 03:42:54.252 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:42:54.263 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:42:54.265 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:97 +Jan 14 03:42:54.267: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Jan 14 03:42:54.277: INFO: Waiting for terminating namespaces to be deleted... +Jan 14 03:42:54.280: INFO: +Logging pods the apiserver thinks is on node 10.0.1.106 before test +Jan 14 03:42:54.290: INFO: kubernetes-proxy-544fb566b4-fh64j from default started at 2023-01-14 03:20:34 +0000 UTC (1 container statuses recorded) +Jan 14 03:42:54.290: INFO: Container kubernetes-proxy ready: false, restart count 9 +Jan 14 03:42:54.290: INFO: csi-cbs-node-5wf2s from kube-system started at 2023-01-13 08:11:20 +0000 UTC (2 container statuses recorded) +Jan 14 03:42:54.290: INFO: Container cbs-csi ready: true, restart count 1 +Jan 14 03:42:54.290: INFO: Container driver-registrar ready: true, restart count 0 +Jan 14 03:42:54.290: INFO: ip-masq-agent-rx9k6 from kube-system started at 2023-01-13 08:11:20 +0000 UTC (1 container statuses recorded) +Jan 14 03:42:54.290: INFO: Container ip-masq-agent ready: true, restart count 0 +Jan 14 03:42:54.290: INFO: kube-proxy-s6xxg from kube-system started at 2023-01-13 08:11:20 +0000 UTC (1 container statuses recorded) +Jan 14 03:42:54.290: INFO: Container kube-proxy ready: true, restart count 0 +Jan 14 03:42:54.290: INFO: tke-bridge-agent-frbcm from kube-system started at 2023-01-13 08:11:20 +0000 UTC (1 container statuses recorded) +Jan 14 03:42:54.290: INFO: Container tke-bridge-agent ready: true, restart count 1 +Jan 14 03:42:54.290: INFO: tke-cni-agent-nv7pn from kube-system started at 2023-01-13 08:11:20 +0000 UTC (1 container statuses recorded) +Jan 14 03:42:54.290: INFO: Container tke-cni-agent ready: true, restart count 0 +Jan 14 03:42:54.290: INFO: tke-monitor-agent-xhdhg from kube-system started at 2023-01-13 08:11:20 +0000 UTC (1 container statuses recorded) +Jan 14 03:42:54.290: INFO: Container tke-monitor-agent ready: true, restart count 0 +Jan 14 03:42:54.290: INFO: sonobuoy-systemd-logs-daemon-set-0fc25bd2d6604eac-jkjnf from sonobuoy started at 2023-01-14 03:34:54 +0000 UTC (2 container statuses recorded) +Jan 14 03:42:54.290: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jan 14 03:42:54.290: INFO: Container systemd-logs ready: true, restart count 0 +Jan 14 03:42:54.290: INFO: ss2-1 from statefulset-8862 started at 2023-01-14 02:33:11 +0000 UTC (1 container statuses recorded) +Jan 14 03:42:54.290: INFO: Container webserver ready: true, restart count 0 +Jan 14 03:42:54.290: INFO: +Logging pods the apiserver thinks is on node 
10.0.1.212 before test +Jan 14 03:42:54.301: INFO: kubernetes-proxy-544fb566b4-zpvz8 from default started at 2023-01-14 03:20:34 +0000 UTC (1 container statuses recorded) +Jan 14 03:42:54.301: INFO: Container kubernetes-proxy ready: false, restart count 9 +Jan 14 03:42:54.301: INFO: csi-cbs-node-ddpcx from kube-system started at 2023-01-13 08:11:16 +0000 UTC (2 container statuses recorded) +Jan 14 03:42:54.301: INFO: Container cbs-csi ready: true, restart count 0 +Jan 14 03:42:54.301: INFO: Container driver-registrar ready: true, restart count 0 +Jan 14 03:42:54.301: INFO: ip-masq-agent-8wxxs from kube-system started at 2023-01-13 08:11:16 +0000 UTC (1 container statuses recorded) +Jan 14 03:42:54.301: INFO: Container ip-masq-agent ready: true, restart count 0 +Jan 14 03:42:54.301: INFO: kube-proxy-npt42 from kube-system started at 2023-01-13 08:11:16 +0000 UTC (1 container statuses recorded) +Jan 14 03:42:54.301: INFO: Container kube-proxy ready: true, restart count 0 +Jan 14 03:42:54.301: INFO: tke-bridge-agent-4rffd from kube-system started at 2023-01-13 08:11:16 +0000 UTC (1 container statuses recorded) +Jan 14 03:42:54.301: INFO: Container tke-bridge-agent ready: true, restart count 1 +Jan 14 03:42:54.301: INFO: tke-cni-agent-xmggs from kube-system started at 2023-01-13 08:11:16 +0000 UTC (1 container statuses recorded) +Jan 14 03:42:54.301: INFO: Container tke-cni-agent ready: true, restart count 0 +Jan 14 03:42:54.301: INFO: tke-monitor-agent-6gtt6 from kube-system started at 2023-01-13 08:11:16 +0000 UTC (1 container statuses recorded) +Jan 14 03:42:54.301: INFO: Container tke-monitor-agent ready: true, restart count 0 +Jan 14 03:42:54.301: INFO: sonobuoy-e2e-job-1b1a46fb40e34267 from sonobuoy started at 2023-01-14 03:34:54 +0000 UTC (2 container statuses recorded) +Jan 14 03:42:54.301: INFO: Container e2e ready: true, restart count 0 +Jan 14 03:42:54.301: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jan 14 03:42:54.301: INFO: sonobuoy-systemd-logs-daemon-set-0fc25bd2d6604eac-lgxk7 from sonobuoy started at 2023-01-14 03:34:54 +0000 UTC (2 container statuses recorded) +Jan 14 03:42:54.301: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jan 14 03:42:54.301: INFO: Container systemd-logs ready: true, restart count 0 +Jan 14 03:42:54.301: INFO: ss2-2 from statefulset-8862 started at 2023-01-14 02:33:30 +0000 UTC (1 container statuses recorded) +Jan 14 03:42:54.301: INFO: Container webserver ready: true, restart count 0 +Jan 14 03:42:54.301: INFO: +Logging pods the apiserver thinks is on node 10.0.1.99 before test +Jan 14 03:42:54.313: INFO: csi-cbs-node-q4l9b from kube-system started at 2023-01-13 08:11:43 +0000 UTC (2 container statuses recorded) +Jan 14 03:42:54.313: INFO: Container cbs-csi ready: true, restart count 0 +Jan 14 03:42:54.313: INFO: Container driver-registrar ready: true, restart count 0 +Jan 14 03:42:54.313: INFO: ip-masq-agent-kmrrk from kube-system started at 2023-01-13 08:11:43 +0000 UTC (1 container statuses recorded) +Jan 14 03:42:54.313: INFO: Container ip-masq-agent ready: true, restart count 0 +Jan 14 03:42:54.313: INFO: kube-proxy-g4qjh from kube-system started at 2023-01-13 08:11:43 +0000 UTC (1 container statuses recorded) +Jan 14 03:42:54.313: INFO: Container kube-proxy ready: true, restart count 0 +Jan 14 03:42:54.313: INFO: tke-bridge-agent-hzv6c from kube-system started at 2023-01-13 08:11:43 +0000 UTC (1 container statuses recorded) +Jan 14 03:42:54.313: INFO: Container tke-bridge-agent ready: true, restart count 1 +Jan 14 
03:42:54.313: INFO: tke-cni-agent-7mk9b from kube-system started at 2023-01-13 08:11:43 +0000 UTC (1 container statuses recorded) +Jan 14 03:42:54.313: INFO: Container tke-cni-agent ready: true, restart count 0 +Jan 14 03:42:54.313: INFO: tke-monitor-agent-g27mp from kube-system started at 2023-01-13 08:11:43 +0000 UTC (1 container statuses recorded) +Jan 14 03:42:54.313: INFO: Container tke-monitor-agent ready: true, restart count 0 +Jan 14 03:42:54.313: INFO: sonobuoy from sonobuoy started at 2023-01-14 03:34:53 +0000 UTC (1 container statuses recorded) +Jan 14 03:42:54.313: INFO: Container kube-sonobuoy ready: true, restart count 0 +Jan 14 03:42:54.313: INFO: sonobuoy-systemd-logs-daemon-set-0fc25bd2d6604eac-nwh62 from sonobuoy started at 2023-01-14 03:34:54 +0000 UTC (2 container statuses recorded) +Jan 14 03:42:54.313: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jan 14 03:42:54.313: INFO: Container systemd-logs ready: true, restart count 0 +Jan 14 03:42:54.313: INFO: ss2-0 from statefulset-8862 started at 2023-01-14 02:33:10 +0000 UTC (1 container statuses recorded) +Jan 14 03:42:54.313: INFO: Container webserver ready: true, restart count 0 +[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + test/e2e/scheduling/predicates.go:704 +STEP: Trying to launch a pod without a label to get a node which can launch it. 01/14/23 03:42:54.313 +Jan 14 03:42:54.322: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-4500" to be "running" +Jan 14 03:42:54.325: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.611947ms +Jan 14 03:42:56.330: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.007976518s +Jan 14 03:42:56.331: INFO: Pod "without-label" satisfied condition "running" +STEP: Explicitly delete pod here to free the resource it takes. 01/14/23 03:42:56.333 +STEP: Trying to apply a random label on the found node. 01/14/23 03:42:56.349 +STEP: verifying the node has the label kubernetes.io/e2e-cad82982-5428-4e35-bc41-f3eabd8c472b 95 01/14/23 03:42:56.358 +STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled 01/14/23 03:42:56.361 +Jan 14 03:42:56.371: INFO: Waiting up to 5m0s for pod "pod4" in namespace "sched-pred-4500" to be "not pending" +Jan 14 03:42:56.374: INFO: Pod "pod4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.49838ms +Jan 14 03:42:58.378: INFO: Pod "pod4": Phase="Running", Reason="", readiness=true. Elapsed: 2.006740122s +Jan 14 03:42:58.378: INFO: Pod "pod4" satisfied condition "not pending" +STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 10.0.1.106 on the node which pod4 resides and expect not scheduled 01/14/23 03:42:58.378 +Jan 14 03:42:58.385: INFO: Waiting up to 5m0s for pod "pod5" in namespace "sched-pred-4500" to be "not pending" +Jan 14 03:42:58.388: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.819488ms +Jan 14 03:43:00.392: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006884275s +Jan 14 03:43:02.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008809481s +Jan 14 03:43:04.392: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.007007328s +Jan 14 03:43:06.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.00864792s +Jan 14 03:43:08.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.008721728s +Jan 14 03:43:10.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.007899647s +Jan 14 03:43:12.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.007990403s +Jan 14 03:43:14.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.007778802s +Jan 14 03:43:16.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.007807879s +Jan 14 03:43:18.395: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.009507273s +Jan 14 03:43:20.392: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.007320529s +Jan 14 03:43:22.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 24.008089878s +Jan 14 03:43:24.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 26.007854474s +Jan 14 03:43:26.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 28.008003308s +Jan 14 03:43:28.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 30.007875501s +Jan 14 03:43:30.392: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 32.006928234s +Jan 14 03:43:32.392: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 34.007127195s +Jan 14 03:43:34.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 36.007729095s +Jan 14 03:43:36.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 38.007749676s +Jan 14 03:43:38.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 40.008567489s +Jan 14 03:43:40.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 42.007805221s +Jan 14 03:43:42.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 44.008803435s +Jan 14 03:43:44.395: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 46.009624553s +Jan 14 03:43:46.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 48.007620853s +Jan 14 03:43:48.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 50.008382456s +Jan 14 03:43:50.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 52.007552245s +Jan 14 03:43:52.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 54.008852422s +Jan 14 03:43:54.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 56.009001151s +Jan 14 03:43:56.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 58.008253385s +Jan 14 03:43:58.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.007761664s +Jan 14 03:44:00.392: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.00686855s +Jan 14 03:44:02.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.00861809s +Jan 14 03:44:04.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.008758434s +Jan 14 03:44:06.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.008143734s +Jan 14 03:44:08.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.007944948s +Jan 14 03:44:10.392: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m12.006993139s +Jan 14 03:44:12.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.007947057s +Jan 14 03:44:14.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.008329642s +Jan 14 03:44:16.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.008410154s +Jan 14 03:44:18.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.008547111s +Jan 14 03:44:20.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.00742554s +Jan 14 03:44:22.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.007518452s +Jan 14 03:44:24.392: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.00692583s +Jan 14 03:44:26.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.008182512s +Jan 14 03:44:28.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.007567469s +Jan 14 03:44:30.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.00768315s +Jan 14 03:44:32.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.00886463s +Jan 14 03:44:34.392: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.007172547s +Jan 14 03:44:36.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.008194904s +Jan 14 03:44:38.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.008000975s +Jan 14 03:44:40.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.008204904s +Jan 14 03:44:42.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.008103975s +Jan 14 03:44:44.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.007789946s +Jan 14 03:44:46.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.008486694s +Jan 14 03:44:48.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.008116964s +Jan 14 03:44:50.392: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.006986698s +Jan 14 03:44:52.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.007651853s +Jan 14 03:44:54.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.008458835s +Jan 14 03:44:56.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.007742897s +Jan 14 03:44:58.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.00892191s +Jan 14 03:45:00.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m2.007427877s +Jan 14 03:45:02.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m4.008576738s +Jan 14 03:45:04.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m6.008336515s +Jan 14 03:45:06.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m8.008882263s +Jan 14 03:45:08.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m10.008670616s +Jan 14 03:45:10.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m12.007663514s +Jan 14 03:45:12.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m14.008885185s +Jan 14 03:45:14.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2m16.008727306s +Jan 14 03:45:16.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m18.00780095s +Jan 14 03:45:18.396: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m20.010536946s +Jan 14 03:45:20.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.007753336s +Jan 14 03:45:22.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m24.008319692s +Jan 14 03:45:24.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m26.008010716s +Jan 14 03:45:26.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m28.007412468s +Jan 14 03:45:28.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m30.008425775s +Jan 14 03:45:30.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m32.007445902s +Jan 14 03:45:32.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m34.007677871s +Jan 14 03:45:34.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m36.008935869s +Jan 14 03:45:36.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m38.00768404s +Jan 14 03:45:38.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m40.008444081s +Jan 14 03:45:40.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m42.007670474s +Jan 14 03:45:42.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m44.008830881s +Jan 14 03:45:44.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m46.00866987s +Jan 14 03:45:46.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m48.007901655s +Jan 14 03:45:48.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m50.008776068s +Jan 14 03:45:50.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m52.007803457s +Jan 14 03:45:52.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m54.008559892s +Jan 14 03:45:54.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m56.008585203s +Jan 14 03:45:56.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m58.007706907s +Jan 14 03:45:58.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m0.008487855s +Jan 14 03:46:00.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m2.007721936s +Jan 14 03:46:02.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m4.009049986s +Jan 14 03:46:04.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m6.007911006s +Jan 14 03:46:06.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m8.008049174s +Jan 14 03:46:08.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m10.0078601s +Jan 14 03:46:10.392: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m12.00673993s +Jan 14 03:46:12.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m14.008451986s +Jan 14 03:46:14.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m16.007454163s +Jan 14 03:46:16.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m18.007681032s +Jan 14 03:46:18.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3m20.008376058s +Jan 14 03:46:20.392: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m22.007117774s +Jan 14 03:46:22.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m24.008103643s +Jan 14 03:46:24.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m26.007907095s +Jan 14 03:46:26.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m28.008486956s +Jan 14 03:46:28.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m30.008401095s +Jan 14 03:46:30.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m32.008403541s +Jan 14 03:46:32.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m34.008354266s +Jan 14 03:46:34.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m36.008076968s +Jan 14 03:46:36.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m38.008186061s +Jan 14 03:46:38.396: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m40.01062589s +Jan 14 03:46:40.392: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m42.007300674s +Jan 14 03:46:42.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m44.008303849s +Jan 14 03:46:44.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m46.008149254s +Jan 14 03:46:46.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m48.00833503s +Jan 14 03:46:48.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m50.008108459s +Jan 14 03:46:50.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m52.008382276s +Jan 14 03:46:52.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m54.008470369s +Jan 14 03:46:54.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m56.008241795s +Jan 14 03:46:56.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m58.008568439s +Jan 14 03:46:58.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m0.008693962s +Jan 14 03:47:00.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m2.007735229s +Jan 14 03:47:02.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m4.008882526s +Jan 14 03:47:04.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m6.008978856s +Jan 14 03:47:06.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m8.008215905s +Jan 14 03:47:08.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m10.008651622s +Jan 14 03:47:10.392: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m12.007339545s +Jan 14 03:47:12.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m14.008840376s +Jan 14 03:47:14.397: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m16.011814786s +Jan 14 03:47:16.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m18.008007181s +Jan 14 03:47:18.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m20.00796039s +Jan 14 03:47:20.392: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m22.006880581s +Jan 14 03:47:22.395: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4m24.009484047s +Jan 14 03:47:24.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m26.008101278s +Jan 14 03:47:26.396: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m28.011214768s +Jan 14 03:47:28.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m30.007870632s +Jan 14 03:47:30.392: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m32.006849184s +Jan 14 03:47:32.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m34.008842773s +Jan 14 03:47:34.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m36.008844343s +Jan 14 03:47:36.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m38.008018887s +Jan 14 03:47:38.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m40.00774169s +Jan 14 03:47:40.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m42.007734318s +Jan 14 03:47:42.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m44.009388239s +Jan 14 03:47:44.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m46.007973434s +Jan 14 03:47:46.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m48.007672684s +Jan 14 03:47:48.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m50.008139396s +Jan 14 03:47:50.392: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m52.006967363s +Jan 14 03:47:52.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m54.007852109s +Jan 14 03:47:54.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.009020859s +Jan 14 03:47:56.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.0084692s +Jan 14 03:47:58.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.008198561s +Jan 14 03:47:58.396: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.011021017s +STEP: removing the label kubernetes.io/e2e-cad82982-5428-4e35-bc41-f3eabd8c472b off the node 10.0.1.106 01/14/23 03:47:58.396 +STEP: verifying the node doesn't have the label kubernetes.io/e2e-cad82982-5428-4e35-bc41-f3eabd8c472b 01/14/23 03:47:58.408 +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/node/init/init.go:32 +Jan 14 03:47:58.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:88 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "sched-pred-4500" for this suite. 
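
[Editor's note] The five minutes of "Pending" polls above are the test's expected outcome, not a failure: pod4 binds hostPort 54322 on 0.0.0.0 (every host address), so pod5, which requests the same port and protocol on the node address 10.0.1.106, can never pass the scheduler's host-port check on that node and correctly times out Pending at 5m0s. Below is a minimal client-go sketch of the same conflict; it is not the suite's actual code, and the kubeconfig path, image tag, and namespace are illustrative placeholders.

```go
// Sketch of the hostPort/hostIP conflict exercised by the test above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// hostPortPod builds a pod that binds hostPort 54322 on the given hostIP
// ("" is equivalent to 0.0.0.0) and pins itself to one node via the
// well-known hostname label, much as the test does with its random label.
func hostPortPod(name, hostIP string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"kubernetes.io/hostname": "10.0.1.106"},
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "registry.k8s.io/e2e-test-images/agnhost:2.43", // illustrative tag
				Ports: []corev1.ContainerPort{{
					ContainerPort: 8080,
					HostPort:      54322,
					HostIP:        hostIP,
					Protocol:      corev1.ProtocolTCP,
				}},
			}},
		},
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	// pod4 claims 0.0.0.0:54322/TCP on the node and schedules; pod5 asks for
	// 10.0.1.106:54322/TCP, which 0.0.0.0 already covers, so it stays Pending.
	for _, p := range []*corev1.Pod{hostPortPod("pod4", ""), hostPortPod("pod5", "10.0.1.106")} {
		if _, err := cs.CoreV1().Pods("default").Create(ctx, p, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("created", p.Name)
	}
}
```

Run against a real cluster, pod4 should go Running while pod5 stays Pending with a FailedScheduling event about unavailable node ports, matching the log above.
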
01/14/23 03:47:58.415 +------------------------------ +• [SLOW TEST] [304.170 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +test/e2e/scheduling/framework.go:40 + validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + test/e2e/scheduling/predicates.go:704 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:42:54.251 + Jan 14 03:42:54.251: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename sched-pred 01/14/23 03:42:54.252 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:42:54.263 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:42:54.265 + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:97 + Jan 14 03:42:54.267: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready + Jan 14 03:42:54.277: INFO: Waiting for terminating namespaces to be deleted... + Jan 14 03:42:54.280: INFO: + Logging pods the apiserver thinks is on node 10.0.1.106 before test + Jan 14 03:42:54.290: INFO: kubernetes-proxy-544fb566b4-fh64j from default started at 2023-01-14 03:20:34 +0000 UTC (1 container statuses recorded) + Jan 14 03:42:54.290: INFO: Container kubernetes-proxy ready: false, restart count 9 + Jan 14 03:42:54.290: INFO: csi-cbs-node-5wf2s from kube-system started at 2023-01-13 08:11:20 +0000 UTC (2 container statuses recorded) + Jan 14 03:42:54.290: INFO: Container cbs-csi ready: true, restart count 1 + Jan 14 03:42:54.290: INFO: Container driver-registrar ready: true, restart count 0 + Jan 14 03:42:54.290: INFO: ip-masq-agent-rx9k6 from kube-system started at 2023-01-13 08:11:20 +0000 UTC (1 container statuses recorded) + Jan 14 03:42:54.290: INFO: Container ip-masq-agent ready: true, restart count 0 + Jan 14 03:42:54.290: INFO: kube-proxy-s6xxg from kube-system started at 2023-01-13 08:11:20 +0000 UTC (1 container statuses recorded) + Jan 14 03:42:54.290: INFO: Container kube-proxy ready: true, restart count 0 + Jan 14 03:42:54.290: INFO: tke-bridge-agent-frbcm from kube-system started at 2023-01-13 08:11:20 +0000 UTC (1 container statuses recorded) + Jan 14 03:42:54.290: INFO: Container tke-bridge-agent ready: true, restart count 1 + Jan 14 03:42:54.290: INFO: tke-cni-agent-nv7pn from kube-system started at 2023-01-13 08:11:20 +0000 UTC (1 container statuses recorded) + Jan 14 03:42:54.290: INFO: Container tke-cni-agent ready: true, restart count 0 + Jan 14 03:42:54.290: INFO: tke-monitor-agent-xhdhg from kube-system started at 2023-01-13 08:11:20 +0000 UTC (1 container statuses recorded) + Jan 14 03:42:54.290: INFO: Container tke-monitor-agent ready: true, restart count 0 + Jan 14 03:42:54.290: INFO: sonobuoy-systemd-logs-daemon-set-0fc25bd2d6604eac-jkjnf from sonobuoy started at 2023-01-14 03:34:54 +0000 UTC (2 container statuses recorded) + Jan 14 03:42:54.290: INFO: Container sonobuoy-worker ready: true, restart count 0 + Jan 14 03:42:54.290: INFO: Container systemd-logs ready: true, restart count 0 + Jan 14 03:42:54.290: INFO: ss2-1 from statefulset-8862 started at 2023-01-14 02:33:11 +0000 UTC (1 container statuses recorded) + Jan 14 03:42:54.290: INFO: Container webserver ready: true, restart count 0 + Jan 14 
03:42:54.290: INFO: + Logging pods the apiserver thinks is on node 10.0.1.212 before test + Jan 14 03:42:54.301: INFO: kubernetes-proxy-544fb566b4-zpvz8 from default started at 2023-01-14 03:20:34 +0000 UTC (1 container statuses recorded) + Jan 14 03:42:54.301: INFO: Container kubernetes-proxy ready: false, restart count 9 + Jan 14 03:42:54.301: INFO: csi-cbs-node-ddpcx from kube-system started at 2023-01-13 08:11:16 +0000 UTC (2 container statuses recorded) + Jan 14 03:42:54.301: INFO: Container cbs-csi ready: true, restart count 0 + Jan 14 03:42:54.301: INFO: Container driver-registrar ready: true, restart count 0 + Jan 14 03:42:54.301: INFO: ip-masq-agent-8wxxs from kube-system started at 2023-01-13 08:11:16 +0000 UTC (1 container statuses recorded) + Jan 14 03:42:54.301: INFO: Container ip-masq-agent ready: true, restart count 0 + Jan 14 03:42:54.301: INFO: kube-proxy-npt42 from kube-system started at 2023-01-13 08:11:16 +0000 UTC (1 container statuses recorded) + Jan 14 03:42:54.301: INFO: Container kube-proxy ready: true, restart count 0 + Jan 14 03:42:54.301: INFO: tke-bridge-agent-4rffd from kube-system started at 2023-01-13 08:11:16 +0000 UTC (1 container statuses recorded) + Jan 14 03:42:54.301: INFO: Container tke-bridge-agent ready: true, restart count 1 + Jan 14 03:42:54.301: INFO: tke-cni-agent-xmggs from kube-system started at 2023-01-13 08:11:16 +0000 UTC (1 container statuses recorded) + Jan 14 03:42:54.301: INFO: Container tke-cni-agent ready: true, restart count 0 + Jan 14 03:42:54.301: INFO: tke-monitor-agent-6gtt6 from kube-system started at 2023-01-13 08:11:16 +0000 UTC (1 container statuses recorded) + Jan 14 03:42:54.301: INFO: Container tke-monitor-agent ready: true, restart count 0 + Jan 14 03:42:54.301: INFO: sonobuoy-e2e-job-1b1a46fb40e34267 from sonobuoy started at 2023-01-14 03:34:54 +0000 UTC (2 container statuses recorded) + Jan 14 03:42:54.301: INFO: Container e2e ready: true, restart count 0 + Jan 14 03:42:54.301: INFO: Container sonobuoy-worker ready: true, restart count 0 + Jan 14 03:42:54.301: INFO: sonobuoy-systemd-logs-daemon-set-0fc25bd2d6604eac-lgxk7 from sonobuoy started at 2023-01-14 03:34:54 +0000 UTC (2 container statuses recorded) + Jan 14 03:42:54.301: INFO: Container sonobuoy-worker ready: true, restart count 0 + Jan 14 03:42:54.301: INFO: Container systemd-logs ready: true, restart count 0 + Jan 14 03:42:54.301: INFO: ss2-2 from statefulset-8862 started at 2023-01-14 02:33:30 +0000 UTC (1 container statuses recorded) + Jan 14 03:42:54.301: INFO: Container webserver ready: true, restart count 0 + Jan 14 03:42:54.301: INFO: + Logging pods the apiserver thinks is on node 10.0.1.99 before test + Jan 14 03:42:54.313: INFO: csi-cbs-node-q4l9b from kube-system started at 2023-01-13 08:11:43 +0000 UTC (2 container statuses recorded) + Jan 14 03:42:54.313: INFO: Container cbs-csi ready: true, restart count 0 + Jan 14 03:42:54.313: INFO: Container driver-registrar ready: true, restart count 0 + Jan 14 03:42:54.313: INFO: ip-masq-agent-kmrrk from kube-system started at 2023-01-13 08:11:43 +0000 UTC (1 container statuses recorded) + Jan 14 03:42:54.313: INFO: Container ip-masq-agent ready: true, restart count 0 + Jan 14 03:42:54.313: INFO: kube-proxy-g4qjh from kube-system started at 2023-01-13 08:11:43 +0000 UTC (1 container statuses recorded) + Jan 14 03:42:54.313: INFO: Container kube-proxy ready: true, restart count 0 + Jan 14 03:42:54.313: INFO: tke-bridge-agent-hzv6c from kube-system started at 2023-01-13 08:11:43 +0000 UTC (1 container statuses 
recorded) + Jan 14 03:42:54.313: INFO: Container tke-bridge-agent ready: true, restart count 1 + Jan 14 03:42:54.313: INFO: tke-cni-agent-7mk9b from kube-system started at 2023-01-13 08:11:43 +0000 UTC (1 container statuses recorded) + Jan 14 03:42:54.313: INFO: Container tke-cni-agent ready: true, restart count 0 + Jan 14 03:42:54.313: INFO: tke-monitor-agent-g27mp from kube-system started at 2023-01-13 08:11:43 +0000 UTC (1 container statuses recorded) + Jan 14 03:42:54.313: INFO: Container tke-monitor-agent ready: true, restart count 0 + Jan 14 03:42:54.313: INFO: sonobuoy from sonobuoy started at 2023-01-14 03:34:53 +0000 UTC (1 container statuses recorded) + Jan 14 03:42:54.313: INFO: Container kube-sonobuoy ready: true, restart count 0 + Jan 14 03:42:54.313: INFO: sonobuoy-systemd-logs-daemon-set-0fc25bd2d6604eac-nwh62 from sonobuoy started at 2023-01-14 03:34:54 +0000 UTC (2 container statuses recorded) + Jan 14 03:42:54.313: INFO: Container sonobuoy-worker ready: true, restart count 0 + Jan 14 03:42:54.313: INFO: Container systemd-logs ready: true, restart count 0 + Jan 14 03:42:54.313: INFO: ss2-0 from statefulset-8862 started at 2023-01-14 02:33:10 +0000 UTC (1 container statuses recorded) + Jan 14 03:42:54.313: INFO: Container webserver ready: true, restart count 0 + [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] + test/e2e/scheduling/predicates.go:704 + STEP: Trying to launch a pod without a label to get a node which can launch it. 01/14/23 03:42:54.313 + Jan 14 03:42:54.322: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-4500" to be "running" + Jan 14 03:42:54.325: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 2.611947ms + Jan 14 03:42:56.330: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.007976518s + Jan 14 03:42:56.331: INFO: Pod "without-label" satisfied condition "running" + STEP: Explicitly delete pod here to free the resource it takes. 01/14/23 03:42:56.333 + STEP: Trying to apply a random label on the found node. 01/14/23 03:42:56.349 + STEP: verifying the node has the label kubernetes.io/e2e-cad82982-5428-4e35-bc41-f3eabd8c472b 95 01/14/23 03:42:56.358 + STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled 01/14/23 03:42:56.361 + Jan 14 03:42:56.371: INFO: Waiting up to 5m0s for pod "pod4" in namespace "sched-pred-4500" to be "not pending" + Jan 14 03:42:56.374: INFO: Pod "pod4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.49838ms + Jan 14 03:42:58.378: INFO: Pod "pod4": Phase="Running", Reason="", readiness=true. Elapsed: 2.006740122s + Jan 14 03:42:58.378: INFO: Pod "pod4" satisfied condition "not pending" + STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 10.0.1.106 on the node which pod4 resides and expect not scheduled 01/14/23 03:42:58.378 + Jan 14 03:42:58.385: INFO: Waiting up to 5m0s for pod "pod5" in namespace "sched-pred-4500" to be "not pending" + Jan 14 03:42:58.388: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.819488ms + Jan 14 03:43:00.392: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006884275s + Jan 14 03:43:02.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008809481s + Jan 14 03:43:04.392: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.007007328s + Jan 14 03:43:06.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.00864792s + Jan 14 03:43:08.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.008721728s + Jan 14 03:43:10.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.007899647s + Jan 14 03:43:12.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.007990403s + Jan 14 03:43:14.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.007778802s + Jan 14 03:43:16.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.007807879s + Jan 14 03:43:18.395: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.009507273s + Jan 14 03:43:20.392: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.007320529s + Jan 14 03:43:22.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 24.008089878s + Jan 14 03:43:24.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 26.007854474s + Jan 14 03:43:26.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 28.008003308s + Jan 14 03:43:28.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 30.007875501s + Jan 14 03:43:30.392: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 32.006928234s + Jan 14 03:43:32.392: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 34.007127195s + Jan 14 03:43:34.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 36.007729095s + Jan 14 03:43:36.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 38.007749676s + Jan 14 03:43:38.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 40.008567489s + Jan 14 03:43:40.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 42.007805221s + Jan 14 03:43:42.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 44.008803435s + Jan 14 03:43:44.395: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 46.009624553s + Jan 14 03:43:46.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 48.007620853s + Jan 14 03:43:48.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 50.008382456s + Jan 14 03:43:50.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 52.007552245s + Jan 14 03:43:52.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 54.008852422s + Jan 14 03:43:54.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 56.009001151s + Jan 14 03:43:56.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 58.008253385s + Jan 14 03:43:58.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.007761664s + Jan 14 03:44:00.392: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.00686855s + Jan 14 03:44:02.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.00861809s + Jan 14 03:44:04.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.008758434s + Jan 14 03:44:06.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.008143734s + Jan 14 03:44:08.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m10.007944948s + Jan 14 03:44:10.392: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.006993139s + Jan 14 03:44:12.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.007947057s + Jan 14 03:44:14.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.008329642s + Jan 14 03:44:16.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.008410154s + Jan 14 03:44:18.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.008547111s + Jan 14 03:44:20.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.00742554s + Jan 14 03:44:22.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.007518452s + Jan 14 03:44:24.392: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.00692583s + Jan 14 03:44:26.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.008182512s + Jan 14 03:44:28.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.007567469s + Jan 14 03:44:30.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.00768315s + Jan 14 03:44:32.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.00886463s + Jan 14 03:44:34.392: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.007172547s + Jan 14 03:44:36.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.008194904s + Jan 14 03:44:38.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.008000975s + Jan 14 03:44:40.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.008204904s + Jan 14 03:44:42.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.008103975s + Jan 14 03:44:44.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.007789946s + Jan 14 03:44:46.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.008486694s + Jan 14 03:44:48.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.008116964s + Jan 14 03:44:50.392: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.006986698s + Jan 14 03:44:52.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.007651853s + Jan 14 03:44:54.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.008458835s + Jan 14 03:44:56.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.007742897s + Jan 14 03:44:58.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.00892191s + Jan 14 03:45:00.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m2.007427877s + Jan 14 03:45:02.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m4.008576738s + Jan 14 03:45:04.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m6.008336515s + Jan 14 03:45:06.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m8.008882263s + Jan 14 03:45:08.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m10.008670616s + Jan 14 03:45:10.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m12.007663514s + Jan 14 03:45:12.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2m14.008885185s + Jan 14 03:45:14.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m16.008727306s + Jan 14 03:45:16.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m18.00780095s + Jan 14 03:45:18.396: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m20.010536946s + Jan 14 03:45:20.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m22.007753336s + Jan 14 03:45:22.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m24.008319692s + Jan 14 03:45:24.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m26.008010716s + Jan 14 03:45:26.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m28.007412468s + Jan 14 03:45:28.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m30.008425775s + Jan 14 03:45:30.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m32.007445902s + Jan 14 03:45:32.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m34.007677871s + Jan 14 03:45:34.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m36.008935869s + Jan 14 03:45:36.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m38.00768404s + Jan 14 03:45:38.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m40.008444081s + Jan 14 03:45:40.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m42.007670474s + Jan 14 03:45:42.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m44.008830881s + Jan 14 03:45:44.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m46.00866987s + Jan 14 03:45:46.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m48.007901655s + Jan 14 03:45:48.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m50.008776068s + Jan 14 03:45:50.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m52.007803457s + Jan 14 03:45:52.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m54.008559892s + Jan 14 03:45:54.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m56.008585203s + Jan 14 03:45:56.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 2m58.007706907s + Jan 14 03:45:58.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m0.008487855s + Jan 14 03:46:00.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m2.007721936s + Jan 14 03:46:02.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m4.009049986s + Jan 14 03:46:04.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m6.007911006s + Jan 14 03:46:06.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m8.008049174s + Jan 14 03:46:08.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m10.0078601s + Jan 14 03:46:10.392: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m12.00673993s + Jan 14 03:46:12.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m14.008451986s + Jan 14 03:46:14.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m16.007454163s + Jan 14 03:46:16.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3m18.007681032s + Jan 14 03:46:18.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m20.008376058s + Jan 14 03:46:20.392: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m22.007117774s + Jan 14 03:46:22.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m24.008103643s + Jan 14 03:46:24.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m26.007907095s + Jan 14 03:46:26.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m28.008486956s + Jan 14 03:46:28.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m30.008401095s + Jan 14 03:46:30.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m32.008403541s + Jan 14 03:46:32.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m34.008354266s + Jan 14 03:46:34.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m36.008076968s + Jan 14 03:46:36.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m38.008186061s + Jan 14 03:46:38.396: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m40.01062589s + Jan 14 03:46:40.392: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m42.007300674s + Jan 14 03:46:42.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m44.008303849s + Jan 14 03:46:44.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m46.008149254s + Jan 14 03:46:46.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m48.00833503s + Jan 14 03:46:48.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m50.008108459s + Jan 14 03:46:50.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m52.008382276s + Jan 14 03:46:52.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m54.008470369s + Jan 14 03:46:54.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m56.008241795s + Jan 14 03:46:56.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 3m58.008568439s + Jan 14 03:46:58.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m0.008693962s + Jan 14 03:47:00.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m2.007735229s + Jan 14 03:47:02.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m4.008882526s + Jan 14 03:47:04.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m6.008978856s + Jan 14 03:47:06.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m8.008215905s + Jan 14 03:47:08.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m10.008651622s + Jan 14 03:47:10.392: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m12.007339545s + Jan 14 03:47:12.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m14.008840376s + Jan 14 03:47:14.397: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m16.011814786s + Jan 14 03:47:16.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m18.008007181s + Jan 14 03:47:18.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m20.00796039s + Jan 14 03:47:20.392: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4m22.006880581s + Jan 14 03:47:22.395: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m24.009484047s + Jan 14 03:47:24.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m26.008101278s + Jan 14 03:47:26.396: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m28.011214768s + Jan 14 03:47:28.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m30.007870632s + Jan 14 03:47:30.392: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m32.006849184s + Jan 14 03:47:32.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m34.008842773s + Jan 14 03:47:34.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m36.008844343s + Jan 14 03:47:36.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m38.008018887s + Jan 14 03:47:38.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m40.00774169s + Jan 14 03:47:40.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m42.007734318s + Jan 14 03:47:42.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m44.009388239s + Jan 14 03:47:44.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m46.007973434s + Jan 14 03:47:46.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m48.007672684s + Jan 14 03:47:48.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m50.008139396s + Jan 14 03:47:50.392: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m52.006967363s + Jan 14 03:47:52.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m54.007852109s + Jan 14 03:47:54.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.009020859s + Jan 14 03:47:56.394: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.0084692s + Jan 14 03:47:58.393: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.008198561s + Jan 14 03:47:58.396: INFO: Pod "pod5": Phase="Pending", Reason="", readiness=false. Elapsed: 5m0.011021017s + STEP: removing the label kubernetes.io/e2e-cad82982-5428-4e35-bc41-f3eabd8c472b off the node 10.0.1.106 01/14/23 03:47:58.396 + STEP: verifying the node doesn't have the label kubernetes.io/e2e-cad82982-5428-4e35-bc41-f3eabd8c472b 01/14/23 03:47:58.408 + [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/node/init/init.go:32 + Jan 14 03:47:58.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:88 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "sched-pred-4500" for this suite. 
01/14/23 03:47:58.415 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-apps] ReplicationController + should serve a basic image on each replica with a public image [Conformance] + test/e2e/apps/rc.go:67 +[BeforeEach] [sig-apps] ReplicationController + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:47:58.421 +Jan 14 03:47:58.421: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename replication-controller 01/14/23 03:47:58.422 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:47:58.435 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:47:58.437 +[BeforeEach] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:57 +[It] should serve a basic image on each replica with a public image [Conformance] + test/e2e/apps/rc.go:67 +STEP: Creating replication controller my-hostname-basic-838d05e4-d9ab-4fba-a446-978293797417 01/14/23 03:47:58.44 +Jan 14 03:47:58.463: INFO: Pod name my-hostname-basic-838d05e4-d9ab-4fba-a446-978293797417: Found 0 pods out of 1 +Jan 14 03:48:03.470: INFO: Pod name my-hostname-basic-838d05e4-d9ab-4fba-a446-978293797417: Found 1 pods out of 1 +Jan 14 03:48:03.470: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-838d05e4-d9ab-4fba-a446-978293797417" are running +Jan 14 03:48:03.470: INFO: Waiting up to 5m0s for pod "my-hostname-basic-838d05e4-d9ab-4fba-a446-978293797417-7l6c2" in namespace "replication-controller-3246" to be "running" +Jan 14 03:48:03.473: INFO: Pod "my-hostname-basic-838d05e4-d9ab-4fba-a446-978293797417-7l6c2": Phase="Running", Reason="", readiness=true. Elapsed: 2.670164ms +Jan 14 03:48:03.473: INFO: Pod "my-hostname-basic-838d05e4-d9ab-4fba-a446-978293797417-7l6c2" satisfied condition "running" +Jan 14 03:48:03.473: INFO: Pod "my-hostname-basic-838d05e4-d9ab-4fba-a446-978293797417-7l6c2" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-14 03:47:58 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-14 03:47:59 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-14 03:47:59 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-14 03:47:58 +0000 UTC Reason: Message:}]) +Jan 14 03:48:03.473: INFO: Trying to dial the pod +Jan 14 03:48:08.484: INFO: Controller my-hostname-basic-838d05e4-d9ab-4fba-a446-978293797417: Got expected result from replica 1 [my-hostname-basic-838d05e4-d9ab-4fba-a446-978293797417-7l6c2]: "my-hostname-basic-838d05e4-d9ab-4fba-a446-978293797417-7l6c2", 1 of 1 required successes so far +[AfterEach] [sig-apps] ReplicationController + test/e2e/framework/node/init/init.go:32 +Jan 14 03:48:08.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] ReplicationController + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] ReplicationController + tear down framework | framework.go:193 +STEP: Destroying namespace "replication-controller-3246" for this suite. 
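
[Editor's note] The ReplicationController test above passes once its single replica answers an HTTP dial with its own pod name. A sketch of an equivalent controller follows, assuming agnhost's serve-hostname mode, which replies on port 9376 with the pod's hostname; the names, namespace, and image tag are placeholders rather than values from this run.

```go
// Sketch of a one-replica ReplicationController serving its own hostname.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	replicas := int32(1)
	labels := map[string]string{"name": "my-hostname-basic"}
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels, // RC selector must match the template labels
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "my-hostname-basic",
						Image: "registry.k8s.io/e2e-test-images/agnhost:2.43", // illustrative tag
						Args:  []string{"serve-hostname"},                     // replies with the pod name
						Ports: []corev1.ContainerPort{{ContainerPort: 9376}},
					}},
				},
			},
		},
	}
	if _, err := cs.CoreV1().ReplicationControllers("default").Create(
		context.Background(), rc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```
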
01/14/23 03:48:08.489 +------------------------------ +• [SLOW TEST] [10.073 seconds] +[sig-apps] ReplicationController +test/e2e/apps/framework.go:23 + should serve a basic image on each replica with a public image [Conformance] + test/e2e/apps/rc.go:67 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] ReplicationController + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:47:58.421 + Jan 14 03:47:58.421: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename replication-controller 01/14/23 03:47:58.422 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:47:58.435 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:47:58.437 + [BeforeEach] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:57 + [It] should serve a basic image on each replica with a public image [Conformance] + test/e2e/apps/rc.go:67 + STEP: Creating replication controller my-hostname-basic-838d05e4-d9ab-4fba-a446-978293797417 01/14/23 03:47:58.44 + Jan 14 03:47:58.463: INFO: Pod name my-hostname-basic-838d05e4-d9ab-4fba-a446-978293797417: Found 0 pods out of 1 + Jan 14 03:48:03.470: INFO: Pod name my-hostname-basic-838d05e4-d9ab-4fba-a446-978293797417: Found 1 pods out of 1 + Jan 14 03:48:03.470: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-838d05e4-d9ab-4fba-a446-978293797417" are running + Jan 14 03:48:03.470: INFO: Waiting up to 5m0s for pod "my-hostname-basic-838d05e4-d9ab-4fba-a446-978293797417-7l6c2" in namespace "replication-controller-3246" to be "running" + Jan 14 03:48:03.473: INFO: Pod "my-hostname-basic-838d05e4-d9ab-4fba-a446-978293797417-7l6c2": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.670164ms + Jan 14 03:48:03.473: INFO: Pod "my-hostname-basic-838d05e4-d9ab-4fba-a446-978293797417-7l6c2" satisfied condition "running" + Jan 14 03:48:03.473: INFO: Pod "my-hostname-basic-838d05e4-d9ab-4fba-a446-978293797417-7l6c2" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-14 03:47:58 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-14 03:47:59 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-14 03:47:59 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-14 03:47:58 +0000 UTC Reason: Message:}]) + Jan 14 03:48:03.473: INFO: Trying to dial the pod + Jan 14 03:48:08.484: INFO: Controller my-hostname-basic-838d05e4-d9ab-4fba-a446-978293797417: Got expected result from replica 1 [my-hostname-basic-838d05e4-d9ab-4fba-a446-978293797417-7l6c2]: "my-hostname-basic-838d05e4-d9ab-4fba-a446-978293797417-7l6c2", 1 of 1 required successes so far + [AfterEach] [sig-apps] ReplicationController + test/e2e/framework/node/init/init.go:32 + Jan 14 03:48:08.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] ReplicationController + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] ReplicationController + tear down framework | framework.go:193 + STEP: Destroying namespace "replication-controller-3246" for this suite. 01/14/23 03:48:08.489 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl api-versions + should check if v1 is in available api versions [Conformance] + test/e2e/kubectl/kubectl.go:824 +[BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:48:08.495 +Jan 14 03:48:08.496: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename kubectl 01/14/23 03:48:08.496 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:48:08.51 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:48:08.512 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[It] should check if v1 is in available api versions [Conformance] + test/e2e/kubectl/kubectl.go:824 +STEP: validating api versions 01/14/23 03:48:08.514 +Jan 14 03:48:08.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-7129 api-versions' +Jan 14 03:48:08.582: INFO: stderr: "" +Jan 14 03:48:08.582: INFO: stdout: 
"admissionregistration.k8s.io/v1\napiextensions.k8s.io/v1\napiregistration.k8s.io/v1\napps/v1\nauthentication.k8s.io/v1\nauthorization.k8s.io/v1\nautoscaling/v1\nautoscaling/v2\nbatch/v1\ncertificates.k8s.io/v1\ncloud.tencent.com/v1alpha1\ncoordination.k8s.io/v1\ncustom.metrics.k8s.io/v1beta1\ndiscovery.k8s.io/v1\nevents.k8s.io/v1\nflowcontrol.apiserver.k8s.io/v1beta2\nflowcontrol.apiserver.k8s.io/v1beta3\nmetrics.k8s.io/v1beta1\nmonitor.tencent.io/v1alpha1\nnetworking.k8s.io/v1\nnetworking.tke.cloud.tencent.com/v1alpha1\nnode.k8s.io/v1\npolicy/v1\nrbac.authorization.k8s.io/v1\nscheduling.k8s.io/v1\nsnapshot.storage.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 +Jan 14 03:48:08.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 +STEP: Destroying namespace "kubectl-7129" for this suite. 01/14/23 03:48:08.587 +------------------------------ +• [0.097 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl api-versions + test/e2e/kubectl/kubectl.go:818 + should check if v1 is in available api versions [Conformance] + test/e2e/kubectl/kubectl.go:824 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:48:08.495 + Jan 14 03:48:08.496: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename kubectl 01/14/23 03:48:08.496 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:48:08.51 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:48:08.512 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [It] should check if v1 is in available api versions [Conformance] + test/e2e/kubectl/kubectl.go:824 + STEP: validating api versions 01/14/23 03:48:08.514 + Jan 14 03:48:08.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-7129 api-versions' + Jan 14 03:48:08.582: INFO: stderr: "" + Jan 14 03:48:08.582: INFO: stdout: "admissionregistration.k8s.io/v1\napiextensions.k8s.io/v1\napiregistration.k8s.io/v1\napps/v1\nauthentication.k8s.io/v1\nauthorization.k8s.io/v1\nautoscaling/v1\nautoscaling/v2\nbatch/v1\ncertificates.k8s.io/v1\ncloud.tencent.com/v1alpha1\ncoordination.k8s.io/v1\ncustom.metrics.k8s.io/v1beta1\ndiscovery.k8s.io/v1\nevents.k8s.io/v1\nflowcontrol.apiserver.k8s.io/v1beta2\nflowcontrol.apiserver.k8s.io/v1beta3\nmetrics.k8s.io/v1beta1\nmonitor.tencent.io/v1alpha1\nnetworking.k8s.io/v1\nnetworking.tke.cloud.tencent.com/v1alpha1\nnode.k8s.io/v1\npolicy/v1\nrbac.authorization.k8s.io/v1\nscheduling.k8s.io/v1\nsnapshot.storage.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 + Jan 14 03:48:08.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 + 
[DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 + STEP: Destroying namespace "kubectl-7129" for this suite. 01/14/23 03:48:08.587 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-node] Security Context + should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + test/e2e/node/security_context.go:129 +[BeforeEach] [sig-node] Security Context + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:48:08.593 +Jan 14 03:48:08.593: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename security-context 01/14/23 03:48:08.594 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:48:08.606 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:48:08.608 +[BeforeEach] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:31 +[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + test/e2e/node/security_context.go:129 +STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser 01/14/23 03:48:08.61 +Jan 14 03:48:08.618: INFO: Waiting up to 5m0s for pod "security-context-2dd14294-09a1-499a-9003-9f8b79c4db43" in namespace "security-context-5803" to be "Succeeded or Failed" +Jan 14 03:48:08.620: INFO: Pod "security-context-2dd14294-09a1-499a-9003-9f8b79c4db43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.604533ms +Jan 14 03:48:10.624: INFO: Pod "security-context-2dd14294-09a1-499a-9003-9f8b79c4db43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006685665s +Jan 14 03:48:12.626: INFO: Pod "security-context-2dd14294-09a1-499a-9003-9f8b79c4db43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008654815s +STEP: Saw pod success 01/14/23 03:48:12.626 +Jan 14 03:48:12.627: INFO: Pod "security-context-2dd14294-09a1-499a-9003-9f8b79c4db43" satisfied condition "Succeeded or Failed" +Jan 14 03:48:12.630: INFO: Trying to get logs from node 10.0.1.106 pod security-context-2dd14294-09a1-499a-9003-9f8b79c4db43 container test-container: +STEP: delete the pod 01/14/23 03:48:12.643 +Jan 14 03:48:12.653: INFO: Waiting for pod security-context-2dd14294-09a1-499a-9003-9f8b79c4db43 to disappear +Jan 14 03:48:12.656: INFO: Pod security-context-2dd14294-09a1-499a-9003-9f8b79c4db43 no longer exists +[AfterEach] [sig-node] Security Context + test/e2e/framework/node/init/init.go:32 +Jan 14 03:48:12.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Security Context + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Security Context + tear down framework | framework.go:193 +STEP: Destroying namespace "security-context-5803" for this suite. 
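
[Editor's note] The Security Context test above sets pod-level RunAsUser and RunAsGroup and then asserts on the container's output before the pod reaches "Succeeded". Below is a minimal sketch of such a pod; the UID/GID values, image tag, and namespace are illustrative, and the suite's own helper differs in detail.

```go
// Sketch: a pod whose pod-level security context fixes the UID and GID
// for every container, verified by printing them from inside.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	uid, gid := int64(1001), int64(2002) // illustrative IDs
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "security-context-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser:  &uid, // every container runs with this UID...
				RunAsGroup: &gid, // ...and this primary GID
			},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "registry.k8s.io/e2e-test-images/busybox:1.29-4", // illustrative tag
				Command: []string{"sh", "-c", "id -u && id -g"},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(
		context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// The pod should finish "Succeeded" with "1001" and "2002" in its log.
}
```
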
01/14/23 03:48:12.661 +------------------------------ +• [4.073 seconds] +[sig-node] Security Context +test/e2e/node/framework.go:23 + should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + test/e2e/node/security_context.go:129 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Security Context + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:48:08.593 + Jan 14 03:48:08.593: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename security-context 01/14/23 03:48:08.594 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:48:08.606 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:48:08.608 + [BeforeEach] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:31 + [It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + test/e2e/node/security_context.go:129 + STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser 01/14/23 03:48:08.61 + Jan 14 03:48:08.618: INFO: Waiting up to 5m0s for pod "security-context-2dd14294-09a1-499a-9003-9f8b79c4db43" in namespace "security-context-5803" to be "Succeeded or Failed" + Jan 14 03:48:08.620: INFO: Pod "security-context-2dd14294-09a1-499a-9003-9f8b79c4db43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.604533ms + Jan 14 03:48:10.624: INFO: Pod "security-context-2dd14294-09a1-499a-9003-9f8b79c4db43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006685665s + Jan 14 03:48:12.626: INFO: Pod "security-context-2dd14294-09a1-499a-9003-9f8b79c4db43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008654815s + STEP: Saw pod success 01/14/23 03:48:12.626 + Jan 14 03:48:12.627: INFO: Pod "security-context-2dd14294-09a1-499a-9003-9f8b79c4db43" satisfied condition "Succeeded or Failed" + Jan 14 03:48:12.630: INFO: Trying to get logs from node 10.0.1.106 pod security-context-2dd14294-09a1-499a-9003-9f8b79c4db43 container test-container: + STEP: delete the pod 01/14/23 03:48:12.643 + Jan 14 03:48:12.653: INFO: Waiting for pod security-context-2dd14294-09a1-499a-9003-9f8b79c4db43 to disappear + Jan 14 03:48:12.656: INFO: Pod security-context-2dd14294-09a1-499a-9003-9f8b79c4db43 no longer exists + [AfterEach] [sig-node] Security Context + test/e2e/framework/node/init/init.go:32 + Jan 14 03:48:12.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Security Context + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Security Context + tear down framework | framework.go:193 + STEP: Destroying namespace "security-context-5803" for this suite. 
01/14/23 03:48:12.661 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-node] Probing container + should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:184 +[BeforeEach] [sig-node] Probing container + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:48:12.667 +Jan 14 03:48:12.667: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename container-probe 01/14/23 03:48:12.668 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:48:12.681 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:48:12.683 +[BeforeEach] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 +[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:184 +STEP: Creating pod liveness-de259c23-7ce0-440c-965c-cba44214806c in namespace container-probe-6335 01/14/23 03:48:12.685 +Jan 14 03:48:12.694: INFO: Waiting up to 5m0s for pod "liveness-de259c23-7ce0-440c-965c-cba44214806c" in namespace "container-probe-6335" to be "not pending" +Jan 14 03:48:12.697: INFO: Pod "liveness-de259c23-7ce0-440c-965c-cba44214806c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.173205ms +Jan 14 03:48:14.701: INFO: Pod "liveness-de259c23-7ce0-440c-965c-cba44214806c": Phase="Running", Reason="", readiness=true. Elapsed: 2.006670971s +Jan 14 03:48:14.701: INFO: Pod "liveness-de259c23-7ce0-440c-965c-cba44214806c" satisfied condition "not pending" +Jan 14 03:48:14.701: INFO: Started pod liveness-de259c23-7ce0-440c-965c-cba44214806c in namespace container-probe-6335 +STEP: checking the pod's current state and verifying that restartCount is present 01/14/23 03:48:14.701 +Jan 14 03:48:14.704: INFO: Initial restart count of pod liveness-de259c23-7ce0-440c-965c-cba44214806c is 0 +STEP: deleting the pod 01/14/23 03:52:15.277 +[AfterEach] [sig-node] Probing container + test/e2e/framework/node/init/init.go:32 +Jan 14 03:52:15.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Probing container + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Probing container + tear down framework | framework.go:193 +STEP: Destroying namespace "container-probe-6335" for this suite. 
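
[Editor's note] This probe test is a negative check: the container genuinely listens on tcp/8080, so the liveness probe keeps succeeding and restartCount must stay 0 for the whole ~4-minute observation window, which is why the test is SLOW yet passes. A sketch of a pod with such a probe follows, assuming agnhost's netexec mode listening on 8080; the image tag and probe timings are illustrative.

```go
// Sketch: a TCP liveness probe against a port that is actually open,
// so the kubelet should never restart the container.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-tcp-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "registry.k8s.io/e2e-test-images/agnhost:2.43", // illustrative tag
				Args:  []string{"netexec", "--http-port=8080"},        // serves on tcp/8080
				LivenessProbe: &corev1.Probe{
					ProbeHandler: corev1.ProbeHandler{
						TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(8080)},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       10,
					FailureThreshold:    3,
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(
		context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// Because the probe target is listening, restartCount should remain 0,
	// matching the "Initial restart count ... is 0" observation in the log.
}
```
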
01/14/23 03:52:15.297 +------------------------------ +• [SLOW TEST] [242.638 seconds] +[sig-node] Probing container +test/e2e/common/node/framework.go:23 + should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:184 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Probing container + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:48:12.667 + Jan 14 03:48:12.667: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename container-probe 01/14/23 03:48:12.668 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:48:12.681 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:48:12.683 + [BeforeEach] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 + [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:184 + STEP: Creating pod liveness-de259c23-7ce0-440c-965c-cba44214806c in namespace container-probe-6335 01/14/23 03:48:12.685 + Jan 14 03:48:12.694: INFO: Waiting up to 5m0s for pod "liveness-de259c23-7ce0-440c-965c-cba44214806c" in namespace "container-probe-6335" to be "not pending" + Jan 14 03:48:12.697: INFO: Pod "liveness-de259c23-7ce0-440c-965c-cba44214806c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.173205ms + Jan 14 03:48:14.701: INFO: Pod "liveness-de259c23-7ce0-440c-965c-cba44214806c": Phase="Running", Reason="", readiness=true. Elapsed: 2.006670971s + Jan 14 03:48:14.701: INFO: Pod "liveness-de259c23-7ce0-440c-965c-cba44214806c" satisfied condition "not pending" + Jan 14 03:48:14.701: INFO: Started pod liveness-de259c23-7ce0-440c-965c-cba44214806c in namespace container-probe-6335 + STEP: checking the pod's current state and verifying that restartCount is present 01/14/23 03:48:14.701 + Jan 14 03:48:14.704: INFO: Initial restart count of pod liveness-de259c23-7ce0-440c-965c-cba44214806c is 0 + STEP: deleting the pod 01/14/23 03:52:15.277 + [AfterEach] [sig-node] Probing container + test/e2e/framework/node/init/init.go:32 + Jan 14 03:52:15.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Probing container + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Probing container + tear down framework | framework.go:193 + STEP: Destroying namespace "container-probe-6335" for this suite. 
01/14/23 03:52:15.297 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl run pod + should create a pod from an image when restart is Never [Conformance] + test/e2e/kubectl/kubectl.go:1713 +[BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:52:15.305 +Jan 14 03:52:15.305: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename kubectl 01/14/23 03:52:15.306 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:52:15.321 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:52:15.324 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[BeforeEach] Kubectl run pod + test/e2e/kubectl/kubectl.go:1700 +[It] should create a pod from an image when restart is Never [Conformance] + test/e2e/kubectl/kubectl.go:1713 +STEP: running the image registry.k8s.io/e2e-test-images/httpd:2.4.38-4 01/14/23 03:52:15.326 +Jan 14 03:52:15.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-7970 run e2e-test-httpd-pod --restart=Never --pod-running-timeout=2m0s --image=registry.k8s.io/e2e-test-images/httpd:2.4.38-4' +Jan 14 03:52:15.407: INFO: stderr: "" +Jan 14 03:52:15.407: INFO: stdout: "pod/e2e-test-httpd-pod created\n" +STEP: verifying the pod e2e-test-httpd-pod was created 01/14/23 03:52:15.407 +[AfterEach] Kubectl run pod + test/e2e/kubectl/kubectl.go:1704 +Jan 14 03:52:15.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-7970 delete pods e2e-test-httpd-pod' +Jan 14 03:52:17.830: INFO: stderr: "" +Jan 14 03:52:17.830: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 +Jan 14 03:52:17.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 +STEP: Destroying namespace "kubectl-7970" for this suite. 
01/14/23 03:52:17.836 +------------------------------ +• [2.538 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl run pod + test/e2e/kubectl/kubectl.go:1697 + should create a pod from an image when restart is Never [Conformance] + test/e2e/kubectl/kubectl.go:1713 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:52:15.305 + Jan 14 03:52:15.305: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename kubectl 01/14/23 03:52:15.306 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:52:15.321 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:52:15.324 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [BeforeEach] Kubectl run pod + test/e2e/kubectl/kubectl.go:1700 + [It] should create a pod from an image when restart is Never [Conformance] + test/e2e/kubectl/kubectl.go:1713 + STEP: running the image registry.k8s.io/e2e-test-images/httpd:2.4.38-4 01/14/23 03:52:15.326 + Jan 14 03:52:15.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-7970 run e2e-test-httpd-pod --restart=Never --pod-running-timeout=2m0s --image=registry.k8s.io/e2e-test-images/httpd:2.4.38-4' + Jan 14 03:52:15.407: INFO: stderr: "" + Jan 14 03:52:15.407: INFO: stdout: "pod/e2e-test-httpd-pod created\n" + STEP: verifying the pod e2e-test-httpd-pod was created 01/14/23 03:52:15.407 + [AfterEach] Kubectl run pod + test/e2e/kubectl/kubectl.go:1704 + Jan 14 03:52:15.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-7970 delete pods e2e-test-httpd-pod' + Jan 14 03:52:17.830: INFO: stderr: "" + Jan 14 03:52:17.830: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 + Jan 14 03:52:17.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 + STEP: Destroying namespace "kubectl-7970" for this suite. 
01/14/23 03:52:17.836 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:88 +[BeforeEach] [sig-storage] Projected secret + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:52:17.845 +Jan 14 03:52:17.845: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename projected 01/14/23 03:52:17.846 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:52:17.857 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:52:17.859 +[BeforeEach] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:88 +STEP: Creating projection with secret that has name projected-secret-test-map-26f75daf-013c-45cb-969c-c7cb33acc5fb 01/14/23 03:52:17.861 +STEP: Creating a pod to test consume secrets 01/14/23 03:52:17.867 +Jan 14 03:52:17.880: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5b302cb4-ea50-40e2-8e87-78aeceb427da" in namespace "projected-9899" to be "Succeeded or Failed" +Jan 14 03:52:17.883: INFO: Pod "pod-projected-secrets-5b302cb4-ea50-40e2-8e87-78aeceb427da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.874165ms +Jan 14 03:52:19.887: INFO: Pod "pod-projected-secrets-5b302cb4-ea50-40e2-8e87-78aeceb427da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007134347s +Jan 14 03:52:21.888: INFO: Pod "pod-projected-secrets-5b302cb4-ea50-40e2-8e87-78aeceb427da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007644388s +STEP: Saw pod success 01/14/23 03:52:21.888 +Jan 14 03:52:21.888: INFO: Pod "pod-projected-secrets-5b302cb4-ea50-40e2-8e87-78aeceb427da" satisfied condition "Succeeded or Failed" +Jan 14 03:52:21.891: INFO: Trying to get logs from node 10.0.1.106 pod pod-projected-secrets-5b302cb4-ea50-40e2-8e87-78aeceb427da container projected-secret-volume-test: +STEP: delete the pod 01/14/23 03:52:21.902 +Jan 14 03:52:21.911: INFO: Waiting for pod pod-projected-secrets-5b302cb4-ea50-40e2-8e87-78aeceb427da to disappear +Jan 14 03:52:21.914: INFO: Pod pod-projected-secrets-5b302cb4-ea50-40e2-8e87-78aeceb427da no longer exists +[AfterEach] [sig-storage] Projected secret + test/e2e/framework/node/init/init.go:32 +Jan 14 03:52:21.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected secret + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected secret + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-9899" for this suite. 
01/14/23 03:52:21.918 +------------------------------ +• [4.079 seconds] +[sig-storage] Projected secret +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:88 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected secret + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:52:17.845 + Jan 14 03:52:17.845: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename projected 01/14/23 03:52:17.846 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:52:17.857 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:52:17.859 + [BeforeEach] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:88 + STEP: Creating projection with secret that has name projected-secret-test-map-26f75daf-013c-45cb-969c-c7cb33acc5fb 01/14/23 03:52:17.861 + STEP: Creating a pod to test consume secrets 01/14/23 03:52:17.867 + Jan 14 03:52:17.880: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5b302cb4-ea50-40e2-8e87-78aeceb427da" in namespace "projected-9899" to be "Succeeded or Failed" + Jan 14 03:52:17.883: INFO: Pod "pod-projected-secrets-5b302cb4-ea50-40e2-8e87-78aeceb427da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.874165ms + Jan 14 03:52:19.887: INFO: Pod "pod-projected-secrets-5b302cb4-ea50-40e2-8e87-78aeceb427da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007134347s + Jan 14 03:52:21.888: INFO: Pod "pod-projected-secrets-5b302cb4-ea50-40e2-8e87-78aeceb427da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007644388s + STEP: Saw pod success 01/14/23 03:52:21.888 + Jan 14 03:52:21.888: INFO: Pod "pod-projected-secrets-5b302cb4-ea50-40e2-8e87-78aeceb427da" satisfied condition "Succeeded or Failed" + Jan 14 03:52:21.891: INFO: Trying to get logs from node 10.0.1.106 pod pod-projected-secrets-5b302cb4-ea50-40e2-8e87-78aeceb427da container projected-secret-volume-test: + STEP: delete the pod 01/14/23 03:52:21.902 + Jan 14 03:52:21.911: INFO: Waiting for pod pod-projected-secrets-5b302cb4-ea50-40e2-8e87-78aeceb427da to disappear + Jan 14 03:52:21.914: INFO: Pod pod-projected-secrets-5b302cb4-ea50-40e2-8e87-78aeceb427da no longer exists + [AfterEach] [sig-storage] Projected secret + test/e2e/framework/node/init/init.go:32 + Jan 14 03:52:21.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected secret + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected secret + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-9899" for this suite. 
01/14/23 03:52:21.918 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD with validation schema [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:69 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:52:21.925 +Jan 14 03:52:21.925: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename crd-publish-openapi 01/14/23 03:52:21.926 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:52:21.937 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:52:21.939 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[It] works for CRD with validation schema [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:69 +Jan 14 03:52:21.941: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: kubectl validation (kubectl create and apply) allows request with known and required properties 01/14/23 03:52:23.749 +Jan 14 03:52:23.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-1096 --namespace=crd-publish-openapi-1096 create -f -' +Jan 14 03:52:24.331: INFO: stderr: "" +Jan 14 03:52:24.331: INFO: stdout: "e2e-test-crd-publish-openapi-7943-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" +Jan 14 03:52:24.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-1096 --namespace=crd-publish-openapi-1096 delete e2e-test-crd-publish-openapi-7943-crds test-foo' +Jan 14 03:52:24.401: INFO: stderr: "" +Jan 14 03:52:24.401: INFO: stdout: "e2e-test-crd-publish-openapi-7943-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" +Jan 14 03:52:24.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-1096 --namespace=crd-publish-openapi-1096 apply -f -' +Jan 14 03:52:24.899: INFO: stderr: "" +Jan 14 03:52:24.899: INFO: stdout: "e2e-test-crd-publish-openapi-7943-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" +Jan 14 03:52:24.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-1096 --namespace=crd-publish-openapi-1096 delete e2e-test-crd-publish-openapi-7943-crds test-foo' +Jan 14 03:52:24.965: INFO: stderr: "" +Jan 14 03:52:24.965: INFO: stdout: "e2e-test-crd-publish-openapi-7943-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" +STEP: kubectl validation (kubectl create and apply) rejects request with value outside defined enum values 01/14/23 03:52:24.965 +Jan 14 03:52:24.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-1096 --namespace=crd-publish-openapi-1096 create -f -' +Jan 14 03:52:25.134: INFO: rc: 1 +STEP: kubectl validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema 01/14/23 03:52:25.134 +Jan 14 03:52:25.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-1096 --namespace=crd-publish-openapi-1096 create -f -' +Jan 14 03:52:25.611: INFO: rc: 1 +Jan 14 
03:52:25.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-1096 --namespace=crd-publish-openapi-1096 apply -f -' +Jan 14 03:52:25.793: INFO: rc: 1 +STEP: kubectl validation (kubectl create and apply) rejects request without required properties 01/14/23 03:52:25.793 +Jan 14 03:52:25.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-1096 --namespace=crd-publish-openapi-1096 create -f -' +Jan 14 03:52:25.960: INFO: rc: 1 +Jan 14 03:52:25.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-1096 --namespace=crd-publish-openapi-1096 apply -f -' +Jan 14 03:52:26.137: INFO: rc: 1 +STEP: kubectl explain works to explain CR properties 01/14/23 03:52:26.137 +Jan 14 03:52:26.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-1096 explain e2e-test-crd-publish-openapi-7943-crds' +Jan 14 03:52:26.308: INFO: stderr: "" +Jan 14 03:52:26.308: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-7943-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n" +STEP: kubectl explain works to explain CR properties recursively 01/14/23 03:52:26.309 +Jan 14 03:52:26.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-1096 explain e2e-test-crd-publish-openapi-7943-crds.metadata' +Jan 14 03:52:26.477: INFO: stderr: "" +Jan 14 03:52:26.477: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-7943-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n return a 409.\n\n Applied only if Name is not specified. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n Deprecated: selfLink is a legacy read-only field that is no longer\n populated by the system.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. 
More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" +Jan 14 03:52:26.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-1096 explain e2e-test-crd-publish-openapi-7943-crds.spec' +Jan 14 03:52:26.648: INFO: stderr: "" +Jan 14 03:52:26.648: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-7943-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" +Jan 14 03:52:26.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-1096 explain e2e-test-crd-publish-openapi-7943-crds.spec.bars' +Jan 14 03:52:26.821: INFO: stderr: "" +Jan 14 03:52:26.821: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-7943-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<integer>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n feeling\t<string>\n Whether Bar is feeling great.\n\n name\t<string> -required-\n Name of Bar.\n\n" +STEP: kubectl explain works to return error when explain is called on property that doesn't exist 01/14/23 03:52:26.821 +Jan 14 03:52:26.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-1096 explain e2e-test-crd-publish-openapi-7943-crds.spec.bars2' +Jan 14 03:52:26.992: INFO: rc: 1 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Jan 14 03:52:28.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "crd-publish-openapi-1096" for this suite. 
01/14/23 03:52:28.787 +------------------------------ +• [SLOW TEST] [6.868 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + works for CRD with validation schema [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:69 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:52:21.925 + Jan 14 03:52:21.925: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename crd-publish-openapi 01/14/23 03:52:21.926 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:52:21.937 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:52:21.939 + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [It] works for CRD with validation schema [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:69 + Jan 14 03:52:21.941: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: kubectl validation (kubectl create and apply) allows request with known and required properties 01/14/23 03:52:23.749 + Jan 14 03:52:23.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-1096 --namespace=crd-publish-openapi-1096 create -f -' + Jan 14 03:52:24.331: INFO: stderr: "" + Jan 14 03:52:24.331: INFO: stdout: "e2e-test-crd-publish-openapi-7943-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" + Jan 14 03:52:24.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-1096 --namespace=crd-publish-openapi-1096 delete e2e-test-crd-publish-openapi-7943-crds test-foo' + Jan 14 03:52:24.401: INFO: stderr: "" + Jan 14 03:52:24.401: INFO: stdout: "e2e-test-crd-publish-openapi-7943-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" + Jan 14 03:52:24.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-1096 --namespace=crd-publish-openapi-1096 apply -f -' + Jan 14 03:52:24.899: INFO: stderr: "" + Jan 14 03:52:24.899: INFO: stdout: "e2e-test-crd-publish-openapi-7943-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" + Jan 14 03:52:24.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-1096 --namespace=crd-publish-openapi-1096 delete e2e-test-crd-publish-openapi-7943-crds test-foo' + Jan 14 03:52:24.965: INFO: stderr: "" + Jan 14 03:52:24.965: INFO: stdout: "e2e-test-crd-publish-openapi-7943-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" + STEP: kubectl validation (kubectl create and apply) rejects request with value outside defined enum values 01/14/23 03:52:24.965 + Jan 14 03:52:24.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-1096 --namespace=crd-publish-openapi-1096 create -f -' + Jan 14 03:52:25.134: INFO: rc: 1 + STEP: kubectl validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema 01/14/23 03:52:25.134 + Jan 14 03:52:25.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-1096 --namespace=crd-publish-openapi-1096 
create -f -' + Jan 14 03:52:25.611: INFO: rc: 1 + Jan 14 03:52:25.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-1096 --namespace=crd-publish-openapi-1096 apply -f -' + Jan 14 03:52:25.793: INFO: rc: 1 + STEP: kubectl validation (kubectl create and apply) rejects request without required properties 01/14/23 03:52:25.793 + Jan 14 03:52:25.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-1096 --namespace=crd-publish-openapi-1096 create -f -' + Jan 14 03:52:25.960: INFO: rc: 1 + Jan 14 03:52:25.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-1096 --namespace=crd-publish-openapi-1096 apply -f -' + Jan 14 03:52:26.137: INFO: rc: 1 + STEP: kubectl explain works to explain CR properties 01/14/23 03:52:26.137 + Jan 14 03:52:26.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-1096 explain e2e-test-crd-publish-openapi-7943-crds' + Jan 14 03:52:26.308: INFO: stderr: "" + Jan 14 03:52:26.308: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-7943-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n" + STEP: kubectl explain works to explain CR properties recursively 01/14/23 03:52:26.309 + Jan 14 03:52:26.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-1096 explain e2e-test-crd-publish-openapi-7943-crds.metadata' + Jan 14 03:52:26.477: INFO: stderr: "" + Jan 14 03:52:26.477: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-7943-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. 
Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n return a 409.\n\n Applied only if Name is not specified. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n Deprecated: selfLink is a legacy read-only field that is no longer\n populated by the system.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. 
More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" + Jan 14 03:52:26.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-1096 explain e2e-test-crd-publish-openapi-7943-crds.spec' + Jan 14 03:52:26.648: INFO: stderr: "" + Jan 14 03:52:26.648: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-7943-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" + Jan 14 03:52:26.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-1096 explain e2e-test-crd-publish-openapi-7943-crds.spec.bars' + Jan 14 03:52:26.821: INFO: stderr: "" + Jan 14 03:52:26.821: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-7943-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<integer>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n feeling\t<string>\n Whether Bar is feeling great.\n\n name\t<string> -required-\n Name of Bar.\n\n" + STEP: kubectl explain works to return error when explain is called on property that doesn't exist 01/14/23 03:52:26.821 + Jan 14 03:52:26.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-1096 explain e2e-test-crd-publish-openapi-7943-crds.spec.bars2' + Jan 14 03:52:26.992: INFO: rc: 1 + [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Jan 14 03:52:28.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "crd-publish-openapi-1096" for this suite. 
01/14/23 03:52:28.787 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Job + should manage the lifecycle of a job [Conformance] + test/e2e/apps/job.go:703 +[BeforeEach] [sig-apps] Job + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:52:28.794 +Jan 14 03:52:28.794: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename job 01/14/23 03:52:28.794 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:52:28.808 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:52:28.811 +[BeforeEach] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:31 +[It] should manage the lifecycle of a job [Conformance] + test/e2e/apps/job.go:703 +STEP: Creating a suspended job 01/14/23 03:52:28.815 +STEP: Patching the Job 01/14/23 03:52:28.82 +STEP: Watching for Job to be patched 01/14/23 03:52:28.835 +Jan 14 03:52:28.836: INFO: Event ADDED observed for Job e2e-z2rv4 in namespace job-7710 with labels: map[e2e-job-label:e2e-z2rv4] and annotations: map[batch.kubernetes.io/job-tracking:] +Jan 14 03:52:28.836: INFO: Event MODIFIED observed for Job e2e-z2rv4 in namespace job-7710 with labels: map[e2e-job-label:e2e-z2rv4] and annotations: map[batch.kubernetes.io/job-tracking:] +Jan 14 03:52:28.836: INFO: Event MODIFIED found for Job e2e-z2rv4 in namespace job-7710 with labels: map[e2e-job-label:e2e-z2rv4 e2e-z2rv4:patched] and annotations: map[batch.kubernetes.io/job-tracking:] +STEP: Updating the job 01/14/23 03:52:28.836 +STEP: Watching for Job to be updated 01/14/23 03:52:28.845 +Jan 14 03:52:28.846: INFO: Event MODIFIED found for Job e2e-z2rv4 in namespace job-7710 with labels: map[e2e-job-label:e2e-z2rv4 e2e-z2rv4:patched] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] +Jan 14 03:52:28.846: INFO: Found Job annotations: map[string]string{"batch.kubernetes.io/job-tracking":"", "updated":"true"} +STEP: Listing all Jobs with LabelSelector 01/14/23 03:52:28.846 +Jan 14 03:52:28.848: INFO: Job: e2e-z2rv4 as labels: map[e2e-job-label:e2e-z2rv4 e2e-z2rv4:patched] +STEP: Waiting for job to complete 01/14/23 03:52:28.849 +STEP: Delete a job collection with a labelselector 01/14/23 03:52:36.854 +STEP: Watching for Job to be deleted 01/14/23 03:52:36.861 +Jan 14 03:52:36.862: INFO: Event MODIFIED observed for Job e2e-z2rv4 in namespace job-7710 with labels: map[e2e-job-label:e2e-z2rv4 e2e-z2rv4:patched] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] +Jan 14 03:52:36.862: INFO: Event MODIFIED observed for Job e2e-z2rv4 in namespace job-7710 with labels: map[e2e-job-label:e2e-z2rv4 e2e-z2rv4:patched] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] +Jan 14 03:52:36.862: INFO: Event MODIFIED observed for Job e2e-z2rv4 in namespace job-7710 with labels: map[e2e-job-label:e2e-z2rv4 e2e-z2rv4:patched] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] +Jan 14 03:52:36.862: INFO: Event MODIFIED observed for Job e2e-z2rv4 in namespace job-7710 with labels: map[e2e-job-label:e2e-z2rv4 e2e-z2rv4:patched] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] +Jan 14 03:52:36.862: INFO: Event MODIFIED observed for Job e2e-z2rv4 in namespace job-7710 with labels: map[e2e-job-label:e2e-z2rv4 e2e-z2rv4:patched] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] +Jan 14 03:52:36.862: INFO: Event DELETED 
found for Job e2e-z2rv4 in namespace job-7710 with labels: map[e2e-job-label:e2e-z2rv4 e2e-z2rv4:patched] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] +STEP: Relist jobs to confirm deletion 01/14/23 03:52:36.863 +[AfterEach] [sig-apps] Job + test/e2e/framework/node/init/init.go:32 +Jan 14 03:52:36.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] Job + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] Job + tear down framework | framework.go:193 +STEP: Destroying namespace "job-7710" for this suite. 01/14/23 03:52:36.87 +------------------------------ +• [SLOW TEST] [8.081 seconds] +[sig-apps] Job +test/e2e/apps/framework.go:23 + should manage the lifecycle of a job [Conformance] + test/e2e/apps/job.go:703 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Job + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:52:28.794 + Jan 14 03:52:28.794: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename job 01/14/23 03:52:28.794 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:52:28.808 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:52:28.811 + [BeforeEach] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:31 + [It] should manage the lifecycle of a job [Conformance] + test/e2e/apps/job.go:703 + STEP: Creating a suspended job 01/14/23 03:52:28.815 + STEP: Patching the Job 01/14/23 03:52:28.82 + STEP: Watching for Job to be patched 01/14/23 03:52:28.835 + Jan 14 03:52:28.836: INFO: Event ADDED observed for Job e2e-z2rv4 in namespace job-7710 with labels: map[e2e-job-label:e2e-z2rv4] and annotations: map[batch.kubernetes.io/job-tracking:] + Jan 14 03:52:28.836: INFO: Event MODIFIED observed for Job e2e-z2rv4 in namespace job-7710 with labels: map[e2e-job-label:e2e-z2rv4] and annotations: map[batch.kubernetes.io/job-tracking:] + Jan 14 03:52:28.836: INFO: Event MODIFIED found for Job e2e-z2rv4 in namespace job-7710 with labels: map[e2e-job-label:e2e-z2rv4 e2e-z2rv4:patched] and annotations: map[batch.kubernetes.io/job-tracking:] + STEP: Updating the job 01/14/23 03:52:28.836 + STEP: Watching for Job to be updated 01/14/23 03:52:28.845 + Jan 14 03:52:28.846: INFO: Event MODIFIED found for Job e2e-z2rv4 in namespace job-7710 with labels: map[e2e-job-label:e2e-z2rv4 e2e-z2rv4:patched] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] + Jan 14 03:52:28.846: INFO: Found Job annotations: map[string]string{"batch.kubernetes.io/job-tracking":"", "updated":"true"} + STEP: Listing all Jobs with LabelSelector 01/14/23 03:52:28.846 + Jan 14 03:52:28.848: INFO: Job: e2e-z2rv4 as labels: map[e2e-job-label:e2e-z2rv4 e2e-z2rv4:patched] + STEP: Waiting for job to complete 01/14/23 03:52:28.849 + STEP: Delete a job collection with a labelselector 01/14/23 03:52:36.854 + STEP: Watching for Job to be deleted 01/14/23 03:52:36.861 + Jan 14 03:52:36.862: INFO: Event MODIFIED observed for Job e2e-z2rv4 in namespace job-7710 with labels: map[e2e-job-label:e2e-z2rv4 e2e-z2rv4:patched] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] + Jan 14 03:52:36.862: INFO: Event MODIFIED observed for Job e2e-z2rv4 in namespace job-7710 with labels: map[e2e-job-label:e2e-z2rv4 e2e-z2rv4:patched] and annotations: map[batch.kubernetes.io/job-tracking: 
updated:true] + Jan 14 03:52:36.862: INFO: Event MODIFIED observed for Job e2e-z2rv4 in namespace job-7710 with labels: map[e2e-job-label:e2e-z2rv4 e2e-z2rv4:patched] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] + Jan 14 03:52:36.862: INFO: Event MODIFIED observed for Job e2e-z2rv4 in namespace job-7710 with labels: map[e2e-job-label:e2e-z2rv4 e2e-z2rv4:patched] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] + Jan 14 03:52:36.862: INFO: Event MODIFIED observed for Job e2e-z2rv4 in namespace job-7710 with labels: map[e2e-job-label:e2e-z2rv4 e2e-z2rv4:patched] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] + Jan 14 03:52:36.862: INFO: Event DELETED found for Job e2e-z2rv4 in namespace job-7710 with labels: map[e2e-job-label:e2e-z2rv4 e2e-z2rv4:patched] and annotations: map[batch.kubernetes.io/job-tracking: updated:true] + STEP: Relist jobs to confirm deletion 01/14/23 03:52:36.863 + [AfterEach] [sig-apps] Job + test/e2e/framework/node/init/init.go:32 + Jan 14 03:52:36.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] Job + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] Job + tear down framework | framework.go:193 + STEP: Destroying namespace "job-7710" for this suite. 01/14/23 03:52:36.87 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should surface a failure condition on a common issue like exceeded quota [Conformance] + test/e2e/apps/rc.go:83 +[BeforeEach] [sig-apps] ReplicationController + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:52:36.875 +Jan 14 03:52:36.875: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename replication-controller 01/14/23 03:52:36.876 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:52:36.893 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:52:36.895 +[BeforeEach] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:57 +[It] should surface a failure condition on a common issue like exceeded quota [Conformance] + test/e2e/apps/rc.go:83 +Jan 14 03:52:36.897: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace +STEP: Creating rc "condition-test" that asks for more than the allowed pod quota 01/14/23 03:52:37.908 +STEP: Checking rc "condition-test" has the desired failure condition set 01/14/23 03:52:37.915 +STEP: Scaling down rc "condition-test" to satisfy pod quota 01/14/23 03:52:38.922 +Jan 14 03:52:38.930: INFO: Updating replication controller "condition-test" +STEP: Checking rc "condition-test" has no failure condition set 01/14/23 03:52:38.93 +[AfterEach] [sig-apps] ReplicationController + test/e2e/framework/node/init/init.go:32 +Jan 14 03:52:39.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] ReplicationController + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] ReplicationController + tear down framework | framework.go:193 +STEP: Destroying namespace "replication-controller-290" for 
this suite. 01/14/23 03:52:39.943 +------------------------------ +• [3.074 seconds] +[sig-apps] ReplicationController +test/e2e/apps/framework.go:23 + should surface a failure condition on a common issue like exceeded quota [Conformance] + test/e2e/apps/rc.go:83 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] ReplicationController + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:52:36.875 + Jan 14 03:52:36.875: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename replication-controller 01/14/23 03:52:36.876 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:52:36.893 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:52:36.895 + [BeforeEach] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:57 + [It] should surface a failure condition on a common issue like exceeded quota [Conformance] + test/e2e/apps/rc.go:83 + Jan 14 03:52:36.897: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace + STEP: Creating rc "condition-test" that asks for more than the allowed pod quota 01/14/23 03:52:37.908 + STEP: Checking rc "condition-test" has the desired failure condition set 01/14/23 03:52:37.915 + STEP: Scaling down rc "condition-test" to satisfy pod quota 01/14/23 03:52:38.922 + Jan 14 03:52:38.930: INFO: Updating replication controller "condition-test" + STEP: Checking rc "condition-test" has no failure condition set 01/14/23 03:52:38.93 + [AfterEach] [sig-apps] ReplicationController + test/e2e/framework/node/init/init.go:32 + Jan 14 03:52:39.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] ReplicationController + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] ReplicationController + tear down framework | framework.go:193 + STEP: Destroying namespace "replication-controller-290" for this suite. 
01/14/23 03:52:39.943 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-apps] ReplicationController + should release no longer matching pods [Conformance] + test/e2e/apps/rc.go:101 +[BeforeEach] [sig-apps] ReplicationController + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:52:39.949 +Jan 14 03:52:39.949: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename replication-controller 01/14/23 03:52:39.95 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:52:39.969 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:52:39.971 +[BeforeEach] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:57 +[It] should release no longer matching pods [Conformance] + test/e2e/apps/rc.go:101 +STEP: Given a ReplicationController is created 01/14/23 03:52:39.974 +STEP: When the matched label of one of its pods change 01/14/23 03:52:39.979 +Jan 14 03:52:39.982: INFO: Pod name pod-release: Found 0 pods out of 1 +Jan 14 03:52:44.986: INFO: Pod name pod-release: Found 1 pods out of 1 +STEP: Then the pod is released 01/14/23 03:52:44.997 +[AfterEach] [sig-apps] ReplicationController + test/e2e/framework/node/init/init.go:32 +Jan 14 03:52:46.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] ReplicationController + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] ReplicationController + tear down framework | framework.go:193 +STEP: Destroying namespace "replication-controller-8100" for this suite. 
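+
+The two ReplicationController cases above come down to a handful of client-go
+calls. A minimal sketch follows; this is not the e2e framework's own code, the
+ctx, clientset and namespace are assumed to be set up elsewhere, and the
+helper names are hypothetical:
+
+    // imports: "context", "fmt", corev1 "k8s.io/api/core/v1",
+    //          "k8s.io/apimachinery/pkg/api/resource",
+    //          metav1 "k8s.io/apimachinery/pkg/apis/meta/v1",
+    //          "k8s.io/client-go/kubernetes"
+    func surfaceQuotaFailure(ctx context.Context, cs *kubernetes.Clientset, ns string) error {
+        // A hard quota of two pods; an RC asking for more reports a
+        // ReplicaFailure condition in its status, as logged above.
+        quota := &corev1.ResourceQuota{
+            ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
+            Spec: corev1.ResourceQuotaSpec{
+                Hard: corev1.ResourceList{corev1.ResourcePods: resource.MustParse("2")},
+            },
+        }
+        if _, err := cs.CoreV1().ResourceQuotas(ns).Create(ctx, quota, metav1.CreateOptions{}); err != nil {
+            return err
+        }
+        // (an RC named "condition-test" asking for three replicas is assumed
+        // to have been created in this namespace)
+        rc, err := cs.CoreV1().ReplicationControllers(ns).Get(ctx, "condition-test", metav1.GetOptions{})
+        if err != nil {
+            return err
+        }
+        for _, c := range rc.Status.Conditions {
+            if c.Type == corev1.ReplicationControllerReplicaFailure {
+                return nil // failure condition surfaced; scaling down clears it
+            }
+        }
+        return fmt.Errorf("no ReplicaFailure condition on rc %q yet", rc.Name)
+    }
+
+    func releasePod(ctx context.Context, cs *kubernetes.Clientset, ns, podName string) error {
+        pod, err := cs.CoreV1().Pods(ns).Get(ctx, podName, metav1.GetOptions{})
+        if err != nil {
+            return err
+        }
+        // Stop matching the RC's selector (the "name" label key is an
+        // assumption); the controller then releases the pod, dropping its
+        // ownerReference, and creates a replacement.
+        pod.Labels["name"] = "pod-release-no-longer-matching"
+        _, err = cs.CoreV1().Pods(ns).Update(ctx, pod, metav1.UpdateOptions{})
+        return err
+    }
+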
01/14/23 03:52:46.01 +------------------------------ +• [SLOW TEST] [6.066 seconds] +[sig-apps] ReplicationController +test/e2e/apps/framework.go:23 + should release no longer matching pods [Conformance] + test/e2e/apps/rc.go:101 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] ReplicationController + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:52:39.949 + Jan 14 03:52:39.949: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename replication-controller 01/14/23 03:52:39.95 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:52:39.969 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:52:39.971 + [BeforeEach] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:57 + [It] should release no longer matching pods [Conformance] + test/e2e/apps/rc.go:101 + STEP: Given a ReplicationController is created 01/14/23 03:52:39.974 + STEP: When the matched label of one of its pods change 01/14/23 03:52:39.979 + Jan 14 03:52:39.982: INFO: Pod name pod-release: Found 0 pods out of 1 + Jan 14 03:52:44.986: INFO: Pod name pod-release: Found 1 pods out of 1 + STEP: Then the pod is released 01/14/23 03:52:44.997 + [AfterEach] [sig-apps] ReplicationController + test/e2e/framework/node/init/init.go:32 + Jan 14 03:52:46.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] ReplicationController + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] ReplicationController + tear down framework | framework.go:193 + STEP: Destroying namespace "replication-controller-8100" for this suite. 01/14/23 03:52:46.01 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSS +------------------------------ +[sig-network] Services + should provide secure master service [Conformance] + test/e2e/network/service.go:777 +[BeforeEach] [sig-network] Services + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:52:46.016 +Jan 14 03:52:46.016: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename services 01/14/23 03:52:46.017 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:52:46.038 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:52:46.04 +[BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should provide secure master service [Conformance] + test/e2e/network/service.go:777 +[AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 +Jan 14 03:52:46.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 +STEP: Destroying namespace "services-9326" for this suite. 
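+
+The secure-master check above inspects the built-in "kubernetes" Service in
+the default namespace. A rough client-go equivalent, under the same
+assumptions and imports as the sketch above, with a hypothetical helper name:
+
+    func checkSecureMaster(ctx context.Context, cs *kubernetes.Clientset) error {
+        svc, err := cs.CoreV1().Services("default").Get(ctx, "kubernetes", metav1.GetOptions{})
+        if err != nil {
+            return err
+        }
+        for _, p := range svc.Spec.Ports {
+            if p.Name == "https" && p.Port == 443 && p.Protocol == corev1.ProtocolTCP {
+                return nil // API server exposed through a TLS port
+            }
+        }
+        return fmt.Errorf("kubernetes service does not expose https/443")
+    }
+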
01/14/23 03:52:46.049 +------------------------------ +• [0.038 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should provide secure master service [Conformance] + test/e2e/network/service.go:777 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:52:46.016 + Jan 14 03:52:46.016: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename services 01/14/23 03:52:46.017 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:52:46.038 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:52:46.04 + [BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should provide secure master service [Conformance] + test/e2e/network/service.go:777 + [AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 + Jan 14 03:52:46.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 + STEP: Destroying namespace "services-9326" for this suite. 01/14/23 03:52:46.049 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-node] Security Context When creating a pod with readOnlyRootFilesystem + should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:486 +[BeforeEach] [sig-node] Security Context + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:52:46.054 +Jan 14 03:52:46.054: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename security-context-test 01/14/23 03:52:46.055 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:52:46.07 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:52:46.072 +[BeforeEach] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Security Context + test/e2e/common/node/security_context.go:50 +[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:486 +Jan 14 03:52:46.088: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-18e1dc66-d04f-4b75-9ada-00854faee915" in namespace "security-context-test-4731" to be "Succeeded or Failed" +Jan 14 03:52:46.091: INFO: Pod "busybox-readonly-false-18e1dc66-d04f-4b75-9ada-00854faee915": Phase="Pending", Reason="", readiness=false. Elapsed: 3.123251ms +Jan 14 03:52:48.096: INFO: Pod "busybox-readonly-false-18e1dc66-d04f-4b75-9ada-00854faee915": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007836379s +Jan 14 03:52:50.096: INFO: Pod "busybox-readonly-false-18e1dc66-d04f-4b75-9ada-00854faee915": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008080078s +Jan 14 03:52:50.096: INFO: Pod "busybox-readonly-false-18e1dc66-d04f-4b75-9ada-00854faee915" satisfied condition "Succeeded or Failed" +[AfterEach] [sig-node] Security Context + test/e2e/framework/node/init/init.go:32 +Jan 14 03:52:50.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Security Context + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Security Context + tear down framework | framework.go:193 +STEP: Destroying namespace "security-context-test-4731" for this suite. 01/14/23 03:52:50.101 +------------------------------ +• [4.052 seconds] +[sig-node] Security Context +test/e2e/common/node/framework.go:23 + When creating a pod with readOnlyRootFilesystem + test/e2e/common/node/security_context.go:430 + should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:486 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Security Context + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:52:46.054 + Jan 14 03:52:46.054: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename security-context-test 01/14/23 03:52:46.055 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:52:46.07 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:52:46.072 + [BeforeEach] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Security Context + test/e2e/common/node/security_context.go:50 + [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:486 + Jan 14 03:52:46.088: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-18e1dc66-d04f-4b75-9ada-00854faee915" in namespace "security-context-test-4731" to be "Succeeded or Failed" + Jan 14 03:52:46.091: INFO: Pod "busybox-readonly-false-18e1dc66-d04f-4b75-9ada-00854faee915": Phase="Pending", Reason="", readiness=false. Elapsed: 3.123251ms + Jan 14 03:52:48.096: INFO: Pod "busybox-readonly-false-18e1dc66-d04f-4b75-9ada-00854faee915": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007836379s + Jan 14 03:52:50.096: INFO: Pod "busybox-readonly-false-18e1dc66-d04f-4b75-9ada-00854faee915": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008080078s + Jan 14 03:52:50.096: INFO: Pod "busybox-readonly-false-18e1dc66-d04f-4b75-9ada-00854faee915" satisfied condition "Succeeded or Failed" + [AfterEach] [sig-node] Security Context + test/e2e/framework/node/init/init.go:32 + Jan 14 03:52:50.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Security Context + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Security Context + tear down framework | framework.go:193 + STEP: Destroying namespace "security-context-test-4731" for this suite. 
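+
+The security-context case above hinges on one field of the container's
+securityContext. A sketch of the kind of pod it builds; the image and command
+here are assumptions, not the suite's exact values:
+
+    func writableRootfsPod(ns string) *corev1.Pod {
+        readOnly := false
+        return &corev1.Pod{
+            ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-false", Namespace: ns},
+            Spec: corev1.PodSpec{
+                RestartPolicy: corev1.RestartPolicyNever,
+                Containers: []corev1.Container{{
+                    Name:    "busybox",
+                    Image:   "busybox",
+                    Command: []string{"sh", "-c", "echo ok > /rootfs-file && cat /rootfs-file"},
+                    SecurityContext: &corev1.SecurityContext{
+                        // false: writes to the root filesystem succeed, so
+                        // the pod reaches phase Succeeded, as logged above.
+                        ReadOnlyRootFilesystem: &readOnly,
+                    },
+                }},
+            },
+        }
+    }
+
+With readOnlyRootFilesystem set to true instead, the shell write would fail
+and the pod would end up Failed rather than Succeeded.
+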
01/14/23 03:52:50.101 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-apps] Daemon set [Serial] + should list and delete a collection of DaemonSets [Conformance] + test/e2e/apps/daemon_set.go:823 +[BeforeEach] [sig-apps] Daemon set [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:52:50.107 +Jan 14 03:52:50.107: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename daemonsets 01/14/23 03:52:50.108 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:52:50.122 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:52:50.124 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:146 +[It] should list and delete a collection of DaemonSets [Conformance] + test/e2e/apps/daemon_set.go:823 +STEP: Creating simple DaemonSet "daemon-set" 01/14/23 03:52:50.144 +STEP: Check that daemon pods launch on every node of the cluster. 01/14/23 03:52:50.15 +Jan 14 03:52:50.156: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 03:52:50.156: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 03:52:50.156: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 03:52:50.159: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jan 14 03:52:50.159: INFO: Node 10.0.1.106 is running 0 daemon pod, expected 1 +Jan 14 03:52:51.165: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 03:52:51.165: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 03:52:51.165: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 03:52:51.169: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Jan 14 03:52:51.169: INFO: Node 10.0.1.99 is running 0 daemon pod, expected 1 +Jan 14 03:52:52.166: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 03:52:52.166: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 03:52:52.166: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 03:52:52.170: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 +Jan 14 03:52:52.170: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set +STEP: listing all DeamonSets 01/14/23 03:52:52.173 +STEP: DeleteCollection of the DaemonSets 01/14/23 03:52:52.176 +STEP: Verify that 
ReplicaSets have been deleted 01/14/23 03:52:52.184 +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:111 +Jan 14 03:52:52.194: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"420599"},"items":null} + +Jan 14 03:52:52.198: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"420599"},"items":[{"metadata":{"name":"daemon-set-bn9w8","generateName":"daemon-set-","namespace":"daemonsets-8242","uid":"b00719fc-cc06-4739-bf22-ee6754050cbf","resourceVersion":"420558","creationTimestamp":"2023-01-14T03:52:50Z","labels":{"controller-revision-hash":"6cff669f8c","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"tke.cloud.tencent.com/networks-status":"[{\n \"name\": \"tke-bridge\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.52.1.40\"\n ],\n \"mac\": \"4e:27:a6:5a:a4:c9\",\n \"default\": true,\n \"dns\": {}\n}]"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"a9c5adff-e649-487e-a62a-5c3cebbbe4ff","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T03:52:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a9c5adff-e649-487e-a62a-5c3cebbbe4ff\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T03:52:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.52.1.40\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"},{"manager":"multus","operation":"Update","apiVersion":"v1","time":"2023-01-14T03:52:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-nvxnm","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-nvxnm","
readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"10.0.1.106","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["10.0.1.106"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-14T03:52:50Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-14T03:52:50Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-14T03:52:50Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-14T03:52:50Z"}],"hostIP":"10.0.1.106","podIP":"10.52.1.40","podIPs":[{"ip":"10.52.1.40"}],"startTime":"2023-01-14T03:52:50Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-01-14T03:52:50Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22","containerID":"containerd://ec74d115299becf95d762a89d5d9e8b951a25aebb85e360f19a3f4acb0e0891c","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-j5kcd","generateName":"daemon-set-","namespace":"daemonsets-8242","uid":"7bf8f4fa-39d0-472d-a192-0123be200304","resourceVersion":"420594","creationTimestamp":"2023-01-14T03:52:50Z","labels":{"controller-revision-hash":"6cff669f8c","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"tke.cloud.tencent.com/networks-status":"[{\n \"name\": \"tke-bridge\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.52.1.121\"\n ],\n \"mac\": \"da:a4:32:4a:76:26\",\n \"default\": true,\n \"dns\": 
{}\n}]"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"a9c5adff-e649-487e-a62a-5c3cebbbe4ff","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T03:52:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a9c5adff-e649-487e-a62a-5c3cebbbe4ff\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"multus","operation":"Update","apiVersion":"v1","time":"2023-01-14T03:52:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}},"subresource":"status"},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T03:52:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.52.1.121\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-shvjp","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-shvjp","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"10.0.1.99","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["10.0.1.99"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure"
,"operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-14T03:52:50Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-14T03:52:51Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-14T03:52:51Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-14T03:52:50Z"}],"hostIP":"10.0.1.99","podIP":"10.52.1.121","podIPs":[{"ip":"10.52.1.121"}],"startTime":"2023-01-14T03:52:50Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-01-14T03:52:50Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22","containerID":"containerd://694fbc5119c84a73b7592e98270c146bd444ced53925ebfded55d12cb8f91d29","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-q8xr6","generateName":"daemon-set-","namespace":"daemonsets-8242","uid":"80ee74ec-87c0-4c5f-9c63-8f58cdb0611e","resourceVersion":"420559","creationTimestamp":"2023-01-14T03:52:50Z","labels":{"controller-revision-hash":"6cff669f8c","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"tke.cloud.tencent.com/networks-status":"[{\n \"name\": \"tke-bridge\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.52.0.232\"\n ],\n \"mac\": \"36:d6:a0:89:46:80\",\n \"default\": true,\n \"dns\": 
{}\n}]"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"a9c5adff-e649-487e-a62a-5c3cebbbe4ff","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T03:52:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a9c5adff-e649-487e-a62a-5c3cebbbe4ff\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T03:52:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.52.0.232\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"},{"manager":"multus","operation":"Update","apiVersion":"v1","time":"2023-01-14T03:52:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-p5sp9","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-p5sp9","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"10.0.1.212","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["10.0.1.212"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressur
e","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-14T03:52:50Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-14T03:52:50Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-14T03:52:50Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-14T03:52:50Z"}],"hostIP":"10.0.1.212","podIP":"10.52.0.232","podIPs":[{"ip":"10.52.0.232"}],"startTime":"2023-01-14T03:52:50Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-01-14T03:52:50Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22","containerID":"containerd://3c8b896811a453d00ccabb68c87ce209947420c880dd4321d08223e51cf70ae2","started":true}],"qosClass":"BestEffort"}}]} + +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/node/init/init.go:32 +Jan 14 03:52:52.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "daemonsets-8242" for this suite. 01/14/23 03:52:52.215 +------------------------------ +• [2.118 seconds] +[sig-apps] Daemon set [Serial] +test/e2e/apps/framework.go:23 + should list and delete a collection of DaemonSets [Conformance] + test/e2e/apps/daemon_set.go:823 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Daemon set [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:52:50.107 + Jan 14 03:52:50.107: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename daemonsets 01/14/23 03:52:50.108 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:52:50.122 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:52:50.124 + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:146 + [It] should list and delete a collection of DaemonSets [Conformance] + test/e2e/apps/daemon_set.go:823 + STEP: Creating simple DaemonSet "daemon-set" 01/14/23 03:52:50.144 + STEP: Check that daemon pods launch on every node of the cluster. 
01/14/23 03:52:50.15 + Jan 14 03:52:50.156: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 03:52:50.156: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 03:52:50.156: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 03:52:50.159: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jan 14 03:52:50.159: INFO: Node 10.0.1.106 is running 0 daemon pod, expected 1 + Jan 14 03:52:51.165: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 03:52:51.165: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 03:52:51.165: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 03:52:51.169: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Jan 14 03:52:51.169: INFO: Node 10.0.1.99 is running 0 daemon pod, expected 1 + Jan 14 03:52:52.166: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 03:52:52.166: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 03:52:52.166: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 03:52:52.170: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 + Jan 14 03:52:52.170: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set + STEP: listing all DeamonSets 01/14/23 03:52:52.173 + STEP: DeleteCollection of the DaemonSets 01/14/23 03:52:52.176 + STEP: Verify that ReplicaSets have been deleted 01/14/23 03:52:52.184 + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:111 + Jan 14 03:52:52.194: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"420599"},"items":null} + + Jan 14 03:52:52.198: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"420599"},"items":[{"metadata":{"name":"daemon-set-bn9w8","generateName":"daemon-set-","namespace":"daemonsets-8242","uid":"b00719fc-cc06-4739-bf22-ee6754050cbf","resourceVersion":"420558","creationTimestamp":"2023-01-14T03:52:50Z","labels":{"controller-revision-hash":"6cff669f8c","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"tke.cloud.tencent.com/networks-status":"[{\n \"name\": \"tke-bridge\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.52.1.40\"\n ],\n \"mac\": \"4e:27:a6:5a:a4:c9\",\n \"default\": true,\n \"dns\": 
{}\n}]"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"a9c5adff-e649-487e-a62a-5c3cebbbe4ff","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T03:52:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a9c5adff-e649-487e-a62a-5c3cebbbe4ff\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T03:52:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.52.1.40\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"},{"manager":"multus","operation":"Update","apiVersion":"v1","time":"2023-01-14T03:52:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-nvxnm","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-nvxnm","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"10.0.1.106","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["10.0.1.106"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure
","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-14T03:52:50Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-14T03:52:50Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-14T03:52:50Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-14T03:52:50Z"}],"hostIP":"10.0.1.106","podIP":"10.52.1.40","podIPs":[{"ip":"10.52.1.40"}],"startTime":"2023-01-14T03:52:50Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-01-14T03:52:50Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22","containerID":"containerd://ec74d115299becf95d762a89d5d9e8b951a25aebb85e360f19a3f4acb0e0891c","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-j5kcd","generateName":"daemon-set-","namespace":"daemonsets-8242","uid":"7bf8f4fa-39d0-472d-a192-0123be200304","resourceVersion":"420594","creationTimestamp":"2023-01-14T03:52:50Z","labels":{"controller-revision-hash":"6cff669f8c","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"tke.cloud.tencent.com/networks-status":"[{\n \"name\": \"tke-bridge\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.52.1.121\"\n ],\n \"mac\": \"da:a4:32:4a:76:26\",\n \"default\": true,\n \"dns\": 
{}\n}]"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"a9c5adff-e649-487e-a62a-5c3cebbbe4ff","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T03:52:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a9c5adff-e649-487e-a62a-5c3cebbbe4ff\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"multus","operation":"Update","apiVersion":"v1","time":"2023-01-14T03:52:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}},"subresource":"status"},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T03:52:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.52.1.121\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-shvjp","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-shvjp","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"10.0.1.99","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["10.0.1.99"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure"
,"operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-14T03:52:50Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-14T03:52:51Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-14T03:52:51Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-14T03:52:50Z"}],"hostIP":"10.0.1.99","podIP":"10.52.1.121","podIPs":[{"ip":"10.52.1.121"}],"startTime":"2023-01-14T03:52:50Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-01-14T03:52:50Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22","containerID":"containerd://694fbc5119c84a73b7592e98270c146bd444ced53925ebfded55d12cb8f91d29","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-q8xr6","generateName":"daemon-set-","namespace":"daemonsets-8242","uid":"80ee74ec-87c0-4c5f-9c63-8f58cdb0611e","resourceVersion":"420559","creationTimestamp":"2023-01-14T03:52:50Z","labels":{"controller-revision-hash":"6cff669f8c","daemonset-name":"daemon-set","pod-template-generation":"1"},"annotations":{"tke.cloud.tencent.com/networks-status":"[{\n \"name\": \"tke-bridge\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.52.0.232\"\n ],\n \"mac\": \"36:d6:a0:89:46:80\",\n \"default\": true,\n \"dns\": 
{}\n}]"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"a9c5adff-e649-487e-a62a-5c3cebbbe4ff","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-14T03:52:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a9c5adff-e649-487e-a62a-5c3cebbbe4ff\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-14T03:52:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.52.0.232\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"},{"manager":"multus","operation":"Update","apiVersion":"v1","time":"2023-01-14T03:52:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-p5sp9","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-p5sp9","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"10.0.1.212","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["10.0.1.212"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressur
e","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-14T03:52:50Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-14T03:52:50Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-14T03:52:50Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-14T03:52:50Z"}],"hostIP":"10.0.1.212","podIP":"10.52.0.232","podIPs":[{"ip":"10.52.0.232"}],"startTime":"2023-01-14T03:52:50Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2023-01-14T03:52:50Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"registry.k8s.io/e2e-test-images/httpd:2.4.38-4","imageID":"registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22","containerID":"containerd://3c8b896811a453d00ccabb68c87ce209947420c880dd4321d08223e51cf70ae2","started":true}],"qosClass":"BestEffort"}}]} + + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/node/init/init.go:32 + Jan 14 03:52:52.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "daemonsets-8242" for this suite. 01/14/23 03:52:52.215 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should be consumable via environment variable [NodeConformance] [Conformance] + test/e2e/common/node/configmap.go:45 +[BeforeEach] [sig-node] ConfigMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:52:52.225 +Jan 14 03:52:52.225: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename configmap 01/14/23 03:52:52.226 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:52:52.242 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:52:52.244 +[BeforeEach] [sig-node] ConfigMap + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable via environment variable [NodeConformance] [Conformance] + test/e2e/common/node/configmap.go:45 +STEP: Creating configMap configmap-4757/configmap-test-fd7816e6-1db9-42b7-8f98-d7c8ed7eae60 01/14/23 03:52:52.246 +STEP: Creating a pod to test consume configMaps 01/14/23 03:52:52.251 +Jan 14 03:52:52.261: INFO: Waiting up to 5m0s for pod "pod-configmaps-8c037985-40aa-46ed-b97d-1c99cee79dc8" in namespace "configmap-4757" to be "Succeeded or Failed" +Jan 14 03:52:52.264: INFO: Pod "pod-configmaps-8c037985-40aa-46ed-b97d-1c99cee79dc8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.972851ms +Jan 14 03:52:54.269: INFO: Pod "pod-configmaps-8c037985-40aa-46ed-b97d-1c99cee79dc8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008218385s +Jan 14 03:52:56.269: INFO: Pod "pod-configmaps-8c037985-40aa-46ed-b97d-1c99cee79dc8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008430103s +STEP: Saw pod success 01/14/23 03:52:56.269 +Jan 14 03:52:56.269: INFO: Pod "pod-configmaps-8c037985-40aa-46ed-b97d-1c99cee79dc8" satisfied condition "Succeeded or Failed" +Jan 14 03:52:56.273: INFO: Trying to get logs from node 10.0.1.99 pod pod-configmaps-8c037985-40aa-46ed-b97d-1c99cee79dc8 container env-test: +STEP: delete the pod 01/14/23 03:52:56.288 +Jan 14 03:52:56.302: INFO: Waiting for pod pod-configmaps-8c037985-40aa-46ed-b97d-1c99cee79dc8 to disappear +Jan 14 03:52:56.305: INFO: Pod pod-configmaps-8c037985-40aa-46ed-b97d-1c99cee79dc8 no longer exists +[AfterEach] [sig-node] ConfigMap + test/e2e/framework/node/init/init.go:32 +Jan 14 03:52:56.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] ConfigMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] ConfigMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] ConfigMap + tear down framework | framework.go:193 +STEP: Destroying namespace "configmap-4757" for this suite. 01/14/23 03:52:56.309 +------------------------------ +• [4.089 seconds] +[sig-node] ConfigMap +test/e2e/common/node/framework.go:23 + should be consumable via environment variable [NodeConformance] [Conformance] + test/e2e/common/node/configmap.go:45 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] ConfigMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:52:52.225 + Jan 14 03:52:52.225: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename configmap 01/14/23 03:52:52.226 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:52:52.242 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:52:52.244 + [BeforeEach] [sig-node] ConfigMap + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable via environment variable [NodeConformance] [Conformance] + test/e2e/common/node/configmap.go:45 + STEP: Creating configMap configmap-4757/configmap-test-fd7816e6-1db9-42b7-8f98-d7c8ed7eae60 01/14/23 03:52:52.246 + STEP: Creating a pod to test consume configMaps 01/14/23 03:52:52.251 + Jan 14 03:52:52.261: INFO: Waiting up to 5m0s for pod "pod-configmaps-8c037985-40aa-46ed-b97d-1c99cee79dc8" in namespace "configmap-4757" to be "Succeeded or Failed" + Jan 14 03:52:52.264: INFO: Pod "pod-configmaps-8c037985-40aa-46ed-b97d-1c99cee79dc8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.972851ms + Jan 14 03:52:54.269: INFO: Pod "pod-configmaps-8c037985-40aa-46ed-b97d-1c99cee79dc8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008218385s + Jan 14 03:52:56.269: INFO: Pod "pod-configmaps-8c037985-40aa-46ed-b97d-1c99cee79dc8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008430103s + STEP: Saw pod success 01/14/23 03:52:56.269 + Jan 14 03:52:56.269: INFO: Pod "pod-configmaps-8c037985-40aa-46ed-b97d-1c99cee79dc8" satisfied condition "Succeeded or Failed" + Jan 14 03:52:56.273: INFO: Trying to get logs from node 10.0.1.99 pod pod-configmaps-8c037985-40aa-46ed-b97d-1c99cee79dc8 container env-test: + STEP: delete the pod 01/14/23 03:52:56.288 + Jan 14 03:52:56.302: INFO: Waiting for pod pod-configmaps-8c037985-40aa-46ed-b97d-1c99cee79dc8 to disappear + Jan 14 03:52:56.305: INFO: Pod pod-configmaps-8c037985-40aa-46ed-b97d-1c99cee79dc8 no longer exists + [AfterEach] [sig-node] ConfigMap + test/e2e/framework/node/init/init.go:32 + Jan 14 03:52:56.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] ConfigMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] ConfigMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] ConfigMap + tear down framework | framework.go:193 + STEP: Destroying namespace "configmap-4757" for this suite. 01/14/23 03:52:56.309 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-node] Pods + should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:398 +[BeforeEach] [sig-node] Pods + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:52:56.314 +Jan 14 03:52:56.314: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename pods 01/14/23 03:52:56.315 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:52:56.334 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:52:56.337 +[BeforeEach] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 +[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:398 +STEP: creating the pod 01/14/23 03:52:56.339 +STEP: submitting the pod to kubernetes 01/14/23 03:52:56.339 +Jan 14 03:52:56.348: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-c1b9c2b7-4559-4de0-88b0-f44938e46f74" in namespace "pods-2583" to be "running and ready" +Jan 14 03:52:56.351: INFO: Pod "pod-update-activedeadlineseconds-c1b9c2b7-4559-4de0-88b0-f44938e46f74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.697935ms +Jan 14 03:52:56.351: INFO: The phase of Pod pod-update-activedeadlineseconds-c1b9c2b7-4559-4de0-88b0-f44938e46f74 is Pending, waiting for it to be Running (with Ready = true) +Jan 14 03:52:58.356: INFO: Pod "pod-update-activedeadlineseconds-c1b9c2b7-4559-4de0-88b0-f44938e46f74": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.007814636s +Jan 14 03:52:58.356: INFO: The phase of Pod pod-update-activedeadlineseconds-c1b9c2b7-4559-4de0-88b0-f44938e46f74 is Running (Ready = true) +Jan 14 03:52:58.356: INFO: Pod "pod-update-activedeadlineseconds-c1b9c2b7-4559-4de0-88b0-f44938e46f74" satisfied condition "running and ready" +STEP: verifying the pod is in kubernetes 01/14/23 03:52:58.359 +STEP: updating the pod 01/14/23 03:52:58.362 +Jan 14 03:52:58.874: INFO: Successfully updated pod "pod-update-activedeadlineseconds-c1b9c2b7-4559-4de0-88b0-f44938e46f74" +Jan 14 03:52:58.874: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-c1b9c2b7-4559-4de0-88b0-f44938e46f74" in namespace "pods-2583" to be "terminated with reason DeadlineExceeded" +Jan 14 03:52:58.877: INFO: Pod "pod-update-activedeadlineseconds-c1b9c2b7-4559-4de0-88b0-f44938e46f74": Phase="Running", Reason="", readiness=true. Elapsed: 2.834233ms +Jan 14 03:53:00.881: INFO: Pod "pod-update-activedeadlineseconds-c1b9c2b7-4559-4de0-88b0-f44938e46f74": Phase="Running", Reason="", readiness=true. Elapsed: 2.007451567s +Jan 14 03:53:02.882: INFO: Pod "pod-update-activedeadlineseconds-c1b9c2b7-4559-4de0-88b0-f44938e46f74": Phase="Running", Reason="", readiness=false. Elapsed: 4.007489912s +Jan 14 03:53:04.882: INFO: Pod "pod-update-activedeadlineseconds-c1b9c2b7-4559-4de0-88b0-f44938e46f74": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 6.007959784s +Jan 14 03:53:04.882: INFO: Pod "pod-update-activedeadlineseconds-c1b9c2b7-4559-4de0-88b0-f44938e46f74" satisfied condition "terminated with reason DeadlineExceeded" +[AfterEach] [sig-node] Pods + test/e2e/framework/node/init/init.go:32 +Jan 14 03:53:04.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Pods + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Pods + tear down framework | framework.go:193 +STEP: Destroying namespace "pods-2583" for this suite. 01/14/23 03:53:04.887 +------------------------------ +• [SLOW TEST] [8.581 seconds] +[sig-node] Pods +test/e2e/common/node/framework.go:23 + should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:398 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Pods + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:52:56.314 + Jan 14 03:52:56.314: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename pods 01/14/23 03:52:56.315 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:52:56.334 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:52:56.337 + [BeforeEach] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 + [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:398 + STEP: creating the pod 01/14/23 03:52:56.339 + STEP: submitting the pod to kubernetes 01/14/23 03:52:56.339 + Jan 14 03:52:56.348: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-c1b9c2b7-4559-4de0-88b0-f44938e46f74" in namespace "pods-2583" to be "running and ready" + Jan 14 03:52:56.351: INFO: Pod "pod-update-activedeadlineseconds-c1b9c2b7-4559-4de0-88b0-f44938e46f74": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.697935ms + Jan 14 03:52:56.351: INFO: The phase of Pod pod-update-activedeadlineseconds-c1b9c2b7-4559-4de0-88b0-f44938e46f74 is Pending, waiting for it to be Running (with Ready = true) + Jan 14 03:52:58.356: INFO: Pod "pod-update-activedeadlineseconds-c1b9c2b7-4559-4de0-88b0-f44938e46f74": Phase="Running", Reason="", readiness=true. Elapsed: 2.007814636s + Jan 14 03:52:58.356: INFO: The phase of Pod pod-update-activedeadlineseconds-c1b9c2b7-4559-4de0-88b0-f44938e46f74 is Running (Ready = true) + Jan 14 03:52:58.356: INFO: Pod "pod-update-activedeadlineseconds-c1b9c2b7-4559-4de0-88b0-f44938e46f74" satisfied condition "running and ready" + STEP: verifying the pod is in kubernetes 01/14/23 03:52:58.359 + STEP: updating the pod 01/14/23 03:52:58.362 + Jan 14 03:52:58.874: INFO: Successfully updated pod "pod-update-activedeadlineseconds-c1b9c2b7-4559-4de0-88b0-f44938e46f74" + Jan 14 03:52:58.874: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-c1b9c2b7-4559-4de0-88b0-f44938e46f74" in namespace "pods-2583" to be "terminated with reason DeadlineExceeded" + Jan 14 03:52:58.877: INFO: Pod "pod-update-activedeadlineseconds-c1b9c2b7-4559-4de0-88b0-f44938e46f74": Phase="Running", Reason="", readiness=true. Elapsed: 2.834233ms + Jan 14 03:53:00.881: INFO: Pod "pod-update-activedeadlineseconds-c1b9c2b7-4559-4de0-88b0-f44938e46f74": Phase="Running", Reason="", readiness=true. Elapsed: 2.007451567s + Jan 14 03:53:02.882: INFO: Pod "pod-update-activedeadlineseconds-c1b9c2b7-4559-4de0-88b0-f44938e46f74": Phase="Running", Reason="", readiness=false. Elapsed: 4.007489912s + Jan 14 03:53:04.882: INFO: Pod "pod-update-activedeadlineseconds-c1b9c2b7-4559-4de0-88b0-f44938e46f74": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 6.007959784s + Jan 14 03:53:04.882: INFO: Pod "pod-update-activedeadlineseconds-c1b9c2b7-4559-4de0-88b0-f44938e46f74" satisfied condition "terminated with reason DeadlineExceeded" + [AfterEach] [sig-node] Pods + test/e2e/framework/node/init/init.go:32 + Jan 14 03:53:04.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Pods + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Pods + tear down framework | framework.go:193 + STEP: Destroying namespace "pods-2583" for this suite. 
01/14/23 03:53:04.887 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] DisruptionController + should block an eviction until the PDB is updated to allow it [Conformance] + test/e2e/apps/disruption.go:347 +[BeforeEach] [sig-apps] DisruptionController + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:53:04.896 +Jan 14 03:53:04.896: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename disruption 01/14/23 03:53:04.897 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:53:04.91 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:53:04.912 +[BeforeEach] [sig-apps] DisruptionController + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] DisruptionController + test/e2e/apps/disruption.go:72 +[It] should block an eviction until the PDB is updated to allow it [Conformance] + test/e2e/apps/disruption.go:347 +STEP: Creating a pdb that targets all three pods in a test replica set 01/14/23 03:53:04.914 +STEP: Waiting for the pdb to be processed 01/14/23 03:53:04.92 +STEP: First trying to evict a pod which shouldn't be evictable 01/14/23 03:53:06.934 +STEP: Waiting for all pods to be running 01/14/23 03:53:06.934 +Jan 14 03:53:06.937: INFO: pods: 0 < 3 +STEP: locating a running pod 01/14/23 03:53:08.941 +STEP: Updating the pdb to allow a pod to be evicted 01/14/23 03:53:08.95 +STEP: Waiting for the pdb to be processed 01/14/23 03:53:08.958 +STEP: Trying to evict the same pod we tried earlier which should now be evictable 01/14/23 03:53:10.964 +STEP: Waiting for all pods to be running 01/14/23 03:53:10.964 +STEP: Waiting for the pdb to observed all healthy pods 01/14/23 03:53:10.968 +STEP: Patching the pdb to disallow a pod to be evicted 01/14/23 03:53:10.997 +STEP: Waiting for the pdb to be processed 01/14/23 03:53:11.027 +STEP: Waiting for all pods to be running 01/14/23 03:53:13.034 +STEP: locating a running pod 01/14/23 03:53:13.038 +STEP: Deleting the pdb to allow a pod to be evicted 01/14/23 03:53:13.058 +STEP: Waiting for the pdb to be deleted 01/14/23 03:53:13.065 +STEP: Trying to evict the same pod we tried earlier which should now be evictable 01/14/23 03:53:13.067 +STEP: Waiting for all pods to be running 01/14/23 03:53:13.067 +[AfterEach] [sig-apps] DisruptionController + test/e2e/framework/node/init/init.go:32 +Jan 14 03:53:13.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] DisruptionController + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] DisruptionController + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] DisruptionController + tear down framework | framework.go:193 +STEP: Destroying namespace "disruption-9269" for this suite. 
01/14/23 03:53:13.092 +------------------------------ +• [SLOW TEST] [8.201 seconds] +[sig-apps] DisruptionController +test/e2e/apps/framework.go:23 + should block an eviction until the PDB is updated to allow it [Conformance] + test/e2e/apps/disruption.go:347 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] DisruptionController + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:53:04.896 + Jan 14 03:53:04.896: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename disruption 01/14/23 03:53:04.897 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:53:04.91 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:53:04.912 + [BeforeEach] [sig-apps] DisruptionController + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] DisruptionController + test/e2e/apps/disruption.go:72 + [It] should block an eviction until the PDB is updated to allow it [Conformance] + test/e2e/apps/disruption.go:347 + STEP: Creating a pdb that targets all three pods in a test replica set 01/14/23 03:53:04.914 + STEP: Waiting for the pdb to be processed 01/14/23 03:53:04.92 + STEP: First trying to evict a pod which shouldn't be evictable 01/14/23 03:53:06.934 + STEP: Waiting for all pods to be running 01/14/23 03:53:06.934 + Jan 14 03:53:06.937: INFO: pods: 0 < 3 + STEP: locating a running pod 01/14/23 03:53:08.941 + STEP: Updating the pdb to allow a pod to be evicted 01/14/23 03:53:08.95 + STEP: Waiting for the pdb to be processed 01/14/23 03:53:08.958 + STEP: Trying to evict the same pod we tried earlier which should now be evictable 01/14/23 03:53:10.964 + STEP: Waiting for all pods to be running 01/14/23 03:53:10.964 + STEP: Waiting for the pdb to observed all healthy pods 01/14/23 03:53:10.968 + STEP: Patching the pdb to disallow a pod to be evicted 01/14/23 03:53:10.997 + STEP: Waiting for the pdb to be processed 01/14/23 03:53:11.027 + STEP: Waiting for all pods to be running 01/14/23 03:53:13.034 + STEP: locating a running pod 01/14/23 03:53:13.038 + STEP: Deleting the pdb to allow a pod to be evicted 01/14/23 03:53:13.058 + STEP: Waiting for the pdb to be deleted 01/14/23 03:53:13.065 + STEP: Trying to evict the same pod we tried earlier which should now be evictable 01/14/23 03:53:13.067 + STEP: Waiting for all pods to be running 01/14/23 03:53:13.067 + [AfterEach] [sig-apps] DisruptionController + test/e2e/framework/node/init/init.go:32 + Jan 14 03:53:13.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] DisruptionController + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] DisruptionController + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] DisruptionController + tear down framework | framework.go:193 + STEP: Destroying namespace "disruption-9269" for this suite. 
01/14/23 03:53:13.092 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should apply an update to a Namespace [Conformance] + test/e2e/apimachinery/namespace.go:366 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:53:13.098 +Jan 14 03:53:13.098: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename namespaces 01/14/23 03:53:13.099 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:53:13.114 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:53:13.116 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/metrics/init/init.go:31 +[It] should apply an update to a Namespace [Conformance] + test/e2e/apimachinery/namespace.go:366 +STEP: Updating Namespace "namespaces-9640" 01/14/23 03:53:13.118 +Jan 14 03:53:13.124: INFO: Namespace "namespaces-9640" now has labels, map[string]string{"e2e-framework":"namespaces", "e2e-run":"6e25d48d-27da-41bb-9e53-5ea04a25720d", "kubernetes.io/metadata.name":"namespaces-9640", "namespaces-9640":"updated", "pod-security.kubernetes.io/enforce":"baseline"} +[AfterEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/node/init/init.go:32 +Jan 14 03:53:13.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "namespaces-9640" for this suite. 
01/14/23 03:53:13.129 +------------------------------ +• [0.035 seconds] +[sig-api-machinery] Namespaces [Serial] +test/e2e/apimachinery/framework.go:23 + should apply an update to a Namespace [Conformance] + test/e2e/apimachinery/namespace.go:366 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Namespaces [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:53:13.098 + Jan 14 03:53:13.098: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename namespaces 01/14/23 03:53:13.099 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:53:13.114 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:53:13.116 + [BeforeEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/metrics/init/init.go:31 + [It] should apply an update to a Namespace [Conformance] + test/e2e/apimachinery/namespace.go:366 + STEP: Updating Namespace "namespaces-9640" 01/14/23 03:53:13.118 + Jan 14 03:53:13.124: INFO: Namespace "namespaces-9640" now has labels, map[string]string{"e2e-framework":"namespaces", "e2e-run":"6e25d48d-27da-41bb-9e53-5ea04a25720d", "kubernetes.io/metadata.name":"namespaces-9640", "namespaces-9640":"updated", "pod-security.kubernetes.io/enforce":"baseline"} + [AfterEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/node/init/init.go:32 + Jan 14 03:53:13.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "namespaces-9640" for this suite. 
01/14/23 03:53:13.129 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should not be blocked by dependency circle [Conformance] + test/e2e/apimachinery/garbage_collector.go:849 +[BeforeEach] [sig-api-machinery] Garbage collector + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:53:13.134 +Jan 14 03:53:13.134: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename gc 01/14/23 03:53:13.135 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:53:13.148 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:53:13.15 +[BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:31 +[It] should not be blocked by dependency circle [Conformance] + test/e2e/apimachinery/garbage_collector.go:849 +Jan 14 03:53:13.179: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"7574aa0c-91a1-47e4-9313-fa52cecd77fe", Controller:(*bool)(0xc005aaf58a), BlockOwnerDeletion:(*bool)(0xc005aaf58b)}} +Jan 14 03:53:13.196: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"664823e0-805e-4939-b2a8-9abb8668c293", Controller:(*bool)(0xc005aaf7d2), BlockOwnerDeletion:(*bool)(0xc005aaf7d3)}} +Jan 14 03:53:13.203: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"fa1a9753-aa1c-40ad-a0ec-4f8f5b1fc371", Controller:(*bool)(0xc0057521b2), BlockOwnerDeletion:(*bool)(0xc0057521b3)}} +[AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/node/init/init.go:32 +Jan 14 03:53:18.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector + tear down framework | framework.go:193 +STEP: Destroying namespace "gc-9340" for this suite. 
01/14/23 03:53:18.224 +------------------------------ +• [SLOW TEST] [5.095 seconds] +[sig-api-machinery] Garbage collector +test/e2e/apimachinery/framework.go:23 + should not be blocked by dependency circle [Conformance] + test/e2e/apimachinery/garbage_collector.go:849 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Garbage collector + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:53:13.134 + Jan 14 03:53:13.134: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename gc 01/14/23 03:53:13.135 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:53:13.148 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:53:13.15 + [BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:31 + [It] should not be blocked by dependency circle [Conformance] + test/e2e/apimachinery/garbage_collector.go:849 + Jan 14 03:53:13.179: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"7574aa0c-91a1-47e4-9313-fa52cecd77fe", Controller:(*bool)(0xc005aaf58a), BlockOwnerDeletion:(*bool)(0xc005aaf58b)}} + Jan 14 03:53:13.196: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"664823e0-805e-4939-b2a8-9abb8668c293", Controller:(*bool)(0xc005aaf7d2), BlockOwnerDeletion:(*bool)(0xc005aaf7d3)}} + Jan 14 03:53:13.203: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"fa1a9753-aa1c-40ad-a0ec-4f8f5b1fc371", Controller:(*bool)(0xc0057521b2), BlockOwnerDeletion:(*bool)(0xc0057521b3)}} + [AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/node/init/init.go:32 + Jan 14 03:53:18.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + tear down framework | framework.go:193 + STEP: Destroying namespace "gc-9340" for this suite. 01/14/23 03:53:18.224 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-node] Pods + should patch a pod status [Conformance] + test/e2e/common/node/pods.go:1083 +[BeforeEach] [sig-node] Pods + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:53:18.229 +Jan 14 03:53:18.229: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename pods 01/14/23 03:53:18.23 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:53:18.244 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:53:18.247 +[BeforeEach] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 +[It] should patch a pod status [Conformance] + test/e2e/common/node/pods.go:1083 +STEP: Create a pod 01/14/23 03:53:18.249 +Jan 14 03:53:18.259: INFO: Waiting up to 5m0s for pod "pod-2dcdk" in namespace "pods-6458" to be "running" +Jan 14 03:53:18.266: INFO: Pod "pod-2dcdk": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.258335ms +Jan 14 03:53:20.269: INFO: Pod "pod-2dcdk": Phase="Running", Reason="", readiness=true. Elapsed: 2.009823701s +Jan 14 03:53:20.269: INFO: Pod "pod-2dcdk" satisfied condition "running" +STEP: patching /status 01/14/23 03:53:20.269 +Jan 14 03:53:20.279: INFO: Status Message: "Patched by e2e test" and Reason: "E2E" +[AfterEach] [sig-node] Pods + test/e2e/framework/node/init/init.go:32 +Jan 14 03:53:20.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Pods + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Pods + tear down framework | framework.go:193 +STEP: Destroying namespace "pods-6458" for this suite. 01/14/23 03:53:20.285 +------------------------------ +• [2.061 seconds] +[sig-node] Pods +test/e2e/common/node/framework.go:23 + should patch a pod status [Conformance] + test/e2e/common/node/pods.go:1083 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Pods + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:53:18.229 + Jan 14 03:53:18.229: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename pods 01/14/23 03:53:18.23 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:53:18.244 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:53:18.247 + [BeforeEach] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 + [It] should patch a pod status [Conformance] + test/e2e/common/node/pods.go:1083 + STEP: Create a pod 01/14/23 03:53:18.249 + Jan 14 03:53:18.259: INFO: Waiting up to 5m0s for pod "pod-2dcdk" in namespace "pods-6458" to be "running" + Jan 14 03:53:18.266: INFO: Pod "pod-2dcdk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.258335ms + Jan 14 03:53:20.269: INFO: Pod "pod-2dcdk": Phase="Running", Reason="", readiness=true. Elapsed: 2.009823701s + Jan 14 03:53:20.269: INFO: Pod "pod-2dcdk" satisfied condition "running" + STEP: patching /status 01/14/23 03:53:20.269 + Jan 14 03:53:20.279: INFO: Status Message: "Patched by e2e test" and Reason: "E2E" + [AfterEach] [sig-node] Pods + test/e2e/framework/node/init/init.go:32 + Jan 14 03:53:20.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Pods + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Pods + tear down framework | framework.go:193 + STEP: Destroying namespace "pods-6458" for this suite. 
01/14/23 03:53:20.285 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl cluster-info + should check if Kubernetes control plane services is included in cluster-info [Conformance] + test/e2e/kubectl/kubectl.go:1250 +[BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:53:20.291 +Jan 14 03:53:20.291: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename kubectl 01/14/23 03:53:20.291 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:53:20.307 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:53:20.309 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[It] should check if Kubernetes control plane services is included in cluster-info [Conformance] + test/e2e/kubectl/kubectl.go:1250 +STEP: validating cluster-info 01/14/23 03:53:20.312 +Jan 14 03:53:20.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-2785 cluster-info' +Jan 14 03:53:20.375: INFO: stderr: "" +Jan 14 03:53:20.375: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://10.55.252.1:443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 +Jan 14 03:53:20.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 +STEP: Destroying namespace "kubectl-2785" for this suite. 
01/14/23 03:53:20.381 +------------------------------ +• [0.095 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl cluster-info + test/e2e/kubectl/kubectl.go:1244 + should check if Kubernetes control plane services is included in cluster-info [Conformance] + test/e2e/kubectl/kubectl.go:1250 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:53:20.291 + Jan 14 03:53:20.291: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename kubectl 01/14/23 03:53:20.291 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:53:20.307 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:53:20.309 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [It] should check if Kubernetes control plane services is included in cluster-info [Conformance] + test/e2e/kubectl/kubectl.go:1250 + STEP: validating cluster-info 01/14/23 03:53:20.312 + Jan 14 03:53:20.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-2785 cluster-info' + Jan 14 03:53:20.375: INFO: stderr: "" + Jan 14 03:53:20.375: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://10.55.252.1:443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 + Jan 14 03:53:20.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 + STEP: Destroying namespace "kubectl-2785" for this suite. 01/14/23 03:53:20.381 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with configmap pod [Conformance] + test/e2e/storage/subpath.go:70 +[BeforeEach] [sig-storage] Subpath + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:53:20.386 +Jan 14 03:53:20.386: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename subpath 01/14/23 03:53:20.387 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:53:20.401 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:53:20.403 +[BeforeEach] [sig-storage] Subpath + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 +STEP: Setting up data 01/14/23 03:53:20.405 +[It] should support subpaths with configmap pod [Conformance] + test/e2e/storage/subpath.go:70 +STEP: Creating pod pod-subpath-test-configmap-jg4j 01/14/23 03:53:20.413 +STEP: Creating a pod to test atomic-volume-subpath 01/14/23 03:53:20.413 +Jan 14 03:53:20.421: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-jg4j" in namespace "subpath-2805" to be "Succeeded or Failed" +Jan 14 03:53:20.423: INFO: Pod "pod-subpath-test-configmap-jg4j": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.793265ms +Jan 14 03:53:22.428: INFO: Pod "pod-subpath-test-configmap-jg4j": Phase="Running", Reason="", readiness=true. Elapsed: 2.007628085s +Jan 14 03:53:24.429: INFO: Pod "pod-subpath-test-configmap-jg4j": Phase="Running", Reason="", readiness=true. Elapsed: 4.008496198s +Jan 14 03:53:26.429: INFO: Pod "pod-subpath-test-configmap-jg4j": Phase="Running", Reason="", readiness=true. Elapsed: 6.008007197s +Jan 14 03:53:28.429: INFO: Pod "pod-subpath-test-configmap-jg4j": Phase="Running", Reason="", readiness=true. Elapsed: 8.008048184s +Jan 14 03:53:30.429: INFO: Pod "pod-subpath-test-configmap-jg4j": Phase="Running", Reason="", readiness=true. Elapsed: 10.008052028s +Jan 14 03:53:32.430: INFO: Pod "pod-subpath-test-configmap-jg4j": Phase="Running", Reason="", readiness=true. Elapsed: 12.008920441s +Jan 14 03:53:34.428: INFO: Pod "pod-subpath-test-configmap-jg4j": Phase="Running", Reason="", readiness=true. Elapsed: 14.007877209s +Jan 14 03:53:36.429: INFO: Pod "pod-subpath-test-configmap-jg4j": Phase="Running", Reason="", readiness=true. Elapsed: 16.008722517s +Jan 14 03:53:38.430: INFO: Pod "pod-subpath-test-configmap-jg4j": Phase="Running", Reason="", readiness=true. Elapsed: 18.009217226s +Jan 14 03:53:40.428: INFO: Pod "pod-subpath-test-configmap-jg4j": Phase="Running", Reason="", readiness=true. Elapsed: 20.0076401s +Jan 14 03:53:42.429: INFO: Pod "pod-subpath-test-configmap-jg4j": Phase="Running", Reason="", readiness=false. Elapsed: 22.008809348s +Jan 14 03:53:44.429: INFO: Pod "pod-subpath-test-configmap-jg4j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.008815969s +STEP: Saw pod success 01/14/23 03:53:44.429 +Jan 14 03:53:44.430: INFO: Pod "pod-subpath-test-configmap-jg4j" satisfied condition "Succeeded or Failed" +Jan 14 03:53:44.433: INFO: Trying to get logs from node 10.0.1.99 pod pod-subpath-test-configmap-jg4j container test-container-subpath-configmap-jg4j: +STEP: delete the pod 01/14/23 03:53:44.439 +Jan 14 03:53:44.454: INFO: Waiting for pod pod-subpath-test-configmap-jg4j to disappear +Jan 14 03:53:44.457: INFO: Pod pod-subpath-test-configmap-jg4j no longer exists +STEP: Deleting pod pod-subpath-test-configmap-jg4j 01/14/23 03:53:44.457 +Jan 14 03:53:44.457: INFO: Deleting pod "pod-subpath-test-configmap-jg4j" in namespace "subpath-2805" +[AfterEach] [sig-storage] Subpath + test/e2e/framework/node/init/init.go:32 +Jan 14 03:53:44.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Subpath + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Subpath + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Subpath + tear down framework | framework.go:193 +STEP: Destroying namespace "subpath-2805" for this suite. 
01/14/23 03:53:44.465 +------------------------------ +• [SLOW TEST] [24.084 seconds] +[sig-storage] Subpath +test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + test/e2e/storage/subpath.go:36 + should support subpaths with configmap pod [Conformance] + test/e2e/storage/subpath.go:70 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Subpath + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:53:20.386 + Jan 14 03:53:20.386: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename subpath 01/14/23 03:53:20.387 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:53:20.401 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:53:20.403 + [BeforeEach] [sig-storage] Subpath + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 + STEP: Setting up data 01/14/23 03:53:20.405 + [It] should support subpaths with configmap pod [Conformance] + test/e2e/storage/subpath.go:70 + STEP: Creating pod pod-subpath-test-configmap-jg4j 01/14/23 03:53:20.413 + STEP: Creating a pod to test atomic-volume-subpath 01/14/23 03:53:20.413 + Jan 14 03:53:20.421: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-jg4j" in namespace "subpath-2805" to be "Succeeded or Failed" + Jan 14 03:53:20.423: INFO: Pod "pod-subpath-test-configmap-jg4j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.793265ms + Jan 14 03:53:22.428: INFO: Pod "pod-subpath-test-configmap-jg4j": Phase="Running", Reason="", readiness=true. Elapsed: 2.007628085s + Jan 14 03:53:24.429: INFO: Pod "pod-subpath-test-configmap-jg4j": Phase="Running", Reason="", readiness=true. Elapsed: 4.008496198s + Jan 14 03:53:26.429: INFO: Pod "pod-subpath-test-configmap-jg4j": Phase="Running", Reason="", readiness=true. Elapsed: 6.008007197s + Jan 14 03:53:28.429: INFO: Pod "pod-subpath-test-configmap-jg4j": Phase="Running", Reason="", readiness=true. Elapsed: 8.008048184s + Jan 14 03:53:30.429: INFO: Pod "pod-subpath-test-configmap-jg4j": Phase="Running", Reason="", readiness=true. Elapsed: 10.008052028s + Jan 14 03:53:32.430: INFO: Pod "pod-subpath-test-configmap-jg4j": Phase="Running", Reason="", readiness=true. Elapsed: 12.008920441s + Jan 14 03:53:34.428: INFO: Pod "pod-subpath-test-configmap-jg4j": Phase="Running", Reason="", readiness=true. Elapsed: 14.007877209s + Jan 14 03:53:36.429: INFO: Pod "pod-subpath-test-configmap-jg4j": Phase="Running", Reason="", readiness=true. Elapsed: 16.008722517s + Jan 14 03:53:38.430: INFO: Pod "pod-subpath-test-configmap-jg4j": Phase="Running", Reason="", readiness=true. Elapsed: 18.009217226s + Jan 14 03:53:40.428: INFO: Pod "pod-subpath-test-configmap-jg4j": Phase="Running", Reason="", readiness=true. Elapsed: 20.0076401s + Jan 14 03:53:42.429: INFO: Pod "pod-subpath-test-configmap-jg4j": Phase="Running", Reason="", readiness=false. Elapsed: 22.008809348s + Jan 14 03:53:44.429: INFO: Pod "pod-subpath-test-configmap-jg4j": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.008815969s + STEP: Saw pod success 01/14/23 03:53:44.429 + Jan 14 03:53:44.430: INFO: Pod "pod-subpath-test-configmap-jg4j" satisfied condition "Succeeded or Failed" + Jan 14 03:53:44.433: INFO: Trying to get logs from node 10.0.1.99 pod pod-subpath-test-configmap-jg4j container test-container-subpath-configmap-jg4j: + STEP: delete the pod 01/14/23 03:53:44.439 + Jan 14 03:53:44.454: INFO: Waiting for pod pod-subpath-test-configmap-jg4j to disappear + Jan 14 03:53:44.457: INFO: Pod pod-subpath-test-configmap-jg4j no longer exists + STEP: Deleting pod pod-subpath-test-configmap-jg4j 01/14/23 03:53:44.457 + Jan 14 03:53:44.457: INFO: Deleting pod "pod-subpath-test-configmap-jg4j" in namespace "subpath-2805" + [AfterEach] [sig-storage] Subpath + test/e2e/framework/node/init/init.go:32 + Jan 14 03:53:44.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Subpath + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Subpath + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Subpath + tear down framework | framework.go:193 + STEP: Destroying namespace "subpath-2805" for this suite. 01/14/23 03:53:44.465 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for intra-pod communication: udp [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:93 +[BeforeEach] [sig-network] Networking + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:53:44.472 +Jan 14 03:53:44.472: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename pod-network-test 01/14/23 03:53:44.473 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:53:44.487 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:53:44.489 +[BeforeEach] [sig-network] Networking + test/e2e/framework/metrics/init/init.go:31 +[It] should function for intra-pod communication: udp [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:93 +STEP: Performing setup for networking test in namespace pod-network-test-8104 01/14/23 03:53:44.492 +STEP: creating a selector 01/14/23 03:53:44.492 +STEP: Creating the service pods in kubernetes 01/14/23 03:53:44.492 +Jan 14 03:53:44.492: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Jan 14 03:53:44.526: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "pod-network-test-8104" to be "running and ready" +Jan 14 03:53:44.534: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 7.352255ms +Jan 14 03:53:44.534: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Jan 14 03:53:46.538: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.01169447s +Jan 14 03:53:46.538: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jan 14 03:53:48.540: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.013172576s +Jan 14 03:53:48.540: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jan 14 03:53:50.539: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 6.01243916s +Jan 14 03:53:50.539: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jan 14 03:53:52.540: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.013552612s +Jan 14 03:53:52.540: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jan 14 03:53:54.538: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.011973084s +Jan 14 03:53:54.538: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jan 14 03:53:56.540: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.013253127s +Jan 14 03:53:56.540: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jan 14 03:53:58.540: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.013492953s +Jan 14 03:53:58.540: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jan 14 03:54:00.538: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.011964459s +Jan 14 03:54:00.538: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jan 14 03:54:02.540: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.013092515s +Jan 14 03:54:02.540: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jan 14 03:54:04.540: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.013179733s +Jan 14 03:54:04.540: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jan 14 03:54:06.540: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 22.013528494s +Jan 14 03:54:06.540: INFO: The phase of Pod netserver-0 is Running (Ready = true) +Jan 14 03:54:06.540: INFO: Pod "netserver-0" satisfied condition "running and ready" +Jan 14 03:54:06.545: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "pod-network-test-8104" to be "running and ready" +Jan 14 03:54:06.548: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 3.293773ms +Jan 14 03:54:06.548: INFO: The phase of Pod netserver-1 is Running (Ready = true) +Jan 14 03:54:06.548: INFO: Pod "netserver-1" satisfied condition "running and ready" +Jan 14 03:54:06.551: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "pod-network-test-8104" to be "running and ready" +Jan 14 03:54:06.554: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 2.946445ms +Jan 14 03:54:06.554: INFO: The phase of Pod netserver-2 is Running (Ready = true) +Jan 14 03:54:06.554: INFO: Pod "netserver-2" satisfied condition "running and ready" +STEP: Creating test pods 01/14/23 03:54:06.557 +Jan 14 03:54:06.563: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "pod-network-test-8104" to be "running" +Jan 14 03:54:06.565: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.705787ms +Jan 14 03:54:08.570: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.007828072s +Jan 14 03:54:08.570: INFO: Pod "test-container-pod" satisfied condition "running" +Jan 14 03:54:08.573: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 +Jan 14 03:54:08.573: INFO: Breadth first check of 10.52.1.45 on host 10.0.1.106... 
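(Annotation: the "breadth first check" lines that follow shell into test-container-pod and curl the agnhost `/dial` endpoint on port 9080, asking it to dial each netserver over UDP. Below is a minimal standalone sketch of that same probe, assuming the pod IP is directly reachable and that the endpoint returns JSON of the form `{"responses":[...]}`; the IPs and port are the ones from this run.)

```go
// probe_dial.go — a sketch of the connectivity probe driven via curl below.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

func main() {
	// Build the same /dial request seen in the log lines that follow.
	q := url.Values{}
	q.Set("request", "hostname") // ask the target to echo its hostname
	q.Set("protocol", "udp")
	q.Set("host", "10.52.1.45") // target netserver pod IP (from this run)
	q.Set("port", "8081")
	q.Set("tries", "1")
	u := "http://10.52.1.126:9080/dial?" + q.Encode()

	resp, err := http.Get(u)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Assumed reply shape: {"responses":["netserver-0"]} — one entry per
	// target that answered the UDP dial.
	var out struct {
		Responses []string `json:"responses"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Println("hosts that answered:", out.Responses)
}
```

(The test counts a target as reached when at least one response comes back within the allotted tries, which is why "reached ... after 0/1 tries" appears below.)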
+Jan 14 03:54:08.576: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.52.1.126:9080/dial?request=hostname&protocol=udp&host=10.52.1.45&port=8081&tries=1'] Namespace:pod-network-test-8104 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jan 14 03:54:08.576: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +Jan 14 03:54:08.577: INFO: ExecWithOptions: Clientset creation +Jan 14 03:54:08.577: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/pod-network-test-8104/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F10.52.1.126%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D10.52.1.45%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) +Jan 14 03:54:08.628: INFO: Waiting for responses: map[] +Jan 14 03:54:08.628: INFO: reached 10.52.1.45 after 0/1 tries +Jan 14 03:54:08.628: INFO: Breadth first check of 10.52.0.234 on host 10.0.1.212... +Jan 14 03:54:08.631: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.52.1.126:9080/dial?request=hostname&protocol=udp&host=10.52.0.234&port=8081&tries=1'] Namespace:pod-network-test-8104 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jan 14 03:54:08.631: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +Jan 14 03:54:08.632: INFO: ExecWithOptions: Clientset creation +Jan 14 03:54:08.632: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/pod-network-test-8104/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F10.52.1.126%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D10.52.0.234%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) +Jan 14 03:54:08.679: INFO: Waiting for responses: map[] +Jan 14 03:54:08.679: INFO: reached 10.52.0.234 after 0/1 tries +Jan 14 03:54:08.679: INFO: Breadth first check of 10.52.1.125 on host 10.0.1.99... +Jan 14 03:54:08.682: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.52.1.126:9080/dial?request=hostname&protocol=udp&host=10.52.1.125&port=8081&tries=1'] Namespace:pod-network-test-8104 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jan 14 03:54:08.682: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +Jan 14 03:54:08.683: INFO: ExecWithOptions: Clientset creation +Jan 14 03:54:08.683: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/pod-network-test-8104/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F10.52.1.126%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D10.52.1.125%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) +Jan 14 03:54:08.728: INFO: Waiting for responses: map[] +Jan 14 03:54:08.728: INFO: reached 10.52.1.125 after 0/1 tries +Jan 14 03:54:08.728: INFO: Going to retry 0 out of 3 pods.... 
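(Annotation: the `ExecWithOptions ... execute(POST .../pods/test-container-pod/exec?...)` lines above are the pod `exec` subresource streamed over SPDY. A minimal client-go sketch of the same call — not the framework's own helper — using the namespace, pod, container, and kubeconfig path from this run; the command here is an illustrative stand-in for the curl invocation:)

```go
// exec_sketch.go — stream a command's output from a running pod.
package main

import (
	"bytes"
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig-1841037317")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// POST to the pod's exec subresource, as in the URLs logged above.
	req := client.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("pod-network-test-8104").
		Name("test-container-pod").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "webserver",
			Command:   []string{"/bin/sh", "-c", "hostname"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	if err := exec.StreamWithContext(context.Background(), remotecommand.StreamOptions{
		Stdout: &stdout, Stderr: &stderr,
	}); err != nil {
		panic(err)
	}
	fmt.Print(stdout.String())
}
```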
+[AfterEach] [sig-network] Networking + test/e2e/framework/node/init/init.go:32 +Jan 14 03:54:08.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Networking + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] Networking + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] Networking + tear down framework | framework.go:193 +STEP: Destroying namespace "pod-network-test-8104" for this suite. 01/14/23 03:54:08.733 +------------------------------ +• [SLOW TEST] [24.266 seconds] +[sig-network] Networking +test/e2e/common/network/framework.go:23 + Granular Checks: Pods + test/e2e/common/network/networking.go:32 + should function for intra-pod communication: udp [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:93 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Networking + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:53:44.472 + Jan 14 03:53:44.472: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename pod-network-test 01/14/23 03:53:44.473 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:53:44.487 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:53:44.489 + [BeforeEach] [sig-network] Networking + test/e2e/framework/metrics/init/init.go:31 + [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:93 + STEP: Performing setup for networking test in namespace pod-network-test-8104 01/14/23 03:53:44.492 + STEP: creating a selector 01/14/23 03:53:44.492 + STEP: Creating the service pods in kubernetes 01/14/23 03:53:44.492 + Jan 14 03:53:44.492: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable + Jan 14 03:53:44.526: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "pod-network-test-8104" to be "running and ready" + Jan 14 03:53:44.534: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 7.352255ms + Jan 14 03:53:44.534: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) + Jan 14 03:53:46.538: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.01169447s + Jan 14 03:53:46.538: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jan 14 03:53:48.540: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.013172576s + Jan 14 03:53:48.540: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jan 14 03:53:50.539: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.01243916s + Jan 14 03:53:50.539: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jan 14 03:53:52.540: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.013552612s + Jan 14 03:53:52.540: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jan 14 03:53:54.538: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.011973084s + Jan 14 03:53:54.538: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jan 14 03:53:56.540: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 12.013253127s + Jan 14 03:53:56.540: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jan 14 03:53:58.540: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.013492953s + Jan 14 03:53:58.540: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jan 14 03:54:00.538: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.011964459s + Jan 14 03:54:00.538: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jan 14 03:54:02.540: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.013092515s + Jan 14 03:54:02.540: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jan 14 03:54:04.540: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.013179733s + Jan 14 03:54:04.540: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jan 14 03:54:06.540: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 22.013528494s + Jan 14 03:54:06.540: INFO: The phase of Pod netserver-0 is Running (Ready = true) + Jan 14 03:54:06.540: INFO: Pod "netserver-0" satisfied condition "running and ready" + Jan 14 03:54:06.545: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "pod-network-test-8104" to be "running and ready" + Jan 14 03:54:06.548: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 3.293773ms + Jan 14 03:54:06.548: INFO: The phase of Pod netserver-1 is Running (Ready = true) + Jan 14 03:54:06.548: INFO: Pod "netserver-1" satisfied condition "running and ready" + Jan 14 03:54:06.551: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "pod-network-test-8104" to be "running and ready" + Jan 14 03:54:06.554: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 2.946445ms + Jan 14 03:54:06.554: INFO: The phase of Pod netserver-2 is Running (Ready = true) + Jan 14 03:54:06.554: INFO: Pod "netserver-2" satisfied condition "running and ready" + STEP: Creating test pods 01/14/23 03:54:06.557 + Jan 14 03:54:06.563: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "pod-network-test-8104" to be "running" + Jan 14 03:54:06.565: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.705787ms + Jan 14 03:54:08.570: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.007828072s + Jan 14 03:54:08.570: INFO: Pod "test-container-pod" satisfied condition "running" + Jan 14 03:54:08.573: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 + Jan 14 03:54:08.573: INFO: Breadth first check of 10.52.1.45 on host 10.0.1.106... 
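(Annotation: the dial targets in this check are the netserver pods answering on UDP 8081. As a toy stand-in for that responder side — assuming the simple request/reply convention the probe exercises, i.e. a literal "hostname" payload answered with the host's name — a sketch:)

```go
// udp_hostname.go — a toy UDP responder in the spirit of the netserver pods.
package main

import (
	"net"
	"os"
)

func main() {
	// Listen where the netserver pods in this test accept UDP probes.
	conn, err := net.ListenPacket("udp", ":8081")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	host, _ := os.Hostname()
	buf := make([]byte, 1024)
	for {
		n, addr, err := conn.ReadFrom(buf)
		if err != nil {
			continue
		}
		// The probe sends the literal command "hostname"; anything else
		// is ignored in this toy version.
		if string(buf[:n]) == "hostname" {
			conn.WriteTo([]byte(host), addr)
		}
	}
}
```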
+ Jan 14 03:54:08.576: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.52.1.126:9080/dial?request=hostname&protocol=udp&host=10.52.1.45&port=8081&tries=1'] Namespace:pod-network-test-8104 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jan 14 03:54:08.576: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + Jan 14 03:54:08.577: INFO: ExecWithOptions: Clientset creation + Jan 14 03:54:08.577: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/pod-network-test-8104/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F10.52.1.126%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D10.52.1.45%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) + Jan 14 03:54:08.628: INFO: Waiting for responses: map[] + Jan 14 03:54:08.628: INFO: reached 10.52.1.45 after 0/1 tries + Jan 14 03:54:08.628: INFO: Breadth first check of 10.52.0.234 on host 10.0.1.212... + Jan 14 03:54:08.631: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.52.1.126:9080/dial?request=hostname&protocol=udp&host=10.52.0.234&port=8081&tries=1'] Namespace:pod-network-test-8104 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jan 14 03:54:08.631: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + Jan 14 03:54:08.632: INFO: ExecWithOptions: Clientset creation + Jan 14 03:54:08.632: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/pod-network-test-8104/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F10.52.1.126%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D10.52.0.234%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) + Jan 14 03:54:08.679: INFO: Waiting for responses: map[] + Jan 14 03:54:08.679: INFO: reached 10.52.0.234 after 0/1 tries + Jan 14 03:54:08.679: INFO: Breadth first check of 10.52.1.125 on host 10.0.1.99... + Jan 14 03:54:08.682: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.52.1.126:9080/dial?request=hostname&protocol=udp&host=10.52.1.125&port=8081&tries=1'] Namespace:pod-network-test-8104 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jan 14 03:54:08.682: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + Jan 14 03:54:08.683: INFO: ExecWithOptions: Clientset creation + Jan 14 03:54:08.683: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/pod-network-test-8104/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F10.52.1.126%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D10.52.1.125%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) + Jan 14 03:54:08.728: INFO: Waiting for responses: map[] + Jan 14 03:54:08.728: INFO: reached 10.52.1.125 after 0/1 tries + Jan 14 03:54:08.728: INFO: Going to retry 0 out of 3 pods.... 
+ [AfterEach] [sig-network] Networking + test/e2e/framework/node/init/init.go:32 + Jan 14 03:54:08.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Networking + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] Networking + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] Networking + tear down framework | framework.go:193 + STEP: Destroying namespace "pod-network-test-8104" for this suite. 01/14/23 03:54:08.733 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:57 +[BeforeEach] [sig-storage] Secrets + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:54:08.739 +Jan 14 03:54:08.739: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename secrets 01/14/23 03:54:08.74 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:54:08.759 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:54:08.762 +[BeforeEach] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:57 +STEP: Creating secret with name secret-test-d8940fb6-74dc-46e5-b321-ce7341854dab 01/14/23 03:54:08.764 +STEP: Creating a pod to test consume secrets 01/14/23 03:54:08.767 +Jan 14 03:54:08.776: INFO: Waiting up to 5m0s for pod "pod-secrets-bb648ffb-5388-42c1-9b23-578fe8850d35" in namespace "secrets-5458" to be "Succeeded or Failed" +Jan 14 03:54:08.779: INFO: Pod "pod-secrets-bb648ffb-5388-42c1-9b23-578fe8850d35": Phase="Pending", Reason="", readiness=false. Elapsed: 2.829706ms +Jan 14 03:54:10.784: INFO: Pod "pod-secrets-bb648ffb-5388-42c1-9b23-578fe8850d35": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007813253s +Jan 14 03:54:12.785: INFO: Pod "pod-secrets-bb648ffb-5388-42c1-9b23-578fe8850d35": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008590842s +STEP: Saw pod success 01/14/23 03:54:12.785 +Jan 14 03:54:12.785: INFO: Pod "pod-secrets-bb648ffb-5388-42c1-9b23-578fe8850d35" satisfied condition "Succeeded or Failed" +Jan 14 03:54:12.788: INFO: Trying to get logs from node 10.0.1.106 pod pod-secrets-bb648ffb-5388-42c1-9b23-578fe8850d35 container secret-volume-test: +STEP: delete the pod 01/14/23 03:54:12.801 +Jan 14 03:54:12.814: INFO: Waiting for pod pod-secrets-bb648ffb-5388-42c1-9b23-578fe8850d35 to disappear +Jan 14 03:54:12.816: INFO: Pod pod-secrets-bb648ffb-5388-42c1-9b23-578fe8850d35 no longer exists +[AfterEach] [sig-storage] Secrets + test/e2e/framework/node/init/init.go:32 +Jan 14 03:54:12.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Secrets + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Secrets + tear down framework | framework.go:193 +STEP: Destroying namespace "secrets-5458" for this suite. 
01/14/23 03:54:12.82 +------------------------------ +• [4.087 seconds] +[sig-storage] Secrets +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:57 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Secrets + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:54:08.739 + Jan 14 03:54:08.739: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename secrets 01/14/23 03:54:08.74 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:54:08.759 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:54:08.762 + [BeforeEach] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:57 + STEP: Creating secret with name secret-test-d8940fb6-74dc-46e5-b321-ce7341854dab 01/14/23 03:54:08.764 + STEP: Creating a pod to test consume secrets 01/14/23 03:54:08.767 + Jan 14 03:54:08.776: INFO: Waiting up to 5m0s for pod "pod-secrets-bb648ffb-5388-42c1-9b23-578fe8850d35" in namespace "secrets-5458" to be "Succeeded or Failed" + Jan 14 03:54:08.779: INFO: Pod "pod-secrets-bb648ffb-5388-42c1-9b23-578fe8850d35": Phase="Pending", Reason="", readiness=false. Elapsed: 2.829706ms + Jan 14 03:54:10.784: INFO: Pod "pod-secrets-bb648ffb-5388-42c1-9b23-578fe8850d35": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007813253s + Jan 14 03:54:12.785: INFO: Pod "pod-secrets-bb648ffb-5388-42c1-9b23-578fe8850d35": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008590842s + STEP: Saw pod success 01/14/23 03:54:12.785 + Jan 14 03:54:12.785: INFO: Pod "pod-secrets-bb648ffb-5388-42c1-9b23-578fe8850d35" satisfied condition "Succeeded or Failed" + Jan 14 03:54:12.788: INFO: Trying to get logs from node 10.0.1.106 pod pod-secrets-bb648ffb-5388-42c1-9b23-578fe8850d35 container secret-volume-test: + STEP: delete the pod 01/14/23 03:54:12.801 + Jan 14 03:54:12.814: INFO: Waiting for pod pod-secrets-bb648ffb-5388-42c1-9b23-578fe8850d35 to disappear + Jan 14 03:54:12.816: INFO: Pod pod-secrets-bb648ffb-5388-42c1-9b23-578fe8850d35 no longer exists + [AfterEach] [sig-storage] Secrets + test/e2e/framework/node/init/init.go:32 + Jan 14 03:54:12.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Secrets + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Secrets + tear down framework | framework.go:193 + STEP: Destroying namespace "secrets-5458" for this suite. 
01/14/23 03:54:12.82 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + should be able to convert from CR v1 to CR v2 [Conformance] + test/e2e/apimachinery/crd_conversion_webhook.go:149 +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:54:12.827 +Jan 14 03:54:12.827: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename crd-webhook 01/14/23 03:54:12.828 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:54:12.841 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:54:12.843 +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/crd_conversion_webhook.go:128 +STEP: Setting up server cert 01/14/23 03:54:12.845 +STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication 01/14/23 03:54:13.474 +STEP: Deploying the custom resource conversion webhook pod 01/14/23 03:54:13.484 +STEP: Wait for the deployment to be ready 01/14/23 03:54:13.494 +Jan 14 03:54:13.502: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set +Jan 14 03:54:15.513: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 14, 3, 54, 13, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 3, 54, 13, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 3, 54, 13, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 3, 54, 13, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-74ff66dd47\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service 01/14/23 03:54:17.518 +STEP: Verifying the service has paired with the endpoint 01/14/23 03:54:17.528 +Jan 14 03:54:18.528: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 +[It] should be able to convert from CR v1 to CR v2 [Conformance] + test/e2e/apimachinery/crd_conversion_webhook.go:149 +Jan 14 03:54:18.533: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Creating a v1 custom resource 01/14/23 03:54:21.117 +STEP: v2 custom resource should be converted 01/14/23 03:54:21.121 +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Jan 14 03:54:21.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/crd_conversion_webhook.go:139 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup 
(Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "crd-webhook-833" for this suite. 01/14/23 03:54:21.677 +------------------------------ +• [SLOW TEST] [8.855 seconds] +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should be able to convert from CR v1 to CR v2 [Conformance] + test/e2e/apimachinery/crd_conversion_webhook.go:149 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:54:12.827 + Jan 14 03:54:12.827: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename crd-webhook 01/14/23 03:54:12.828 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:54:12.841 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:54:12.843 + [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/crd_conversion_webhook.go:128 + STEP: Setting up server cert 01/14/23 03:54:12.845 + STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication 01/14/23 03:54:13.474 + STEP: Deploying the custom resource conversion webhook pod 01/14/23 03:54:13.484 + STEP: Wait for the deployment to be ready 01/14/23 03:54:13.494 + Jan 14 03:54:13.502: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set + Jan 14 03:54:15.513: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 14, 3, 54, 13, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 3, 54, 13, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 3, 54, 13, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 3, 54, 13, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-74ff66dd47\" is progressing."}}, CollisionCount:(*int32)(nil)} + STEP: Deploying the webhook service 01/14/23 03:54:17.518 + STEP: Verifying the service has paired with the endpoint 01/14/23 03:54:17.528 + Jan 14 03:54:18.528: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 + [It] should be able to convert from CR v1 to CR v2 [Conformance] + test/e2e/apimachinery/crd_conversion_webhook.go:149 + Jan 14 03:54:18.533: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Creating a v1 custom resource 01/14/23 03:54:21.117 + STEP: v2 custom resource should be converted 01/14/23 03:54:21.121 + [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + 
Jan 14 03:54:21.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/crd_conversion_webhook.go:139 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "crd-webhook-833" for this suite. 01/14/23 03:54:21.677 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl patch + should add annotations for pods in rc [Conformance] + test/e2e/kubectl/kubectl.go:1652 +[BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:54:21.682 +Jan 14 03:54:21.682: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename kubectl 01/14/23 03:54:21.683 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:54:21.697 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:54:21.699 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[It] should add annotations for pods in rc [Conformance] + test/e2e/kubectl/kubectl.go:1652 +STEP: creating Agnhost RC 01/14/23 03:54:21.702 +Jan 14 03:54:21.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-2905 create -f -' +Jan 14 03:54:22.426: INFO: stderr: "" +Jan 14 03:54:22.426: INFO: stdout: "replicationcontroller/agnhost-primary created\n" +STEP: Waiting for Agnhost primary to start. 01/14/23 03:54:22.426 +Jan 14 03:54:23.431: INFO: Selector matched 1 pods for map[app:agnhost] +Jan 14 03:54:23.431: INFO: Found 0 / 1 +Jan 14 03:54:24.431: INFO: Selector matched 1 pods for map[app:agnhost] +Jan 14 03:54:24.431: INFO: Found 1 / 1 +Jan 14 03:54:24.431: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +STEP: patching all pods 01/14/23 03:54:24.431 +Jan 14 03:54:24.434: INFO: Selector matched 1 pods for map[app:agnhost] +Jan 14 03:54:24.434: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +Jan 14 03:54:24.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-2905 patch pod agnhost-primary-6xrch -p {"metadata":{"annotations":{"x":"y"}}}' +Jan 14 03:54:24.506: INFO: stderr: "" +Jan 14 03:54:24.506: INFO: stdout: "pod/agnhost-primary-6xrch patched\n" +STEP: checking annotations 01/14/23 03:54:24.506 +Jan 14 03:54:24.509: INFO: Selector matched 1 pods for map[app:agnhost] +Jan 14 03:54:24.509: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
+[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 +Jan 14 03:54:24.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 +STEP: Destroying namespace "kubectl-2905" for this suite. 01/14/23 03:54:24.514 +------------------------------ +• [2.837 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl patch + test/e2e/kubectl/kubectl.go:1646 + should add annotations for pods in rc [Conformance] + test/e2e/kubectl/kubectl.go:1652 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:54:21.682 + Jan 14 03:54:21.682: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename kubectl 01/14/23 03:54:21.683 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:54:21.697 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:54:21.699 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [It] should add annotations for pods in rc [Conformance] + test/e2e/kubectl/kubectl.go:1652 + STEP: creating Agnhost RC 01/14/23 03:54:21.702 + Jan 14 03:54:21.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-2905 create -f -' + Jan 14 03:54:22.426: INFO: stderr: "" + Jan 14 03:54:22.426: INFO: stdout: "replicationcontroller/agnhost-primary created\n" + STEP: Waiting for Agnhost primary to start. 01/14/23 03:54:22.426 + Jan 14 03:54:23.431: INFO: Selector matched 1 pods for map[app:agnhost] + Jan 14 03:54:23.431: INFO: Found 0 / 1 + Jan 14 03:54:24.431: INFO: Selector matched 1 pods for map[app:agnhost] + Jan 14 03:54:24.431: INFO: Found 1 / 1 + Jan 14 03:54:24.431: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 + STEP: patching all pods 01/14/23 03:54:24.431 + Jan 14 03:54:24.434: INFO: Selector matched 1 pods for map[app:agnhost] + Jan 14 03:54:24.434: INFO: ForEach: Found 1 pods from the filter. Now looping through them. + Jan 14 03:54:24.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-2905 patch pod agnhost-primary-6xrch -p {"metadata":{"annotations":{"x":"y"}}}' + Jan 14 03:54:24.506: INFO: stderr: "" + Jan 14 03:54:24.506: INFO: stdout: "pod/agnhost-primary-6xrch patched\n" + STEP: checking annotations 01/14/23 03:54:24.506 + Jan 14 03:54:24.509: INFO: Selector matched 1 pods for map[app:agnhost] + Jan 14 03:54:24.509: INFO: ForEach: Found 1 pods from the filter. Now looping through them. + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 + Jan 14 03:54:24.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 + STEP: Destroying namespace "kubectl-2905" for this suite. 
01/14/23 03:54:24.514 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + test/e2e/apimachinery/webhook.go:277 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:54:24.521 +Jan 14 03:54:24.521: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename webhook 01/14/23 03:54:24.522 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:54:24.535 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:54:24.537 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 01/14/23 03:54:24.549 +STEP: Create role binding to let webhook read extension-apiserver-authentication 01/14/23 03:54:24.913 +STEP: Deploying the webhook pod 01/14/23 03:54:24.918 +STEP: Wait for the deployment to be ready 01/14/23 03:54:24.93 +Jan 14 03:54:24.936: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 01/14/23 03:54:26.946 +STEP: Verifying the service has paired with the endpoint 01/14/23 03:54:26.956 +Jan 14 03:54:27.957: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + test/e2e/apimachinery/webhook.go:277 +STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API 01/14/23 03:54:27.961 +STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API 01/14/23 03:54:27.974 +STEP: Creating a dummy validating-webhook-configuration object 01/14/23 03:54:27.987 +STEP: Deleting the validating-webhook-configuration, which should be possible to remove 01/14/23 03:54:27.994 +STEP: Creating a dummy mutating-webhook-configuration object 01/14/23 03:54:27.999 +STEP: Deleting the mutating-webhook-configuration, which should be possible to remove 01/14/23 03:54:28.006 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Jan 14 03:54:28.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "webhook-2597" for this suite. 01/14/23 03:54:28.059 +STEP: Destroying namespace "webhook-2597-markers" for this suite. 
01/14/23 03:54:28.066 +------------------------------ +• [3.550 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + test/e2e/apimachinery/webhook.go:277 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:54:24.521 + Jan 14 03:54:24.521: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename webhook 01/14/23 03:54:24.522 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:54:24.535 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:54:24.537 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 01/14/23 03:54:24.549 + STEP: Create role binding to let webhook read extension-apiserver-authentication 01/14/23 03:54:24.913 + STEP: Deploying the webhook pod 01/14/23 03:54:24.918 + STEP: Wait for the deployment to be ready 01/14/23 03:54:24.93 + Jan 14 03:54:24.936: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 01/14/23 03:54:26.946 + STEP: Verifying the service has paired with the endpoint 01/14/23 03:54:26.956 + Jan 14 03:54:27.957: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] + test/e2e/apimachinery/webhook.go:277 + STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API 01/14/23 03:54:27.961 + STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API 01/14/23 03:54:27.974 + STEP: Creating a dummy validating-webhook-configuration object 01/14/23 03:54:27.987 + STEP: Deleting the validating-webhook-configuration, which should be possible to remove 01/14/23 03:54:27.994 + STEP: Creating a dummy mutating-webhook-configuration object 01/14/23 03:54:27.999 + STEP: Deleting the mutating-webhook-configuration, which should be possible to remove 01/14/23 03:54:28.006 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Jan 14 03:54:28.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "webhook-2597" for this suite. 01/14/23 03:54:28.059 + STEP: Destroying namespace "webhook-2597-markers" for this suite. 
01/14/23 03:54:28.066 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should test the lifecycle of an Endpoint [Conformance] + test/e2e/network/service.go:3244 +[BeforeEach] [sig-network] Services + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:54:28.072 +Jan 14 03:54:28.072: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename services 01/14/23 03:54:28.073 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:54:28.086 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:54:28.088 +[BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should test the lifecycle of an Endpoint [Conformance] + test/e2e/network/service.go:3244 +STEP: creating an Endpoint 01/14/23 03:54:28.094 +STEP: waiting for available Endpoint 01/14/23 03:54:28.099 +STEP: listing all Endpoints 01/14/23 03:54:28.1 +STEP: updating the Endpoint 01/14/23 03:54:28.104 +STEP: fetching the Endpoint 01/14/23 03:54:28.11 +STEP: patching the Endpoint 01/14/23 03:54:28.112 +STEP: fetching the Endpoint 01/14/23 03:54:28.12 +STEP: deleting the Endpoint by Collection 01/14/23 03:54:28.123 +STEP: waiting for Endpoint deletion 01/14/23 03:54:28.13 +STEP: fetching the Endpoint 01/14/23 03:54:28.131 +[AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 +Jan 14 03:54:28.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 +STEP: Destroying namespace "services-3608" for this suite. 
01/14/23 03:54:28.139 +------------------------------ +• [0.072 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should test the lifecycle of an Endpoint [Conformance] + test/e2e/network/service.go:3244 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:54:28.072 + Jan 14 03:54:28.072: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename services 01/14/23 03:54:28.073 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:54:28.086 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:54:28.088 + [BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should test the lifecycle of an Endpoint [Conformance] + test/e2e/network/service.go:3244 + STEP: creating an Endpoint 01/14/23 03:54:28.094 + STEP: waiting for available Endpoint 01/14/23 03:54:28.099 + STEP: listing all Endpoints 01/14/23 03:54:28.1 + STEP: updating the Endpoint 01/14/23 03:54:28.104 + STEP: fetching the Endpoint 01/14/23 03:54:28.11 + STEP: patching the Endpoint 01/14/23 03:54:28.112 + STEP: fetching the Endpoint 01/14/23 03:54:28.12 + STEP: deleting the Endpoint by Collection 01/14/23 03:54:28.123 + STEP: waiting for Endpoint deletion 01/14/23 03:54:28.13 + STEP: fetching the Endpoint 01/14/23 03:54:28.131 + [AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 + Jan 14 03:54:28.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 + STEP: Destroying namespace "services-3608" for this suite. 01/14/23 03:54:28.139 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a service. [Conformance] + test/e2e/apimachinery/resource_quota.go:100 +[BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:54:28.145 +Jan 14 03:54:28.145: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename resourcequota 01/14/23 03:54:28.146 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:54:28.16 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:54:28.162 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 +[It] should create a ResourceQuota and capture the life of a service. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:100 +STEP: Counting existing ResourceQuota 01/14/23 03:54:28.164 +STEP: Creating a ResourceQuota 01/14/23 03:54:33.168 +STEP: Ensuring resource quota status is calculated 01/14/23 03:54:33.173 +STEP: Creating a Service 01/14/23 03:54:35.177 +STEP: Creating a NodePort Service 01/14/23 03:54:35.192 +STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota 01/14/23 03:54:35.212 +STEP: Ensuring resource quota status captures service creation 01/14/23 03:54:35.231 +STEP: Deleting Services 01/14/23 03:54:37.236 +STEP: Ensuring resource quota status released usage 01/14/23 03:54:37.267 +[AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/node/init/init.go:32 +Jan 14 03:54:39.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + tear down framework | framework.go:193 +STEP: Destroying namespace "resourcequota-7803" for this suite. 01/14/23 03:54:39.277 +------------------------------ +• [SLOW TEST] [11.139 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a service. [Conformance] + test/e2e/apimachinery/resource_quota.go:100 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:54:28.145 + Jan 14 03:54:28.145: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename resourcequota 01/14/23 03:54:28.146 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:54:28.16 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:54:28.162 + [BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 + [It] should create a ResourceQuota and capture the life of a service. [Conformance] + test/e2e/apimachinery/resource_quota.go:100 + STEP: Counting existing ResourceQuota 01/14/23 03:54:28.164 + STEP: Creating a ResourceQuota 01/14/23 03:54:33.168 + STEP: Ensuring resource quota status is calculated 01/14/23 03:54:33.173 + STEP: Creating a Service 01/14/23 03:54:35.177 + STEP: Creating a NodePort Service 01/14/23 03:54:35.192 + STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota 01/14/23 03:54:35.212 + STEP: Ensuring resource quota status captures service creation 01/14/23 03:54:35.231 + STEP: Deleting Services 01/14/23 03:54:37.236 + STEP: Ensuring resource quota status released usage 01/14/23 03:54:37.267 + [AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/node/init/init.go:32 + Jan 14 03:54:39.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + tear down framework | framework.go:193 + STEP: Destroying namespace "resourcequota-7803" for this suite. 
01/14/23 03:54:39.277 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:47 +[BeforeEach] [sig-storage] Secrets + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:54:39.284 +Jan 14 03:54:39.284: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename secrets 01/14/23 03:54:39.285 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:54:39.299 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:54:39.301 +[BeforeEach] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:47 +STEP: Creating secret with name secret-test-b19186fe-2f1b-4d59-8f9e-f85c1ae01031 01/14/23 03:54:39.303 +STEP: Creating a pod to test consume secrets 01/14/23 03:54:39.307 +Jan 14 03:54:39.315: INFO: Waiting up to 5m0s for pod "pod-secrets-fa125cb6-e16e-4f22-87ef-e30157344c9c" in namespace "secrets-9936" to be "Succeeded or Failed" +Jan 14 03:54:39.318: INFO: Pod "pod-secrets-fa125cb6-e16e-4f22-87ef-e30157344c9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.947085ms +Jan 14 03:54:41.323: INFO: Pod "pod-secrets-fa125cb6-e16e-4f22-87ef-e30157344c9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008302273s +Jan 14 03:54:43.324: INFO: Pod "pod-secrets-fa125cb6-e16e-4f22-87ef-e30157344c9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008426741s +STEP: Saw pod success 01/14/23 03:54:43.324 +Jan 14 03:54:43.324: INFO: Pod "pod-secrets-fa125cb6-e16e-4f22-87ef-e30157344c9c" satisfied condition "Succeeded or Failed" +Jan 14 03:54:43.327: INFO: Trying to get logs from node 10.0.1.99 pod pod-secrets-fa125cb6-e16e-4f22-87ef-e30157344c9c container secret-volume-test: +STEP: delete the pod 01/14/23 03:54:43.333 +Jan 14 03:54:43.347: INFO: Waiting for pod pod-secrets-fa125cb6-e16e-4f22-87ef-e30157344c9c to disappear +Jan 14 03:54:43.350: INFO: Pod pod-secrets-fa125cb6-e16e-4f22-87ef-e30157344c9c no longer exists +[AfterEach] [sig-storage] Secrets + test/e2e/framework/node/init/init.go:32 +Jan 14 03:54:43.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Secrets + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Secrets + tear down framework | framework.go:193 +STEP: Destroying namespace "secrets-9936" for this suite. 
01/14/23 03:54:43.355 +------------------------------ +• [4.079 seconds] +[sig-storage] Secrets +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:47 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Secrets + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:54:39.284 + Jan 14 03:54:39.284: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename secrets 01/14/23 03:54:39.285 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:54:39.299 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:54:39.301 + [BeforeEach] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:47 + STEP: Creating secret with name secret-test-b19186fe-2f1b-4d59-8f9e-f85c1ae01031 01/14/23 03:54:39.303 + STEP: Creating a pod to test consume secrets 01/14/23 03:54:39.307 + Jan 14 03:54:39.315: INFO: Waiting up to 5m0s for pod "pod-secrets-fa125cb6-e16e-4f22-87ef-e30157344c9c" in namespace "secrets-9936" to be "Succeeded or Failed" + Jan 14 03:54:39.318: INFO: Pod "pod-secrets-fa125cb6-e16e-4f22-87ef-e30157344c9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.947085ms + Jan 14 03:54:41.323: INFO: Pod "pod-secrets-fa125cb6-e16e-4f22-87ef-e30157344c9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008302273s + Jan 14 03:54:43.324: INFO: Pod "pod-secrets-fa125cb6-e16e-4f22-87ef-e30157344c9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008426741s + STEP: Saw pod success 01/14/23 03:54:43.324 + Jan 14 03:54:43.324: INFO: Pod "pod-secrets-fa125cb6-e16e-4f22-87ef-e30157344c9c" satisfied condition "Succeeded or Failed" + Jan 14 03:54:43.327: INFO: Trying to get logs from node 10.0.1.99 pod pod-secrets-fa125cb6-e16e-4f22-87ef-e30157344c9c container secret-volume-test: + STEP: delete the pod 01/14/23 03:54:43.333 + Jan 14 03:54:43.347: INFO: Waiting for pod pod-secrets-fa125cb6-e16e-4f22-87ef-e30157344c9c to disappear + Jan 14 03:54:43.350: INFO: Pod pod-secrets-fa125cb6-e16e-4f22-87ef-e30157344c9c no longer exists + [AfterEach] [sig-storage] Secrets + test/e2e/framework/node/init/init.go:32 + Jan 14 03:54:43.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Secrets + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Secrets + tear down framework | framework.go:193 + STEP: Destroying namespace "secrets-9936" for this suite. 
01/14/23 03:54:43.355 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:375 +[BeforeEach] [sig-storage] Projected configMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:54:43.364 +Jan 14 03:54:43.364: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename projected 01/14/23 03:54:43.365 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:54:43.381 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:54:43.383 +[BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:375 +STEP: Creating configMap with name projected-configmap-test-volume-9320cd01-ce3a-47bb-ae06-f6549cd36970 01/14/23 03:54:43.386 +STEP: Creating a pod to test consume configMaps 01/14/23 03:54:43.392 +Jan 14 03:54:43.403: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-53d32bf5-cef0-456b-9d00-dc92ab7d5de6" in namespace "projected-226" to be "Succeeded or Failed" +Jan 14 03:54:43.410: INFO: Pod "pod-projected-configmaps-53d32bf5-cef0-456b-9d00-dc92ab7d5de6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.559645ms +Jan 14 03:54:45.414: INFO: Pod "pod-projected-configmaps-53d32bf5-cef0-456b-9d00-dc92ab7d5de6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011280193s +Jan 14 03:54:47.416: INFO: Pod "pod-projected-configmaps-53d32bf5-cef0-456b-9d00-dc92ab7d5de6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012643815s +STEP: Saw pod success 01/14/23 03:54:47.416 +Jan 14 03:54:47.416: INFO: Pod "pod-projected-configmaps-53d32bf5-cef0-456b-9d00-dc92ab7d5de6" satisfied condition "Succeeded or Failed" +Jan 14 03:54:47.419: INFO: Trying to get logs from node 10.0.1.99 pod pod-projected-configmaps-53d32bf5-cef0-456b-9d00-dc92ab7d5de6 container projected-configmap-volume-test: +STEP: delete the pod 01/14/23 03:54:47.425 +Jan 14 03:54:47.444: INFO: Waiting for pod pod-projected-configmaps-53d32bf5-cef0-456b-9d00-dc92ab7d5de6 to disappear +Jan 14 03:54:47.447: INFO: Pod pod-projected-configmaps-53d32bf5-cef0-456b-9d00-dc92ab7d5de6 no longer exists +[AfterEach] [sig-storage] Projected configMap + test/e2e/framework/node/init/init.go:32 +Jan 14 03:54:47.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected configMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected configMap + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-226" for this suite. 
01/14/23 03:54:47.452 +------------------------------ +• [4.094 seconds] +[sig-storage] Projected configMap +test/e2e/common/storage/framework.go:23 + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:375 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected configMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:54:43.364 + Jan 14 03:54:43.364: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename projected 01/14/23 03:54:43.365 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:54:43.381 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:54:43.383 + [BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:375 + STEP: Creating configMap with name projected-configmap-test-volume-9320cd01-ce3a-47bb-ae06-f6549cd36970 01/14/23 03:54:43.386 + STEP: Creating a pod to test consume configMaps 01/14/23 03:54:43.392 + Jan 14 03:54:43.403: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-53d32bf5-cef0-456b-9d00-dc92ab7d5de6" in namespace "projected-226" to be "Succeeded or Failed" + Jan 14 03:54:43.410: INFO: Pod "pod-projected-configmaps-53d32bf5-cef0-456b-9d00-dc92ab7d5de6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.559645ms + Jan 14 03:54:45.414: INFO: Pod "pod-projected-configmaps-53d32bf5-cef0-456b-9d00-dc92ab7d5de6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011280193s + Jan 14 03:54:47.416: INFO: Pod "pod-projected-configmaps-53d32bf5-cef0-456b-9d00-dc92ab7d5de6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012643815s + STEP: Saw pod success 01/14/23 03:54:47.416 + Jan 14 03:54:47.416: INFO: Pod "pod-projected-configmaps-53d32bf5-cef0-456b-9d00-dc92ab7d5de6" satisfied condition "Succeeded or Failed" + Jan 14 03:54:47.419: INFO: Trying to get logs from node 10.0.1.99 pod pod-projected-configmaps-53d32bf5-cef0-456b-9d00-dc92ab7d5de6 container projected-configmap-volume-test: + STEP: delete the pod 01/14/23 03:54:47.425 + Jan 14 03:54:47.444: INFO: Waiting for pod pod-projected-configmaps-53d32bf5-cef0-456b-9d00-dc92ab7d5de6 to disappear + Jan 14 03:54:47.447: INFO: Pod pod-projected-configmaps-53d32bf5-cef0-456b-9d00-dc92ab7d5de6 no longer exists + [AfterEach] [sig-storage] Projected configMap + test/e2e/framework/node/init/init.go:32 + Jan 14 03:54:47.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected configMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected configMap + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-226" for this suite. 
01/14/23 03:54:47.452 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-node] Pods + should run through the lifecycle of Pods and PodStatus [Conformance] + test/e2e/common/node/pods.go:896 +[BeforeEach] [sig-node] Pods + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:54:47.459 +Jan 14 03:54:47.459: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename pods 01/14/23 03:54:47.46 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:54:47.52 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:54:47.523 +[BeforeEach] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 +[It] should run through the lifecycle of Pods and PodStatus [Conformance] + test/e2e/common/node/pods.go:896 +STEP: creating a Pod with a static label 01/14/23 03:54:47.535 +STEP: watching for Pod to be ready 01/14/23 03:54:47.599 +Jan 14 03:54:47.601: INFO: observed Pod pod-test in namespace pods-982 in phase Pending with labels: map[test-pod-static:true] & conditions [] +Jan 14 03:54:47.601: INFO: observed Pod pod-test in namespace pods-982 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 03:54:47 +0000 UTC }] +Jan 14 03:54:47.601: INFO: observed Pod pod-test in namespace pods-982 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 03:54:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 03:54:47 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 03:54:47 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 03:54:47 +0000 UTC }] +Jan 14 03:54:47.986: INFO: observed Pod pod-test in namespace pods-982 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 03:54:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 03:54:47 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 03:54:47 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 03:54:47 +0000 UTC }] +Jan 14 03:54:48.892: INFO: Found Pod pod-test in namespace pods-982 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 03:54:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 03:54:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 03:54:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 03:54:47 +0000 UTC }] +STEP: patching the Pod with a new Label and updated data 01/14/23 03:54:48.896 +STEP: getting the Pod and ensuring that it's patched 01/14/23 03:54:48.905 +STEP: replacing the Pod's status Ready condition to False 01/14/23 03:54:48.908 +STEP: check the Pod again to ensure its Ready conditions are False 01/14/23 03:54:48.919 +STEP: deleting the Pod via a Collection with a LabelSelector 01/14/23 03:54:48.919 +STEP: watching for the Pod to be deleted 01/14/23 
03:54:48.927 +Jan 14 03:54:48.929: INFO: observed event type MODIFIED +Jan 14 03:54:50.898: INFO: observed event type MODIFIED +Jan 14 03:54:51.897: INFO: observed event type MODIFIED +Jan 14 03:54:51.906: INFO: observed event type MODIFIED +[AfterEach] [sig-node] Pods + test/e2e/framework/node/init/init.go:32 +Jan 14 03:54:51.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Pods + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Pods + tear down framework | framework.go:193 +STEP: Destroying namespace "pods-982" for this suite. 01/14/23 03:54:51.916 +------------------------------ +• [4.463 seconds] +[sig-node] Pods +test/e2e/common/node/framework.go:23 + should run through the lifecycle of Pods and PodStatus [Conformance] + test/e2e/common/node/pods.go:896 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Pods + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:54:47.459 + Jan 14 03:54:47.459: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename pods 01/14/23 03:54:47.46 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:54:47.52 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:54:47.523 + [BeforeEach] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 + [It] should run through the lifecycle of Pods and PodStatus [Conformance] + test/e2e/common/node/pods.go:896 + STEP: creating a Pod with a static label 01/14/23 03:54:47.535 + STEP: watching for Pod to be ready 01/14/23 03:54:47.599 + Jan 14 03:54:47.601: INFO: observed Pod pod-test in namespace pods-982 in phase Pending with labels: map[test-pod-static:true] & conditions [] + Jan 14 03:54:47.601: INFO: observed Pod pod-test in namespace pods-982 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 03:54:47 +0000 UTC }] + Jan 14 03:54:47.601: INFO: observed Pod pod-test in namespace pods-982 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 03:54:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 03:54:47 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 03:54:47 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 03:54:47 +0000 UTC }] + Jan 14 03:54:47.986: INFO: observed Pod pod-test in namespace pods-982 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 03:54:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 03:54:47 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 03:54:47 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 03:54:47 +0000 UTC }] + Jan 14 03:54:48.892: INFO: Found Pod pod-test in namespace pods-982 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 
03:54:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 03:54:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 03:54:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 03:54:47 +0000 UTC }] + STEP: patching the Pod with a new Label and updated data 01/14/23 03:54:48.896 + STEP: getting the Pod and ensuring that it's patched 01/14/23 03:54:48.905 + STEP: replacing the Pod's status Ready condition to False 01/14/23 03:54:48.908 + STEP: check the Pod again to ensure its Ready conditions are False 01/14/23 03:54:48.919 + STEP: deleting the Pod via a Collection with a LabelSelector 01/14/23 03:54:48.919 + STEP: watching for the Pod to be deleted 01/14/23 03:54:48.927 + Jan 14 03:54:48.929: INFO: observed event type MODIFIED + Jan 14 03:54:50.898: INFO: observed event type MODIFIED + Jan 14 03:54:51.897: INFO: observed event type MODIFIED + Jan 14 03:54:51.906: INFO: observed event type MODIFIED + [AfterEach] [sig-node] Pods + test/e2e/framework/node/init/init.go:32 + Jan 14 03:54:51.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Pods + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Pods + tear down framework | framework.go:193 + STEP: Destroying namespace "pods-982" for this suite. 01/14/23 03:54:51.916 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSS +------------------------------ +[sig-node] Containers + should be able to override the image's default command and arguments [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:87 +[BeforeEach] [sig-node] Containers + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:54:51.922 +Jan 14 03:54:51.922: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename containers 01/14/23 03:54:51.923 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:54:51.937 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:54:51.94 +[BeforeEach] [sig-node] Containers + test/e2e/framework/metrics/init/init.go:31 +[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:87 +STEP: Creating a pod to test override all 01/14/23 03:54:51.942 +Jan 14 03:54:51.949: INFO: Waiting up to 5m0s for pod "client-containers-1503f5ae-7a32-4388-b085-44016a57bc08" in namespace "containers-4692" to be "Succeeded or Failed" +Jan 14 03:54:51.956: INFO: Pod "client-containers-1503f5ae-7a32-4388-b085-44016a57bc08": Phase="Pending", Reason="", readiness=false. Elapsed: 6.444204ms +Jan 14 03:54:53.960: INFO: Pod "client-containers-1503f5ae-7a32-4388-b085-44016a57bc08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010544942s +Jan 14 03:54:55.961: INFO: Pod "client-containers-1503f5ae-7a32-4388-b085-44016a57bc08": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011603431s +STEP: Saw pod success 01/14/23 03:54:55.961 +Jan 14 03:54:55.961: INFO: Pod "client-containers-1503f5ae-7a32-4388-b085-44016a57bc08" satisfied condition "Succeeded or Failed" +Jan 14 03:54:55.964: INFO: Trying to get logs from node 10.0.1.99 pod client-containers-1503f5ae-7a32-4388-b085-44016a57bc08 container agnhost-container: +STEP: delete the pod 01/14/23 03:54:55.97 +Jan 14 03:54:55.980: INFO: Waiting for pod client-containers-1503f5ae-7a32-4388-b085-44016a57bc08 to disappear +Jan 14 03:54:55.982: INFO: Pod client-containers-1503f5ae-7a32-4388-b085-44016a57bc08 no longer exists +[AfterEach] [sig-node] Containers + test/e2e/framework/node/init/init.go:32 +Jan 14 03:54:55.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Containers + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Containers + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Containers + tear down framework | framework.go:193 +STEP: Destroying namespace "containers-4692" for this suite. 01/14/23 03:54:55.987 +------------------------------ +• [4.069 seconds] +[sig-node] Containers +test/e2e/common/node/framework.go:23 + should be able to override the image's default command and arguments [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:87 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Containers + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:54:51.922 + Jan 14 03:54:51.922: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename containers 01/14/23 03:54:51.923 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:54:51.937 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:54:51.94 + [BeforeEach] [sig-node] Containers + test/e2e/framework/metrics/init/init.go:31 + [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:87 + STEP: Creating a pod to test override all 01/14/23 03:54:51.942 + Jan 14 03:54:51.949: INFO: Waiting up to 5m0s for pod "client-containers-1503f5ae-7a32-4388-b085-44016a57bc08" in namespace "containers-4692" to be "Succeeded or Failed" + Jan 14 03:54:51.956: INFO: Pod "client-containers-1503f5ae-7a32-4388-b085-44016a57bc08": Phase="Pending", Reason="", readiness=false. Elapsed: 6.444204ms + Jan 14 03:54:53.960: INFO: Pod "client-containers-1503f5ae-7a32-4388-b085-44016a57bc08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010544942s + Jan 14 03:54:55.961: INFO: Pod "client-containers-1503f5ae-7a32-4388-b085-44016a57bc08": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011603431s + STEP: Saw pod success 01/14/23 03:54:55.961 + Jan 14 03:54:55.961: INFO: Pod "client-containers-1503f5ae-7a32-4388-b085-44016a57bc08" satisfied condition "Succeeded or Failed" + Jan 14 03:54:55.964: INFO: Trying to get logs from node 10.0.1.99 pod client-containers-1503f5ae-7a32-4388-b085-44016a57bc08 container agnhost-container: + STEP: delete the pod 01/14/23 03:54:55.97 + Jan 14 03:54:55.980: INFO: Waiting for pod client-containers-1503f5ae-7a32-4388-b085-44016a57bc08 to disappear + Jan 14 03:54:55.982: INFO: Pod client-containers-1503f5ae-7a32-4388-b085-44016a57bc08 no longer exists + [AfterEach] [sig-node] Containers + test/e2e/framework/node/init/init.go:32 + Jan 14 03:54:55.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Containers + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Containers + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Containers + tear down framework | framework.go:193 + STEP: Destroying namespace "containers-4692" for this suite. 01/14/23 03:54:55.987 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should deny crd creation [Conformance] + test/e2e/apimachinery/webhook.go:308 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:54:55.992 +Jan 14 03:54:55.992: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename webhook 01/14/23 03:54:55.993 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:54:56.006 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:54:56.008 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 01/14/23 03:54:56.02 +STEP: Create role binding to let webhook read extension-apiserver-authentication 01/14/23 03:54:56.731 +STEP: Deploying the webhook pod 01/14/23 03:54:56.74 +STEP: Wait for the deployment to be ready 01/14/23 03:54:56.751 +Jan 14 03:54:56.758: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 01/14/23 03:54:58.769 +STEP: Verifying the service has paired with the endpoint 01/14/23 03:54:58.779 +Jan 14 03:54:59.779: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should deny crd creation [Conformance] + test/e2e/apimachinery/webhook.go:308 +STEP: Registering the crd webhook via the AdmissionRegistration API 01/14/23 03:54:59.783 +STEP: Creating a custom resource definition that should be denied by the webhook 01/14/23 03:54:59.796 +Jan 14 03:54:59.796: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Jan 14 03:54:59.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 
+[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "webhook-4792" for this suite. 01/14/23 03:54:59.85 +STEP: Destroying namespace "webhook-4792-markers" for this suite. 01/14/23 03:54:59.854 +------------------------------ +• [3.868 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should deny crd creation [Conformance] + test/e2e/apimachinery/webhook.go:308 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:54:55.992 + Jan 14 03:54:55.992: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename webhook 01/14/23 03:54:55.993 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:54:56.006 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:54:56.008 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 01/14/23 03:54:56.02 + STEP: Create role binding to let webhook read extension-apiserver-authentication 01/14/23 03:54:56.731 + STEP: Deploying the webhook pod 01/14/23 03:54:56.74 + STEP: Wait for the deployment to be ready 01/14/23 03:54:56.751 + Jan 14 03:54:56.758: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 01/14/23 03:54:58.769 + STEP: Verifying the service has paired with the endpoint 01/14/23 03:54:58.779 + Jan 14 03:54:59.779: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should deny crd creation [Conformance] + test/e2e/apimachinery/webhook.go:308 + STEP: Registering the crd webhook via the AdmissionRegistration API 01/14/23 03:54:59.783 + STEP: Creating a custom resource definition that should be denied by the webhook 01/14/23 03:54:59.796 + Jan 14 03:54:59.796: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Jan 14 03:54:59.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "webhook-4792" for this suite. 01/14/23 03:54:59.85 + STEP: Destroying namespace "webhook-4792-markers" for this suite. 
01/14/23 03:54:59.854 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSS +------------------------------ +[sig-node] KubeletManagedEtcHosts + should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/kubelet_etc_hosts.go:63 +[BeforeEach] [sig-node] KubeletManagedEtcHosts + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:54:59.861 +Jan 14 03:54:59.861: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts 01/14/23 03:54:59.862 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:54:59.876 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:54:59.878 +[BeforeEach] [sig-node] KubeletManagedEtcHosts + test/e2e/framework/metrics/init/init.go:31 +[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/kubelet_etc_hosts.go:63 +STEP: Setting up the test 01/14/23 03:54:59.88 +STEP: Creating hostNetwork=false pod 01/14/23 03:54:59.88 +Jan 14 03:54:59.889: INFO: Waiting up to 5m0s for pod "test-pod" in namespace "e2e-kubelet-etc-hosts-6262" to be "running and ready" +Jan 14 03:54:59.892: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.842977ms +Jan 14 03:54:59.892: INFO: The phase of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) +Jan 14 03:55:01.897: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.008090471s +Jan 14 03:55:01.897: INFO: The phase of Pod test-pod is Running (Ready = true) +Jan 14 03:55:01.897: INFO: Pod "test-pod" satisfied condition "running and ready" +STEP: Creating hostNetwork=true pod 01/14/23 03:55:01.901 +Jan 14 03:55:01.910: INFO: Waiting up to 5m0s for pod "test-host-network-pod" in namespace "e2e-kubelet-etc-hosts-6262" to be "running and ready" +Jan 14 03:55:01.913: INFO: Pod "test-host-network-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 3.618398ms +Jan 14 03:55:01.913: INFO: The phase of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) +Jan 14 03:55:03.919: INFO: Pod "test-host-network-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.00897097s +Jan 14 03:55:03.919: INFO: The phase of Pod test-host-network-pod is Running (Ready = true) +Jan 14 03:55:03.919: INFO: Pod "test-host-network-pod" satisfied condition "running and ready" +STEP: Running the test 01/14/23 03:55:03.922 +STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false 01/14/23 03:55:03.922 +Jan 14 03:55:03.922: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6262 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jan 14 03:55:03.922: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +Jan 14 03:55:03.923: INFO: ExecWithOptions: Clientset creation +Jan 14 03:55:03.923: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-6262/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true) +Jan 14 03:55:03.968: INFO: Exec stderr: "" +Jan 14 03:55:03.968: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6262 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jan 14 03:55:03.968: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +Jan 14 03:55:03.969: INFO: ExecWithOptions: Clientset creation +Jan 14 03:55:03.969: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-6262/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true) +Jan 14 03:55:04.017: INFO: Exec stderr: "" +Jan 14 03:55:04.017: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6262 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jan 14 03:55:04.017: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +Jan 14 03:55:04.018: INFO: ExecWithOptions: Clientset creation +Jan 14 03:55:04.018: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-6262/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true) +Jan 14 03:55:04.067: INFO: Exec stderr: "" +Jan 14 03:55:04.067: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6262 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jan 14 03:55:04.067: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +Jan 14 03:55:04.067: INFO: ExecWithOptions: Clientset creation +Jan 14 03:55:04.067: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-6262/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-2&container=busybox-2&stderr=true&stdout=true) +Jan 14 03:55:04.120: INFO: Exec stderr: "" +STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount 01/14/23 03:55:04.12 +Jan 14 03:55:04.120: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6262 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jan 14 03:55:04.120: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +Jan 14 03:55:04.121: INFO: ExecWithOptions: Clientset creation +Jan 14 03:55:04.121: INFO: 
ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-6262/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-3&container=busybox-3&stderr=true&stdout=true) +Jan 14 03:55:04.168: INFO: Exec stderr: "" +Jan 14 03:55:04.168: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6262 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jan 14 03:55:04.168: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +Jan 14 03:55:04.168: INFO: ExecWithOptions: Clientset creation +Jan 14 03:55:04.168: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-6262/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-3&container=busybox-3&stderr=true&stdout=true) +Jan 14 03:55:04.219: INFO: Exec stderr: "" +STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true 01/14/23 03:55:04.22 +Jan 14 03:55:04.220: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6262 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jan 14 03:55:04.220: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +Jan 14 03:55:04.220: INFO: ExecWithOptions: Clientset creation +Jan 14 03:55:04.220: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-6262/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true) +Jan 14 03:55:04.266: INFO: Exec stderr: "" +Jan 14 03:55:04.266: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6262 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jan 14 03:55:04.266: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +Jan 14 03:55:04.267: INFO: ExecWithOptions: Clientset creation +Jan 14 03:55:04.267: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-6262/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true) +Jan 14 03:55:04.318: INFO: Exec stderr: "" +Jan 14 03:55:04.318: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6262 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jan 14 03:55:04.318: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +Jan 14 03:55:04.318: INFO: ExecWithOptions: Clientset creation +Jan 14 03:55:04.318: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-6262/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true) +Jan 14 03:55:04.359: INFO: Exec stderr: "" +Jan 14 03:55:04.360: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6262 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jan 14 03:55:04.360: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +Jan 14 03:55:04.360: INFO: ExecWithOptions: Clientset creation +Jan 14 03:55:04.360: INFO: 
ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-6262/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-2&container=busybox-2&stderr=true&stdout=true) +Jan 14 03:55:04.404: INFO: Exec stderr: "" +[AfterEach] [sig-node] KubeletManagedEtcHosts + test/e2e/framework/node/init/init.go:32 +Jan 14 03:55:04.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] KubeletManagedEtcHosts + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] KubeletManagedEtcHosts + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] KubeletManagedEtcHosts + tear down framework | framework.go:193 +STEP: Destroying namespace "e2e-kubelet-etc-hosts-6262" for this suite. 01/14/23 03:55:04.41 +------------------------------ +• [4.555 seconds] +[sig-node] KubeletManagedEtcHosts +test/e2e/common/node/framework.go:23 + should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/kubelet_etc_hosts.go:63 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] KubeletManagedEtcHosts + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:54:59.861 + Jan 14 03:54:59.861: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts 01/14/23 03:54:59.862 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:54:59.876 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:54:59.878 + [BeforeEach] [sig-node] KubeletManagedEtcHosts + test/e2e/framework/metrics/init/init.go:31 + [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/kubelet_etc_hosts.go:63 + STEP: Setting up the test 01/14/23 03:54:59.88 + STEP: Creating hostNetwork=false pod 01/14/23 03:54:59.88 + Jan 14 03:54:59.889: INFO: Waiting up to 5m0s for pod "test-pod" in namespace "e2e-kubelet-etc-hosts-6262" to be "running and ready" + Jan 14 03:54:59.892: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.842977ms + Jan 14 03:54:59.892: INFO: The phase of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) + Jan 14 03:55:01.897: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.008090471s + Jan 14 03:55:01.897: INFO: The phase of Pod test-pod is Running (Ready = true) + Jan 14 03:55:01.897: INFO: Pod "test-pod" satisfied condition "running and ready" + STEP: Creating hostNetwork=true pod 01/14/23 03:55:01.901 + Jan 14 03:55:01.910: INFO: Waiting up to 5m0s for pod "test-host-network-pod" in namespace "e2e-kubelet-etc-hosts-6262" to be "running and ready" + Jan 14 03:55:01.913: INFO: Pod "test-host-network-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 3.618398ms + Jan 14 03:55:01.913: INFO: The phase of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) + Jan 14 03:55:03.919: INFO: Pod "test-host-network-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.00897097s + Jan 14 03:55:03.919: INFO: The phase of Pod test-host-network-pod is Running (Ready = true) + Jan 14 03:55:03.919: INFO: Pod "test-host-network-pod" satisfied condition "running and ready" + STEP: Running the test 01/14/23 03:55:03.922 + STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false 01/14/23 03:55:03.922 + Jan 14 03:55:03.922: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6262 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jan 14 03:55:03.922: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + Jan 14 03:55:03.923: INFO: ExecWithOptions: Clientset creation + Jan 14 03:55:03.923: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-6262/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true) + Jan 14 03:55:03.968: INFO: Exec stderr: "" + Jan 14 03:55:03.968: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6262 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jan 14 03:55:03.968: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + Jan 14 03:55:03.969: INFO: ExecWithOptions: Clientset creation + Jan 14 03:55:03.969: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-6262/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true) + Jan 14 03:55:04.017: INFO: Exec stderr: "" + Jan 14 03:55:04.017: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6262 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jan 14 03:55:04.017: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + Jan 14 03:55:04.018: INFO: ExecWithOptions: Clientset creation + Jan 14 03:55:04.018: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-6262/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true) + Jan 14 03:55:04.067: INFO: Exec stderr: "" + Jan 14 03:55:04.067: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6262 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jan 14 03:55:04.067: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + Jan 14 03:55:04.067: INFO: ExecWithOptions: Clientset creation + Jan 14 03:55:04.067: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-6262/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-2&container=busybox-2&stderr=true&stdout=true) + Jan 14 03:55:04.120: INFO: Exec stderr: "" + STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount 01/14/23 03:55:04.12 + Jan 14 03:55:04.120: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6262 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jan 14 03:55:04.120: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + Jan 14 03:55:04.121: INFO: ExecWithOptions: Clientset creation + 
Jan 14 03:55:04.121: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-6262/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-3&container=busybox-3&stderr=true&stdout=true) + Jan 14 03:55:04.168: INFO: Exec stderr: "" + Jan 14 03:55:04.168: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6262 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jan 14 03:55:04.168: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + Jan 14 03:55:04.168: INFO: ExecWithOptions: Clientset creation + Jan 14 03:55:04.168: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-6262/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-3&container=busybox-3&stderr=true&stdout=true) + Jan 14 03:55:04.219: INFO: Exec stderr: "" + STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true 01/14/23 03:55:04.22 + Jan 14 03:55:04.220: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6262 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jan 14 03:55:04.220: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + Jan 14 03:55:04.220: INFO: ExecWithOptions: Clientset creation + Jan 14 03:55:04.220: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-6262/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true) + Jan 14 03:55:04.266: INFO: Exec stderr: "" + Jan 14 03:55:04.266: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6262 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jan 14 03:55:04.266: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + Jan 14 03:55:04.267: INFO: ExecWithOptions: Clientset creation + Jan 14 03:55:04.267: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-6262/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true) + Jan 14 03:55:04.318: INFO: Exec stderr: "" + Jan 14 03:55:04.318: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6262 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jan 14 03:55:04.318: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + Jan 14 03:55:04.318: INFO: ExecWithOptions: Clientset creation + Jan 14 03:55:04.318: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-6262/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true) + Jan 14 03:55:04.359: INFO: Exec stderr: "" + Jan 14 03:55:04.360: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6262 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jan 14 03:55:04.360: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + Jan 14 03:55:04.360: INFO: ExecWithOptions: Clientset 
creation + Jan 14 03:55:04.360: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/e2e-kubelet-etc-hosts-6262/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-2&container=busybox-2&stderr=true&stdout=true) + Jan 14 03:55:04.404: INFO: Exec stderr: "" + [AfterEach] [sig-node] KubeletManagedEtcHosts + test/e2e/framework/node/init/init.go:32 + Jan 14 03:55:04.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] KubeletManagedEtcHosts + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] KubeletManagedEtcHosts + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] KubeletManagedEtcHosts + tear down framework | framework.go:193 + STEP: Destroying namespace "e2e-kubelet-etc-hosts-6262" for this suite. 01/14/23 03:55:04.41 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + should include custom resource definition resources in discovery documents [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:198 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:55:04.416 +Jan 14 03:55:04.416: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename custom-resource-definition 01/14/23 03:55:04.417 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:55:04.439 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:55:04.441 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[It] should include custom resource definition resources in discovery documents [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:198 +STEP: fetching the /apis discovery document 01/14/23 03:55:04.444 +STEP: finding the apiextensions.k8s.io API group in the /apis discovery document 01/14/23 03:55:04.445 +STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document 01/14/23 03:55:04.445 +STEP: fetching the /apis/apiextensions.k8s.io discovery document 01/14/23 03:55:04.445 +STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document 01/14/23 03:55:04.446 +STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document 01/14/23 03:55:04.446 +STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document 01/14/23 03:55:04.447 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Jan 14 03:55:04.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "custom-resource-definition-977" for this suite. 
01/14/23 03:55:04.456 +------------------------------ +• [0.045 seconds] +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should include custom resource definition resources in discovery documents [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:198 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:55:04.416 + Jan 14 03:55:04.416: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename custom-resource-definition 01/14/23 03:55:04.417 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:55:04.439 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:55:04.441 + [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [It] should include custom resource definition resources in discovery documents [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:198 + STEP: fetching the /apis discovery document 01/14/23 03:55:04.444 + STEP: finding the apiextensions.k8s.io API group in the /apis discovery document 01/14/23 03:55:04.445 + STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document 01/14/23 03:55:04.445 + STEP: fetching the /apis/apiextensions.k8s.io discovery document 01/14/23 03:55:04.445 + STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document 01/14/23 03:55:04.446 + STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document 01/14/23 03:55:04.446 + STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document 01/14/23 03:55:04.447 + [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Jan 14 03:55:04.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "custom-resource-definition-977" for this suite. 
01/14/23 03:55:04.456 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:249 +[BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:55:04.463 +Jan 14 03:55:04.463: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename projected 01/14/23 03:55:04.463 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:55:04.513 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:55:04.516 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 +[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:249 +STEP: Creating a pod to test downward API volume plugin 01/14/23 03:55:04.518 +Jan 14 03:55:04.534: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c3695c05-4964-4fe1-9fd7-b6526b3aafea" in namespace "projected-3441" to be "Succeeded or Failed" +Jan 14 03:55:04.617: INFO: Pod "downwardapi-volume-c3695c05-4964-4fe1-9fd7-b6526b3aafea": Phase="Pending", Reason="", readiness=false. Elapsed: 83.082333ms +Jan 14 03:55:06.622: INFO: Pod "downwardapi-volume-c3695c05-4964-4fe1-9fd7-b6526b3aafea": Phase="Running", Reason="", readiness=false. Elapsed: 2.087807311s +Jan 14 03:55:08.623: INFO: Pod "downwardapi-volume-c3695c05-4964-4fe1-9fd7-b6526b3aafea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.088817024s +STEP: Saw pod success 01/14/23 03:55:08.623 +Jan 14 03:55:08.623: INFO: Pod "downwardapi-volume-c3695c05-4964-4fe1-9fd7-b6526b3aafea" satisfied condition "Succeeded or Failed" +Jan 14 03:55:08.626: INFO: Trying to get logs from node 10.0.1.212 pod downwardapi-volume-c3695c05-4964-4fe1-9fd7-b6526b3aafea container client-container: +STEP: delete the pod 01/14/23 03:55:08.64 +Jan 14 03:55:08.653: INFO: Waiting for pod downwardapi-volume-c3695c05-4964-4fe1-9fd7-b6526b3aafea to disappear +Jan 14 03:55:08.656: INFO: Pod downwardapi-volume-c3695c05-4964-4fe1-9fd7-b6526b3aafea no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/node/init/init.go:32 +Jan 14 03:55:08.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-3441" for this suite. 
01/14/23 03:55:08.66 +------------------------------ +• [4.204 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:249 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:55:04.463 + Jan 14 03:55:04.463: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename projected 01/14/23 03:55:04.463 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:55:04.513 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:55:04.516 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 + [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:249 + STEP: Creating a pod to test downward API volume plugin 01/14/23 03:55:04.518 + Jan 14 03:55:04.534: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c3695c05-4964-4fe1-9fd7-b6526b3aafea" in namespace "projected-3441" to be "Succeeded or Failed" + Jan 14 03:55:04.617: INFO: Pod "downwardapi-volume-c3695c05-4964-4fe1-9fd7-b6526b3aafea": Phase="Pending", Reason="", readiness=false. Elapsed: 83.082333ms + Jan 14 03:55:06.622: INFO: Pod "downwardapi-volume-c3695c05-4964-4fe1-9fd7-b6526b3aafea": Phase="Running", Reason="", readiness=false. Elapsed: 2.087807311s + Jan 14 03:55:08.623: INFO: Pod "downwardapi-volume-c3695c05-4964-4fe1-9fd7-b6526b3aafea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.088817024s + STEP: Saw pod success 01/14/23 03:55:08.623 + Jan 14 03:55:08.623: INFO: Pod "downwardapi-volume-c3695c05-4964-4fe1-9fd7-b6526b3aafea" satisfied condition "Succeeded or Failed" + Jan 14 03:55:08.626: INFO: Trying to get logs from node 10.0.1.212 pod downwardapi-volume-c3695c05-4964-4fe1-9fd7-b6526b3aafea container client-container: + STEP: delete the pod 01/14/23 03:55:08.64 + Jan 14 03:55:08.653: INFO: Waiting for pod downwardapi-volume-c3695c05-4964-4fe1-9fd7-b6526b3aafea to disappear + Jan 14 03:55:08.656: INFO: Pod downwardapi-volume-c3695c05-4964-4fe1-9fd7-b6526b3aafea no longer exists + [AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/node/init/init.go:32 + Jan 14 03:55:08.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-3441" for this suite. 
01/14/23 03:55:08.66 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny pod and configmap creation [Conformance] + test/e2e/apimachinery/webhook.go:197 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:55:08.667 +Jan 14 03:55:08.667: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename webhook 01/14/23 03:55:08.668 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:55:08.681 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:55:08.683 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 01/14/23 03:55:08.695 +STEP: Create role binding to let webhook read extension-apiserver-authentication 01/14/23 03:55:09.61 +STEP: Deploying the webhook pod 01/14/23 03:55:09.615 +STEP: Wait for the deployment to be ready 01/14/23 03:55:09.626 +Jan 14 03:55:09.633: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 01/14/23 03:55:11.645 +STEP: Verifying the service has paired with the endpoint 01/14/23 03:55:11.653 +Jan 14 03:55:12.654: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny pod and configmap creation [Conformance] + test/e2e/apimachinery/webhook.go:197 +STEP: Registering the webhook via the AdmissionRegistration API 01/14/23 03:55:12.658 +STEP: create a pod that should be denied by the webhook 01/14/23 03:55:12.672 +STEP: create a pod that causes the webhook to hang 01/14/23 03:55:12.683 +STEP: create a configmap that should be denied by the webhook 01/14/23 03:55:22.691 +STEP: create a configmap that should be admitted by the webhook 01/14/23 03:55:22.711 +STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook 01/14/23 03:55:22.72 +STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook 01/14/23 03:55:22.728 +STEP: create a namespace that bypass the webhook 01/14/23 03:55:22.733 +STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace 01/14/23 03:55:22.74 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Jan 14 03:55:22.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "webhook-1632" for this suite. 01/14/23 03:55:22.804 +STEP: Destroying namespace "webhook-1632-markers" for this suite. 
01/14/23 03:55:22.811 +------------------------------ +• [SLOW TEST] [14.150 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should be able to deny pod and configmap creation [Conformance] + test/e2e/apimachinery/webhook.go:197 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:55:08.667 + Jan 14 03:55:08.667: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename webhook 01/14/23 03:55:08.668 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:55:08.681 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:55:08.683 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 01/14/23 03:55:08.695 + STEP: Create role binding to let webhook read extension-apiserver-authentication 01/14/23 03:55:09.61 + STEP: Deploying the webhook pod 01/14/23 03:55:09.615 + STEP: Wait for the deployment to be ready 01/14/23 03:55:09.626 + Jan 14 03:55:09.633: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 01/14/23 03:55:11.645 + STEP: Verifying the service has paired with the endpoint 01/14/23 03:55:11.653 + Jan 14 03:55:12.654: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should be able to deny pod and configmap creation [Conformance] + test/e2e/apimachinery/webhook.go:197 + STEP: Registering the webhook via the AdmissionRegistration API 01/14/23 03:55:12.658 + STEP: create a pod that should be denied by the webhook 01/14/23 03:55:12.672 + STEP: create a pod that causes the webhook to hang 01/14/23 03:55:12.683 + STEP: create a configmap that should be denied by the webhook 01/14/23 03:55:22.691 + STEP: create a configmap that should be admitted by the webhook 01/14/23 03:55:22.711 + STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook 01/14/23 03:55:22.72 + STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook 01/14/23 03:55:22.728 + STEP: create a namespace that bypass the webhook 01/14/23 03:55:22.733 + STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace 01/14/23 03:55:22.74 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Jan 14 03:55:22.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "webhook-1632" for this suite. 01/14/23 03:55:22.804 + STEP: Destroying namespace "webhook-1632-markers" for this suite. 
01/14/23 03:55:22.811 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:109 +[BeforeEach] [sig-storage] Projected configMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:55:22.818 +Jan 14 03:55:22.818: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename projected 01/14/23 03:55:22.819 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:55:22.833 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:55:22.836 +[BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:109 +STEP: Creating configMap with name projected-configmap-test-volume-map-eb24723f-915a-44d0-b7df-ae0076ee44bc 01/14/23 03:55:22.838 +STEP: Creating a pod to test consume configMaps 01/14/23 03:55:22.842 +Jan 14 03:55:22.851: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c278860f-7159-43c9-8421-3aeba77b1157" in namespace "projected-1113" to be "Succeeded or Failed" +Jan 14 03:55:22.854: INFO: Pod "pod-projected-configmaps-c278860f-7159-43c9-8421-3aeba77b1157": Phase="Pending", Reason="", readiness=false. Elapsed: 3.089027ms +Jan 14 03:55:24.859: INFO: Pod "pod-projected-configmaps-c278860f-7159-43c9-8421-3aeba77b1157": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008058504s +Jan 14 03:55:26.861: INFO: Pod "pod-projected-configmaps-c278860f-7159-43c9-8421-3aeba77b1157": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009243259s +STEP: Saw pod success 01/14/23 03:55:26.861 +Jan 14 03:55:26.861: INFO: Pod "pod-projected-configmaps-c278860f-7159-43c9-8421-3aeba77b1157" satisfied condition "Succeeded or Failed" +Jan 14 03:55:26.864: INFO: Trying to get logs from node 10.0.1.106 pod pod-projected-configmaps-c278860f-7159-43c9-8421-3aeba77b1157 container agnhost-container: +STEP: delete the pod 01/14/23 03:55:26.87 +Jan 14 03:55:26.881: INFO: Waiting for pod pod-projected-configmaps-c278860f-7159-43c9-8421-3aeba77b1157 to disappear +Jan 14 03:55:26.884: INFO: Pod pod-projected-configmaps-c278860f-7159-43c9-8421-3aeba77b1157 no longer exists +[AfterEach] [sig-storage] Projected configMap + test/e2e/framework/node/init/init.go:32 +Jan 14 03:55:26.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected configMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected configMap + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-1113" for this suite. 
01/14/23 03:55:26.888 +------------------------------ +• [4.075 seconds] +[sig-storage] Projected configMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:109 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected configMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:55:22.818 + Jan 14 03:55:22.818: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename projected 01/14/23 03:55:22.819 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:55:22.833 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:55:22.836 + [BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:109 + STEP: Creating configMap with name projected-configmap-test-volume-map-eb24723f-915a-44d0-b7df-ae0076ee44bc 01/14/23 03:55:22.838 + STEP: Creating a pod to test consume configMaps 01/14/23 03:55:22.842 + Jan 14 03:55:22.851: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c278860f-7159-43c9-8421-3aeba77b1157" in namespace "projected-1113" to be "Succeeded or Failed" + Jan 14 03:55:22.854: INFO: Pod "pod-projected-configmaps-c278860f-7159-43c9-8421-3aeba77b1157": Phase="Pending", Reason="", readiness=false. Elapsed: 3.089027ms + Jan 14 03:55:24.859: INFO: Pod "pod-projected-configmaps-c278860f-7159-43c9-8421-3aeba77b1157": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008058504s + Jan 14 03:55:26.861: INFO: Pod "pod-projected-configmaps-c278860f-7159-43c9-8421-3aeba77b1157": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009243259s + STEP: Saw pod success 01/14/23 03:55:26.861 + Jan 14 03:55:26.861: INFO: Pod "pod-projected-configmaps-c278860f-7159-43c9-8421-3aeba77b1157" satisfied condition "Succeeded or Failed" + Jan 14 03:55:26.864: INFO: Trying to get logs from node 10.0.1.106 pod pod-projected-configmaps-c278860f-7159-43c9-8421-3aeba77b1157 container agnhost-container: + STEP: delete the pod 01/14/23 03:55:26.87 + Jan 14 03:55:26.881: INFO: Waiting for pod pod-projected-configmaps-c278860f-7159-43c9-8421-3aeba77b1157 to disappear + Jan 14 03:55:26.884: INFO: Pod pod-projected-configmaps-c278860f-7159-43c9-8421-3aeba77b1157 no longer exists + [AfterEach] [sig-storage] Projected configMap + test/e2e/framework/node/init/init.go:32 + Jan 14 03:55:26.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected configMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected configMap + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-1113" for this suite. 01/14/23 03:55:26.888 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a secret. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:160 +[BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:55:26.893 +Jan 14 03:55:26.894: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename resourcequota 01/14/23 03:55:26.894 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:55:26.908 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:55:26.91 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 +[It] should create a ResourceQuota and capture the life of a secret. [Conformance] + test/e2e/apimachinery/resource_quota.go:160 +STEP: Discovering how many secrets are in namespace by default 01/14/23 03:55:26.913 +STEP: Counting existing ResourceQuota 01/14/23 03:55:31.916 +STEP: Creating a ResourceQuota 01/14/23 03:55:36.92 +STEP: Ensuring resource quota status is calculated 01/14/23 03:55:36.926 +STEP: Creating a Secret 01/14/23 03:55:38.931 +STEP: Ensuring resource quota status captures secret creation 01/14/23 03:55:38.941 +STEP: Deleting a secret 01/14/23 03:55:40.945 +STEP: Ensuring resource quota status released usage 01/14/23 03:55:40.951 +[AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/node/init/init.go:32 +Jan 14 03:55:42.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + tear down framework | framework.go:193 +STEP: Destroying namespace "resourcequota-4030" for this suite. 01/14/23 03:55:42.962 +------------------------------ +• [SLOW TEST] [16.073 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a secret. [Conformance] + test/e2e/apimachinery/resource_quota.go:160 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:55:26.893 + Jan 14 03:55:26.894: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename resourcequota 01/14/23 03:55:26.894 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:55:26.908 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:55:26.91 + [BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 + [It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:160 + STEP: Discovering how many secrets are in namespace by default 01/14/23 03:55:26.913 + STEP: Counting existing ResourceQuota 01/14/23 03:55:31.916 + STEP: Creating a ResourceQuota 01/14/23 03:55:36.92 + STEP: Ensuring resource quota status is calculated 01/14/23 03:55:36.926 + STEP: Creating a Secret 01/14/23 03:55:38.931 + STEP: Ensuring resource quota status captures secret creation 01/14/23 03:55:38.941 + STEP: Deleting a secret 01/14/23 03:55:40.945 + STEP: Ensuring resource quota status released usage 01/14/23 03:55:40.951 + [AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/node/init/init.go:32 + Jan 14 03:55:42.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + tear down framework | framework.go:193 + STEP: Destroying namespace "resourcequota-4030" for this suite. 01/14/23 03:55:42.962 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-node] PodTemplates + should delete a collection of pod templates [Conformance] + test/e2e/common/node/podtemplates.go:122 +[BeforeEach] [sig-node] PodTemplates + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:55:42.967 +Jan 14 03:55:42.967: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename podtemplate 01/14/23 03:55:42.968 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:55:42.995 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:55:42.998 +[BeforeEach] [sig-node] PodTemplates + test/e2e/framework/metrics/init/init.go:31 +[It] should delete a collection of pod templates [Conformance] + test/e2e/common/node/podtemplates.go:122 +STEP: Create set of pod templates 01/14/23 03:55:43 +Jan 14 03:55:43.004: INFO: created test-podtemplate-1 +Jan 14 03:55:43.008: INFO: created test-podtemplate-2 +Jan 14 03:55:43.011: INFO: created test-podtemplate-3 +STEP: get a list of pod templates with a label in the current namespace 01/14/23 03:55:43.011 +STEP: delete collection of pod templates 01/14/23 03:55:43.013 +Jan 14 03:55:43.013: INFO: requesting DeleteCollection of pod templates +STEP: check that the list of pod templates matches the requested quantity 01/14/23 03:55:43.027 +Jan 14 03:55:43.027: INFO: requesting list of pod templates to confirm quantity +[AfterEach] [sig-node] PodTemplates + test/e2e/framework/node/init/init.go:32 +Jan 14 03:55:43.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] PodTemplates + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] PodTemplates + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] PodTemplates + tear down framework | framework.go:193 +STEP: Destroying namespace "podtemplate-3748" for this suite. 
01/14/23 03:55:43.034 +------------------------------ +• [0.071 seconds] +[sig-node] PodTemplates +test/e2e/common/node/framework.go:23 + should delete a collection of pod templates [Conformance] + test/e2e/common/node/podtemplates.go:122 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] PodTemplates + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:55:42.967 + Jan 14 03:55:42.967: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename podtemplate 01/14/23 03:55:42.968 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:55:42.995 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:55:42.998 + [BeforeEach] [sig-node] PodTemplates + test/e2e/framework/metrics/init/init.go:31 + [It] should delete a collection of pod templates [Conformance] + test/e2e/common/node/podtemplates.go:122 + STEP: Create set of pod templates 01/14/23 03:55:43 + Jan 14 03:55:43.004: INFO: created test-podtemplate-1 + Jan 14 03:55:43.008: INFO: created test-podtemplate-2 + Jan 14 03:55:43.011: INFO: created test-podtemplate-3 + STEP: get a list of pod templates with a label in the current namespace 01/14/23 03:55:43.011 + STEP: delete collection of pod templates 01/14/23 03:55:43.013 + Jan 14 03:55:43.013: INFO: requesting DeleteCollection of pod templates + STEP: check that the list of pod templates matches the requested quantity 01/14/23 03:55:43.027 + Jan 14 03:55:43.027: INFO: requesting list of pod templates to confirm quantity + [AfterEach] [sig-node] PodTemplates + test/e2e/framework/node/init/init.go:32 + Jan 14 03:55:43.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] PodTemplates + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] PodTemplates + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] PodTemplates + tear down framework | framework.go:193 + STEP: Destroying namespace "podtemplate-3748" for this suite. 
01/14/23 03:55:43.034 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + RollingUpdateDeployment should delete old pods and create new ones [Conformance] + test/e2e/apps/deployment.go:105 +[BeforeEach] [sig-apps] Deployment + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:55:43.04 +Jan 14 03:55:43.040: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename deployment 01/14/23 03:55:43.04 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:55:43.053 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:55:43.055 +[BeforeEach] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 +[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] + test/e2e/apps/deployment.go:105 +Jan 14 03:55:43.057: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) +Jan 14 03:55:43.066: INFO: Pod name sample-pod: Found 0 pods out of 1 +Jan 14 03:55:48.073: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running 01/14/23 03:55:48.073 +Jan 14 03:55:48.074: INFO: Creating deployment "test-rolling-update-deployment" +Jan 14 03:55:48.080: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has +Jan 14 03:55:48.091: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created +Jan 14 03:55:50.098: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected +Jan 14 03:55:50.100: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) +[AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 +Jan 14 03:55:50.108: INFO: Deployment "test-rolling-update-deployment": +&Deployment{ObjectMeta:{test-rolling-update-deployment deployment-7855 432e49d4-85ad-4ae7-825f-3d4d9e637f87 422593 1 2023-01-14 03:55:48 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2023-01-14 03:55:48 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 03:55:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002d1c7e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-01-14 03:55:48 +0000 UTC,LastTransitionTime:2023-01-14 03:55:48 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-7549d9f46d" has successfully progressed.,LastUpdateTime:2023-01-14 03:55:49 +0000 UTC,LastTransitionTime:2023-01-14 03:55:48 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Jan 14 03:55:50.111: INFO: New ReplicaSet "test-rolling-update-deployment-7549d9f46d" of Deployment "test-rolling-update-deployment": +&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-7549d9f46d deployment-7855 019a68bf-0272-4be9-8ea2-6739ff3534d4 422583 1 2023-01-14 03:55:48 +0000 UTC map[name:sample-pod pod-template-hash:7549d9f46d] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 432e49d4-85ad-4ae7-825f-3d4d9e637f87 0xc002d1d317 0xc002d1d318}] [] [{kube-controller-manager Update apps/v1 2023-01-14 03:55:48 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"432e49d4-85ad-4ae7-825f-3d4d9e637f87\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 03:55:49 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 7549d9f46d,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:7549d9f46d] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002d1d3c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Jan 14 03:55:50.111: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": +Jan 14 03:55:50.111: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-7855 d3ebf579-751f-4bd8-82f7-83094a17286a 422592 2 2023-01-14 03:55:43 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 432e49d4-85ad-4ae7-825f-3d4d9e637f87 0xc002d1ced7 0xc002d1ced8}] [] [{e2e.test Update apps/v1 2023-01-14 03:55:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 03:55:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"432e49d4-85ad-4ae7-825f-3d4d9e637f87\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2023-01-14 03:55:49 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002d1d2a8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Jan 14 03:55:50.114: INFO: Pod "test-rolling-update-deployment-7549d9f46d-s9lg9" is available: +&Pod{ObjectMeta:{test-rolling-update-deployment-7549d9f46d-s9lg9 test-rolling-update-deployment-7549d9f46d- deployment-7855 b8656b8e-ca7b-4983-bf67-9e20b95c3a3b 422582 0 2023-01-14 03:55:48 +0000 UTC map[name:sample-pod pod-template-hash:7549d9f46d] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.1.74" + ], + "mac": "c6:52:87:3e:39:a3", + "default": true, + "dns": {} +}]] [{apps/v1 ReplicaSet test-rolling-update-deployment-7549d9f46d 019a68bf-0272-4be9-8ea2-6739ff3534d4 0xc0037c9ae7 0xc0037c9ae8}] [] [{kube-controller-manager Update v1 2023-01-14 03:55:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"019a68bf-0272-4be9-8ea2-6739ff3534d4\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {multus Update v1 2023-01-14 03:55:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}} status} {kubelet Update v1 2023-01-14 03:55:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.52.1.74\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8g5rw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8g5rw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.99,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},Reso
urceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 03:55:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 03:55:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 03:55:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 03:55:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.99,PodIP:10.52.1.74,StartTime:2023-01-14 03:55:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 03:55:48 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,ImageID:registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:containerd://4cf5954e94a0c69f833694df2a53f234e1ecc08d74826dcee82b7a7d0f3d260e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.52.1.74,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + test/e2e/framework/node/init/init.go:32 +Jan 14 03:55:50.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] Deployment + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] Deployment + tear down framework | framework.go:193 +STEP: Destroying namespace "deployment-7855" for this suite. 
01/14/23 03:55:50.118 +------------------------------ +• [SLOW TEST] [7.084 seconds] +[sig-apps] Deployment +test/e2e/apps/framework.go:23 + RollingUpdateDeployment should delete old pods and create new ones [Conformance] + test/e2e/apps/deployment.go:105 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Deployment + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:55:43.04 + Jan 14 03:55:43.040: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename deployment 01/14/23 03:55:43.04 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:55:43.053 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:55:43.055 + [BeforeEach] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 + [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] + test/e2e/apps/deployment.go:105 + Jan 14 03:55:43.057: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) + Jan 14 03:55:43.066: INFO: Pod name sample-pod: Found 0 pods out of 1 + Jan 14 03:55:48.073: INFO: Pod name sample-pod: Found 1 pods out of 1 + STEP: ensuring each pod is running 01/14/23 03:55:48.073 + Jan 14 03:55:48.074: INFO: Creating deployment "test-rolling-update-deployment" + Jan 14 03:55:48.080: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has + Jan 14 03:55:48.091: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created + Jan 14 03:55:50.098: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected + Jan 14 03:55:50.100: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) + [AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 + Jan 14 03:55:50.108: INFO: Deployment "test-rolling-update-deployment": + &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-7855 432e49d4-85ad-4ae7-825f-3d4d9e637f87 422593 1 2023-01-14 03:55:48 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2023-01-14 03:55:48 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 03:55:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002d1c7e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-01-14 03:55:48 +0000 UTC,LastTransitionTime:2023-01-14 03:55:48 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-7549d9f46d" has successfully progressed.,LastUpdateTime:2023-01-14 03:55:49 +0000 UTC,LastTransitionTime:2023-01-14 03:55:48 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + + Jan 14 03:55:50.111: INFO: New ReplicaSet "test-rolling-update-deployment-7549d9f46d" of Deployment "test-rolling-update-deployment": + &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-7549d9f46d deployment-7855 019a68bf-0272-4be9-8ea2-6739ff3534d4 422583 1 2023-01-14 03:55:48 +0000 UTC map[name:sample-pod pod-template-hash:7549d9f46d] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 432e49d4-85ad-4ae7-825f-3d4d9e637f87 0xc002d1d317 0xc002d1d318}] [] [{kube-controller-manager Update apps/v1 2023-01-14 03:55:48 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"432e49d4-85ad-4ae7-825f-3d4d9e637f87\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 03:55:49 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 7549d9f46d,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:7549d9f46d] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002d1d3c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} + Jan 14 03:55:50.111: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": + Jan 14 03:55:50.111: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-7855 d3ebf579-751f-4bd8-82f7-83094a17286a 422592 2 2023-01-14 03:55:43 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 432e49d4-85ad-4ae7-825f-3d4d9e637f87 0xc002d1ced7 0xc002d1ced8}] [] [{e2e.test Update apps/v1 2023-01-14 03:55:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 03:55:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"432e49d4-85ad-4ae7-825f-3d4d9e637f87\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2023-01-14 03:55:49 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002d1d2a8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + Jan 14 03:55:50.114: INFO: Pod "test-rolling-update-deployment-7549d9f46d-s9lg9" is available: + &Pod{ObjectMeta:{test-rolling-update-deployment-7549d9f46d-s9lg9 test-rolling-update-deployment-7549d9f46d- deployment-7855 b8656b8e-ca7b-4983-bf67-9e20b95c3a3b 422582 0 2023-01-14 03:55:48 +0000 UTC map[name:sample-pod pod-template-hash:7549d9f46d] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.1.74" + ], + "mac": "c6:52:87:3e:39:a3", + "default": true, + "dns": {} + }]] [{apps/v1 ReplicaSet test-rolling-update-deployment-7549d9f46d 019a68bf-0272-4be9-8ea2-6739ff3534d4 0xc0037c9ae7 0xc0037c9ae8}] [] [{kube-controller-manager Update v1 2023-01-14 03:55:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"019a68bf-0272-4be9-8ea2-6739ff3534d4\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {multus Update v1 2023-01-14 03:55:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}} status} {kubelet Update v1 2023-01-14 03:55:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.52.1.74\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8g5rw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8g5rw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.99,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},Reso
urceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 03:55:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 03:55:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 03:55:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 03:55:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.99,PodIP:10.52.1.74,StartTime:2023-01-14 03:55:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 03:55:48 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,ImageID:registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:containerd://4cf5954e94a0c69f833694df2a53f234e1ecc08d74826dcee82b7a7d0f3d260e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.52.1.74,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + [AfterEach] [sig-apps] Deployment + test/e2e/framework/node/init/init.go:32 + Jan 14 03:55:50.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] Deployment + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] Deployment + tear down framework | framework.go:193 + STEP: Destroying namespace "deployment-7855" for this suite. 
01/14/23 03:55:50.118 + << End Captured GinkgoWriter Output +------------------------------ +[sig-network] Proxy version v1 + should proxy through a service and a pod [Conformance] + test/e2e/network/proxy.go:101 +[BeforeEach] version v1 + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:55:50.123 +Jan 14 03:55:50.124: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename proxy 01/14/23 03:55:50.124 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:55:50.138 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:55:50.141 +[BeforeEach] version v1 + test/e2e/framework/metrics/init/init.go:31 +[It] should proxy through a service and a pod [Conformance] + test/e2e/network/proxy.go:101 +STEP: starting an echo server on multiple ports 01/14/23 03:55:50.151 +STEP: creating replication controller proxy-service-w4hj8 in namespace proxy-485 01/14/23 03:55:50.151 +I0114 03:55:50.159611 25 runners.go:193] Created replication controller with name: proxy-service-w4hj8, namespace: proxy-485, replica count: 1 +I0114 03:55:51.211020 25 runners.go:193] proxy-service-w4hj8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0114 03:55:52.211801 25 runners.go:193] proxy-service-w4hj8 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Jan 14 03:55:52.215: INFO: setup took 2.071554616s, starting test cases +STEP: running 16 cases, 20 attempts per case, 320 total attempts 01/14/23 03:55:52.215 +Jan 14 03:55:52.219: INFO: (0) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:162/proxy/: bar (200; 3.885102ms) +Jan 14 03:55:52.220: INFO: (0) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:162/proxy/: bar (200; 4.689077ms) +Jan 14 03:55:52.220: INFO: (0) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z/proxy/: test (200; 4.577596ms) +Jan 14 03:55:52.220: INFO: (0) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:160/proxy/: foo (200; 4.662505ms) +Jan 14 03:55:52.220: INFO: (0) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:160/proxy/: foo (200; 4.907148ms) +Jan 14 03:55:52.220: INFO: (0) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:1080/proxy/: testt... (200; 5.216373ms) +Jan 14 03:55:52.221: INFO: (0) /api/v1/namespaces/proxy-485/services/http:proxy-service-w4hj8:portname2/proxy/: bar (200; 5.758776ms) +Jan 14 03:55:52.222: INFO: (0) /api/v1/namespaces/proxy-485/services/http:proxy-service-w4hj8:portname1/proxy/: foo (200; 6.517386ms) +Jan 14 03:55:52.222: INFO: (0) /api/v1/namespaces/proxy-485/services/proxy-service-w4hj8:portname2/proxy/: bar (200; 6.549553ms) +Jan 14 03:55:52.222: INFO: (0) /api/v1/namespaces/proxy-485/services/proxy-service-w4hj8:portname1/proxy/: foo (200; 6.548494ms) +Jan 14 03:55:52.224: INFO: (0) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:443/proxy/: test (200; 3.800232ms) +Jan 14 03:55:52.229: INFO: (1) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:1080/proxy/: testt... 
(200; 4.382503ms) +Jan 14 03:55:52.230: INFO: (1) /api/v1/namespaces/proxy-485/services/proxy-service-w4hj8:portname2/proxy/: bar (200; 4.851194ms) +Jan 14 03:55:52.231: INFO: (1) /api/v1/namespaces/proxy-485/services/https:proxy-service-w4hj8:tlsportname2/proxy/: tls qux (200; 5.828344ms) +Jan 14 03:55:52.231: INFO: (1) /api/v1/namespaces/proxy-485/services/proxy-service-w4hj8:portname1/proxy/: foo (200; 5.809312ms) +Jan 14 03:55:52.231: INFO: (1) /api/v1/namespaces/proxy-485/services/https:proxy-service-w4hj8:tlsportname1/proxy/: tls baz (200; 5.860369ms) +Jan 14 03:55:52.231: INFO: (1) /api/v1/namespaces/proxy-485/services/http:proxy-service-w4hj8:portname2/proxy/: bar (200; 5.892479ms) +Jan 14 03:55:52.231: INFO: (1) /api/v1/namespaces/proxy-485/services/http:proxy-service-w4hj8:portname1/proxy/: foo (200; 5.789861ms) +Jan 14 03:55:52.234: INFO: (2) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z/proxy/: test (200; 2.856238ms) +Jan 14 03:55:52.235: INFO: (2) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:160/proxy/: foo (200; 3.696197ms) +Jan 14 03:55:52.235: INFO: (2) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:1080/proxy/: testt... (200; 3.95574ms) +Jan 14 03:55:52.235: INFO: (2) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:162/proxy/: bar (200; 4.238293ms) +Jan 14 03:55:52.235: INFO: (2) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:160/proxy/: foo (200; 4.329837ms) +Jan 14 03:55:52.235: INFO: (2) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:443/proxy/: test (200; 2.936048ms) +Jan 14 03:55:52.241: INFO: (3) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:462/proxy/: tls qux (200; 3.982021ms) +Jan 14 03:55:52.241: INFO: (3) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:1080/proxy/: t... (200; 3.975426ms) +Jan 14 03:55:52.241: INFO: (3) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:443/proxy/: testtest (200; 3.532302ms) +Jan 14 03:55:52.247: INFO: (4) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:460/proxy/: tls baz (200; 3.496221ms) +Jan 14 03:55:52.247: INFO: (4) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:462/proxy/: tls qux (200; 3.549094ms) +Jan 14 03:55:52.247: INFO: (4) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:162/proxy/: bar (200; 3.776933ms) +Jan 14 03:55:52.247: INFO: (4) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:1080/proxy/: t... (200; 3.871133ms) +Jan 14 03:55:52.247: INFO: (4) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:443/proxy/: testtesttest (200; 3.847389ms) +Jan 14 03:55:52.254: INFO: (5) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:460/proxy/: tls baz (200; 3.911244ms) +Jan 14 03:55:52.254: INFO: (5) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:1080/proxy/: t... 
(200; 3.940657ms) +Jan 14 03:55:52.254: INFO: (5) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:160/proxy/: foo (200; 3.97215ms) +Jan 14 03:55:52.254: INFO: (5) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:462/proxy/: tls qux (200; 4.068841ms) +Jan 14 03:55:52.254: INFO: (5) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:160/proxy/: foo (200; 4.18409ms) +Jan 14 03:55:52.255: INFO: (5) /api/v1/namespaces/proxy-485/services/https:proxy-service-w4hj8:tlsportname1/proxy/: tls baz (200; 5.05576ms) +Jan 14 03:55:52.256: INFO: (5) /api/v1/namespaces/proxy-485/services/http:proxy-service-w4hj8:portname2/proxy/: bar (200; 5.953281ms) +Jan 14 03:55:52.256: INFO: (5) /api/v1/namespaces/proxy-485/services/proxy-service-w4hj8:portname2/proxy/: bar (200; 6.163706ms) +Jan 14 03:55:52.256: INFO: (5) /api/v1/namespaces/proxy-485/services/https:proxy-service-w4hj8:tlsportname2/proxy/: tls qux (200; 6.089808ms) +Jan 14 03:55:52.256: INFO: (5) /api/v1/namespaces/proxy-485/services/http:proxy-service-w4hj8:portname1/proxy/: foo (200; 6.112951ms) +Jan 14 03:55:52.256: INFO: (5) /api/v1/namespaces/proxy-485/services/proxy-service-w4hj8:portname1/proxy/: foo (200; 6.228639ms) +Jan 14 03:55:52.260: INFO: (6) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:160/proxy/: foo (200; 3.551064ms) +Jan 14 03:55:52.260: INFO: (6) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:162/proxy/: bar (200; 3.569314ms) +Jan 14 03:55:52.260: INFO: (6) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:462/proxy/: tls qux (200; 3.71359ms) +Jan 14 03:55:52.260: INFO: (6) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:160/proxy/: foo (200; 3.790086ms) +Jan 14 03:55:52.260: INFO: (6) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z/proxy/: test (200; 3.758602ms) +Jan 14 03:55:52.260: INFO: (6) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:162/proxy/: bar (200; 3.924903ms) +Jan 14 03:55:52.260: INFO: (6) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:460/proxy/: tls baz (200; 3.899471ms) +Jan 14 03:55:52.260: INFO: (6) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:1080/proxy/: t... (200; 3.916816ms) +Jan 14 03:55:52.260: INFO: (6) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:1080/proxy/: testt... (200; 2.715468ms) +Jan 14 03:55:52.266: INFO: (7) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z/proxy/: test (200; 3.611047ms) +Jan 14 03:55:52.266: INFO: (7) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:462/proxy/: tls qux (200; 3.63667ms) +Jan 14 03:55:52.266: INFO: (7) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:1080/proxy/: testtestt... 
(200; 3.857945ms) +Jan 14 03:55:52.272: INFO: (8) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:462/proxy/: tls qux (200; 3.839911ms) +Jan 14 03:55:52.272: INFO: (8) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z/proxy/: test (200; 3.80636ms) +Jan 14 03:55:52.272: INFO: (8) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:160/proxy/: foo (200; 3.870512ms) +Jan 14 03:55:52.272: INFO: (8) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:460/proxy/: tls baz (200; 4.105512ms) +Jan 14 03:55:52.272: INFO: (8) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:162/proxy/: bar (200; 4.092094ms) +Jan 14 03:55:52.273: INFO: (8) /api/v1/namespaces/proxy-485/services/http:proxy-service-w4hj8:portname2/proxy/: bar (200; 4.810067ms) +Jan 14 03:55:52.274: INFO: (8) /api/v1/namespaces/proxy-485/services/proxy-service-w4hj8:portname2/proxy/: bar (200; 5.816914ms) +Jan 14 03:55:52.274: INFO: (8) /api/v1/namespaces/proxy-485/services/http:proxy-service-w4hj8:portname1/proxy/: foo (200; 5.896292ms) +Jan 14 03:55:52.274: INFO: (8) /api/v1/namespaces/proxy-485/services/proxy-service-w4hj8:portname1/proxy/: foo (200; 5.840136ms) +Jan 14 03:55:52.274: INFO: (8) /api/v1/namespaces/proxy-485/services/https:proxy-service-w4hj8:tlsportname1/proxy/: tls baz (200; 5.949686ms) +Jan 14 03:55:52.274: INFO: (8) /api/v1/namespaces/proxy-485/services/https:proxy-service-w4hj8:tlsportname2/proxy/: tls qux (200; 5.995433ms) +Jan 14 03:55:52.277: INFO: (9) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z/proxy/: test (200; 2.911619ms) +Jan 14 03:55:52.278: INFO: (9) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:1080/proxy/: testt... (200; 4.193437ms) +Jan 14 03:55:52.278: INFO: (9) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:462/proxy/: tls qux (200; 4.23614ms) +Jan 14 03:55:52.278: INFO: (9) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:160/proxy/: foo (200; 4.266083ms) +Jan 14 03:55:52.279: INFO: (9) /api/v1/namespaces/proxy-485/services/http:proxy-service-w4hj8:portname2/proxy/: bar (200; 5.167197ms) +Jan 14 03:55:52.280: INFO: (9) /api/v1/namespaces/proxy-485/services/https:proxy-service-w4hj8:tlsportname2/proxy/: tls qux (200; 6.11762ms) +Jan 14 03:55:52.280: INFO: (9) /api/v1/namespaces/proxy-485/services/proxy-service-w4hj8:portname2/proxy/: bar (200; 6.040679ms) +Jan 14 03:55:52.280: INFO: (9) /api/v1/namespaces/proxy-485/services/proxy-service-w4hj8:portname1/proxy/: foo (200; 6.090302ms) +Jan 14 03:55:52.280: INFO: (9) /api/v1/namespaces/proxy-485/services/http:proxy-service-w4hj8:portname1/proxy/: foo (200; 6.081358ms) +Jan 14 03:55:52.280: INFO: (9) /api/v1/namespaces/proxy-485/services/https:proxy-service-w4hj8:tlsportname1/proxy/: tls baz (200; 6.113475ms) +Jan 14 03:55:52.283: INFO: (10) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:162/proxy/: bar (200; 2.97589ms) +Jan 14 03:55:52.285: INFO: (10) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:160/proxy/: foo (200; 3.881958ms) +Jan 14 03:55:52.285: INFO: (10) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:1080/proxy/: t... 
(200; 3.924668ms) +Jan 14 03:55:52.285: INFO: (10) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:162/proxy/: bar (200; 3.913326ms) +Jan 14 03:55:52.285: INFO: (10) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:460/proxy/: tls baz (200; 3.971684ms) +Jan 14 03:55:52.285: INFO: (10) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:462/proxy/: tls qux (200; 4.136094ms) +Jan 14 03:55:52.285: INFO: (10) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:160/proxy/: foo (200; 4.14977ms) +Jan 14 03:55:52.285: INFO: (10) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:1080/proxy/: testtest (200; 4.110195ms) +Jan 14 03:55:52.285: INFO: (10) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:443/proxy/: test (200; 4.195564ms) +Jan 14 03:55:52.292: INFO: (11) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:460/proxy/: tls baz (200; 4.310562ms) +Jan 14 03:55:52.292: INFO: (11) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:462/proxy/: tls qux (200; 4.432361ms) +Jan 14 03:55:52.292: INFO: (11) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:1080/proxy/: testt... (200; 4.55516ms) +Jan 14 03:55:52.292: INFO: (11) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:162/proxy/: bar (200; 4.515644ms) +Jan 14 03:55:52.292: INFO: (11) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:160/proxy/: foo (200; 4.725134ms) +Jan 14 03:55:52.294: INFO: (11) /api/v1/namespaces/proxy-485/services/proxy-service-w4hj8:portname1/proxy/: foo (200; 5.877905ms) +Jan 14 03:55:52.294: INFO: (11) /api/v1/namespaces/proxy-485/services/https:proxy-service-w4hj8:tlsportname2/proxy/: tls qux (200; 6.644933ms) +Jan 14 03:55:52.294: INFO: (11) /api/v1/namespaces/proxy-485/services/http:proxy-service-w4hj8:portname1/proxy/: foo (200; 6.721899ms) +Jan 14 03:55:52.295: INFO: (11) /api/v1/namespaces/proxy-485/services/https:proxy-service-w4hj8:tlsportname1/proxy/: tls baz (200; 6.784484ms) +Jan 14 03:55:52.295: INFO: (11) /api/v1/namespaces/proxy-485/services/proxy-service-w4hj8:portname2/proxy/: bar (200; 7.00662ms) +Jan 14 03:55:52.295: INFO: (11) /api/v1/namespaces/proxy-485/services/http:proxy-service-w4hj8:portname2/proxy/: bar (200; 7.000094ms) +Jan 14 03:55:52.302: INFO: (12) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:160/proxy/: foo (200; 6.777238ms) +Jan 14 03:55:52.302: INFO: (12) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:443/proxy/: test (200; 7.295304ms) +Jan 14 03:55:52.302: INFO: (12) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:1080/proxy/: t... (200; 7.277339ms) +Jan 14 03:55:52.302: INFO: (12) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:1080/proxy/: testtestt... 
(200; 4.667887ms) +Jan 14 03:55:52.309: INFO: (13) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z/proxy/: test (200; 4.658528ms) +Jan 14 03:55:52.310: INFO: (13) /api/v1/namespaces/proxy-485/services/proxy-service-w4hj8:portname1/proxy/: foo (200; 5.568545ms) +Jan 14 03:55:52.311: INFO: (13) /api/v1/namespaces/proxy-485/services/http:proxy-service-w4hj8:portname1/proxy/: foo (200; 6.543754ms) +Jan 14 03:55:52.311: INFO: (13) /api/v1/namespaces/proxy-485/services/https:proxy-service-w4hj8:tlsportname1/proxy/: tls baz (200; 6.412376ms) +Jan 14 03:55:52.311: INFO: (13) /api/v1/namespaces/proxy-485/services/http:proxy-service-w4hj8:portname2/proxy/: bar (200; 6.480733ms) +Jan 14 03:55:52.311: INFO: (13) /api/v1/namespaces/proxy-485/services/proxy-service-w4hj8:portname2/proxy/: bar (200; 6.401443ms) +Jan 14 03:55:52.311: INFO: (13) /api/v1/namespaces/proxy-485/services/https:proxy-service-w4hj8:tlsportname2/proxy/: tls qux (200; 6.452559ms) +Jan 14 03:55:52.314: INFO: (14) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:462/proxy/: tls qux (200; 3.109488ms) +Jan 14 03:55:52.315: INFO: (14) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:162/proxy/: bar (200; 4.490118ms) +Jan 14 03:55:52.315: INFO: (14) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z/proxy/: test (200; 4.406781ms) +Jan 14 03:55:52.315: INFO: (14) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:1080/proxy/: testt... (200; 4.473213ms) +Jan 14 03:55:52.315: INFO: (14) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:443/proxy/: t... (200; 4.27771ms) +Jan 14 03:55:52.322: INFO: (15) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:160/proxy/: foo (200; 4.240023ms) +Jan 14 03:55:52.322: INFO: (15) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:162/proxy/: bar (200; 4.290771ms) +Jan 14 03:55:52.322: INFO: (15) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:443/proxy/: test (200; 4.3478ms) +Jan 14 03:55:52.322: INFO: (15) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:1080/proxy/: testtestt... (200; 5.271977ms) +Jan 14 03:55:52.330: INFO: (16) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:462/proxy/: tls qux (200; 5.349774ms) +Jan 14 03:55:52.330: INFO: (16) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:162/proxy/: bar (200; 5.404002ms) +Jan 14 03:55:52.330: INFO: (16) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z/proxy/: test (200; 5.380278ms) +Jan 14 03:55:52.330: INFO: (16) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:443/proxy/: t... (200; 4.106206ms) +Jan 14 03:55:52.336: INFO: (17) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:462/proxy/: tls qux (200; 4.029801ms) +Jan 14 03:55:52.336: INFO: (17) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z/proxy/: test (200; 4.033625ms) +Jan 14 03:55:52.336: INFO: (17) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:1080/proxy/: testt... 
(200; 3.600456ms) +Jan 14 03:55:52.342: INFO: (18) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:160/proxy/: foo (200; 3.644246ms) +Jan 14 03:55:52.342: INFO: (18) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:462/proxy/: tls qux (200; 4.099932ms) +Jan 14 03:55:52.342: INFO: (18) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:1080/proxy/: testtest (200; 4.084495ms) +Jan 14 03:55:52.343: INFO: (18) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:460/proxy/: tls baz (200; 4.280102ms) +Jan 14 03:55:52.343: INFO: (18) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:162/proxy/: bar (200; 4.461077ms) +Jan 14 03:55:52.343: INFO: (18) /api/v1/namespaces/proxy-485/services/proxy-service-w4hj8:portname2/proxy/: bar (200; 4.456545ms) +Jan 14 03:55:52.343: INFO: (18) /api/v1/namespaces/proxy-485/services/http:proxy-service-w4hj8:portname2/proxy/: bar (200; 4.962124ms) +Jan 14 03:55:52.344: INFO: (18) /api/v1/namespaces/proxy-485/services/http:proxy-service-w4hj8:portname1/proxy/: foo (200; 5.678559ms) +Jan 14 03:55:52.344: INFO: (18) /api/v1/namespaces/proxy-485/services/https:proxy-service-w4hj8:tlsportname2/proxy/: tls qux (200; 5.77052ms) +Jan 14 03:55:52.344: INFO: (18) /api/v1/namespaces/proxy-485/services/proxy-service-w4hj8:portname1/proxy/: foo (200; 5.847648ms) +Jan 14 03:55:52.344: INFO: (18) /api/v1/namespaces/proxy-485/services/https:proxy-service-w4hj8:tlsportname1/proxy/: tls baz (200; 5.831597ms) +Jan 14 03:55:52.348: INFO: (19) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:162/proxy/: bar (200; 3.551521ms) +Jan 14 03:55:52.348: INFO: (19) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:1080/proxy/: t... (200; 3.947535ms) +Jan 14 03:55:52.348: INFO: (19) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:160/proxy/: foo (200; 4.003551ms) +Jan 14 03:55:52.348: INFO: (19) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:1080/proxy/: testtest (200; 4.550743ms) +Jan 14 03:55:52.349: INFO: (19) /api/v1/namespaces/proxy-485/services/http:proxy-service-w4hj8:portname2/proxy/: bar (200; 4.57141ms) +Jan 14 03:55:52.349: INFO: (19) /api/v1/namespaces/proxy-485/services/proxy-service-w4hj8:portname2/proxy/: bar (200; 4.921241ms) +Jan 14 03:55:52.350: INFO: (19) /api/v1/namespaces/proxy-485/services/http:proxy-service-w4hj8:portname1/proxy/: foo (200; 5.714878ms) +Jan 14 03:55:52.350: INFO: (19) /api/v1/namespaces/proxy-485/services/proxy-service-w4hj8:portname1/proxy/: foo (200; 5.845333ms) +Jan 14 03:55:52.350: INFO: (19) /api/v1/namespaces/proxy-485/services/https:proxy-service-w4hj8:tlsportname1/proxy/: tls baz (200; 5.952822ms) +Jan 14 03:55:52.350: INFO: (19) /api/v1/namespaces/proxy-485/services/https:proxy-service-w4hj8:tlsportname2/proxy/: tls qux (200; 5.810456ms) +STEP: deleting ReplicationController proxy-service-w4hj8 in namespace proxy-485, will wait for the garbage collector to delete the pods 01/14/23 03:55:52.35 +Jan 14 03:55:52.410: INFO: Deleting ReplicationController proxy-service-w4hj8 took: 7.017537ms +Jan 14 03:55:52.511: INFO: Terminating ReplicationController proxy-service-w4hj8 pods took: 100.586264ms +[AfterEach] version v1 + test/e2e/framework/node/init/init.go:32 +Jan 14 03:55:55.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] version v1 + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] version v1 + dump namespaces | framework.go:196 +[DeferCleanup (Each)] version v1 + tear down framework 
| framework.go:193 +STEP: Destroying namespace "proxy-485" for this suite. 01/14/23 03:55:55.317 +------------------------------ +• [SLOW TEST] [5.199 seconds] +[sig-network] Proxy +test/e2e/network/common/framework.go:23 + version v1 + test/e2e/network/proxy.go:74 + should proxy through a service and a pod [Conformance] + test/e2e/network/proxy.go:101 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] version v1 + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:55:50.123 + Jan 14 03:55:50.124: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename proxy 01/14/23 03:55:50.124 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:55:50.138 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:55:50.141 + [BeforeEach] version v1 + test/e2e/framework/metrics/init/init.go:31 + [It] should proxy through a service and a pod [Conformance] + test/e2e/network/proxy.go:101 + STEP: starting an echo server on multiple ports 01/14/23 03:55:50.151 + STEP: creating replication controller proxy-service-w4hj8 in namespace proxy-485 01/14/23 03:55:50.151 + I0114 03:55:50.159611 25 runners.go:193] Created replication controller with name: proxy-service-w4hj8, namespace: proxy-485, replica count: 1 + I0114 03:55:51.211020 25 runners.go:193] proxy-service-w4hj8 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + I0114 03:55:52.211801 25 runners.go:193] proxy-service-w4hj8 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + Jan 14 03:55:52.215: INFO: setup took 2.071554616s, starting test cases + STEP: running 16 cases, 20 attempts per case, 320 total attempts 01/14/23 03:55:52.215 + Jan 14 03:55:52.219: INFO: (0) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:162/proxy/: bar (200; 3.885102ms) + Jan 14 03:55:52.220: INFO: (0) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:162/proxy/: bar (200; 4.689077ms) + Jan 14 03:55:52.220: INFO: (0) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z/proxy/: test (200; 4.577596ms) + Jan 14 03:55:52.220: INFO: (0) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:160/proxy/: foo (200; 4.662505ms) + Jan 14 03:55:52.220: INFO: (0) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:160/proxy/: foo (200; 4.907148ms) + Jan 14 03:55:52.220: INFO: (0) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:1080/proxy/: testt... (200; 5.216373ms) + Jan 14 03:55:52.221: INFO: (0) /api/v1/namespaces/proxy-485/services/http:proxy-service-w4hj8:portname2/proxy/: bar (200; 5.758776ms) + Jan 14 03:55:52.222: INFO: (0) /api/v1/namespaces/proxy-485/services/http:proxy-service-w4hj8:portname1/proxy/: foo (200; 6.517386ms) + Jan 14 03:55:52.222: INFO: (0) /api/v1/namespaces/proxy-485/services/proxy-service-w4hj8:portname2/proxy/: bar (200; 6.549553ms) + Jan 14 03:55:52.222: INFO: (0) /api/v1/namespaces/proxy-485/services/proxy-service-w4hj8:portname1/proxy/: foo (200; 6.548494ms) + Jan 14 03:55:52.224: INFO: (0) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:443/proxy/: test (200; 3.800232ms) + Jan 14 03:55:52.229: INFO: (1) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:1080/proxy/: testt... 
(200; 4.382503ms) + Jan 14 03:55:52.230: INFO: (1) /api/v1/namespaces/proxy-485/services/proxy-service-w4hj8:portname2/proxy/: bar (200; 4.851194ms) + Jan 14 03:55:52.231: INFO: (1) /api/v1/namespaces/proxy-485/services/https:proxy-service-w4hj8:tlsportname2/proxy/: tls qux (200; 5.828344ms) + Jan 14 03:55:52.231: INFO: (1) /api/v1/namespaces/proxy-485/services/proxy-service-w4hj8:portname1/proxy/: foo (200; 5.809312ms) + Jan 14 03:55:52.231: INFO: (1) /api/v1/namespaces/proxy-485/services/https:proxy-service-w4hj8:tlsportname1/proxy/: tls baz (200; 5.860369ms) + Jan 14 03:55:52.231: INFO: (1) /api/v1/namespaces/proxy-485/services/http:proxy-service-w4hj8:portname2/proxy/: bar (200; 5.892479ms) + Jan 14 03:55:52.231: INFO: (1) /api/v1/namespaces/proxy-485/services/http:proxy-service-w4hj8:portname1/proxy/: foo (200; 5.789861ms) + Jan 14 03:55:52.234: INFO: (2) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z/proxy/: test (200; 2.856238ms) + Jan 14 03:55:52.235: INFO: (2) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:160/proxy/: foo (200; 3.696197ms) + Jan 14 03:55:52.235: INFO: (2) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:1080/proxy/: testt... (200; 3.95574ms) + Jan 14 03:55:52.235: INFO: (2) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:162/proxy/: bar (200; 4.238293ms) + Jan 14 03:55:52.235: INFO: (2) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:160/proxy/: foo (200; 4.329837ms) + Jan 14 03:55:52.235: INFO: (2) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:443/proxy/: test (200; 2.936048ms) + Jan 14 03:55:52.241: INFO: (3) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:462/proxy/: tls qux (200; 3.982021ms) + Jan 14 03:55:52.241: INFO: (3) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:1080/proxy/: t... (200; 3.975426ms) + Jan 14 03:55:52.241: INFO: (3) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:443/proxy/: testtest (200; 3.532302ms) + Jan 14 03:55:52.247: INFO: (4) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:460/proxy/: tls baz (200; 3.496221ms) + Jan 14 03:55:52.247: INFO: (4) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:462/proxy/: tls qux (200; 3.549094ms) + Jan 14 03:55:52.247: INFO: (4) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:162/proxy/: bar (200; 3.776933ms) + Jan 14 03:55:52.247: INFO: (4) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:1080/proxy/: t... (200; 3.871133ms) + Jan 14 03:55:52.247: INFO: (4) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:443/proxy/: testtesttest (200; 3.847389ms) + Jan 14 03:55:52.254: INFO: (5) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:460/proxy/: tls baz (200; 3.911244ms) + Jan 14 03:55:52.254: INFO: (5) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:1080/proxy/: t... 
(200; 3.940657ms) + Jan 14 03:55:52.254: INFO: (5) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:160/proxy/: foo (200; 3.97215ms) + Jan 14 03:55:52.254: INFO: (5) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:462/proxy/: tls qux (200; 4.068841ms) + Jan 14 03:55:52.254: INFO: (5) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:160/proxy/: foo (200; 4.18409ms) + Jan 14 03:55:52.255: INFO: (5) /api/v1/namespaces/proxy-485/services/https:proxy-service-w4hj8:tlsportname1/proxy/: tls baz (200; 5.05576ms) + Jan 14 03:55:52.256: INFO: (5) /api/v1/namespaces/proxy-485/services/http:proxy-service-w4hj8:portname2/proxy/: bar (200; 5.953281ms) + Jan 14 03:55:52.256: INFO: (5) /api/v1/namespaces/proxy-485/services/proxy-service-w4hj8:portname2/proxy/: bar (200; 6.163706ms) + Jan 14 03:55:52.256: INFO: (5) /api/v1/namespaces/proxy-485/services/https:proxy-service-w4hj8:tlsportname2/proxy/: tls qux (200; 6.089808ms) + Jan 14 03:55:52.256: INFO: (5) /api/v1/namespaces/proxy-485/services/http:proxy-service-w4hj8:portname1/proxy/: foo (200; 6.112951ms) + Jan 14 03:55:52.256: INFO: (5) /api/v1/namespaces/proxy-485/services/proxy-service-w4hj8:portname1/proxy/: foo (200; 6.228639ms) + Jan 14 03:55:52.260: INFO: (6) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:160/proxy/: foo (200; 3.551064ms) + Jan 14 03:55:52.260: INFO: (6) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:162/proxy/: bar (200; 3.569314ms) + Jan 14 03:55:52.260: INFO: (6) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:462/proxy/: tls qux (200; 3.71359ms) + Jan 14 03:55:52.260: INFO: (6) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:160/proxy/: foo (200; 3.790086ms) + Jan 14 03:55:52.260: INFO: (6) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z/proxy/: test (200; 3.758602ms) + Jan 14 03:55:52.260: INFO: (6) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:162/proxy/: bar (200; 3.924903ms) + Jan 14 03:55:52.260: INFO: (6) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:460/proxy/: tls baz (200; 3.899471ms) + Jan 14 03:55:52.260: INFO: (6) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:1080/proxy/: t... (200; 3.916816ms) + Jan 14 03:55:52.260: INFO: (6) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:1080/proxy/: testt... (200; 2.715468ms) + Jan 14 03:55:52.266: INFO: (7) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z/proxy/: test (200; 3.611047ms) + Jan 14 03:55:52.266: INFO: (7) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:462/proxy/: tls qux (200; 3.63667ms) + Jan 14 03:55:52.266: INFO: (7) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:1080/proxy/: testtestt... 
(200; 3.857945ms) + Jan 14 03:55:52.272: INFO: (8) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:462/proxy/: tls qux (200; 3.839911ms) + Jan 14 03:55:52.272: INFO: (8) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z/proxy/: test (200; 3.80636ms) + Jan 14 03:55:52.272: INFO: (8) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:160/proxy/: foo (200; 3.870512ms) + Jan 14 03:55:52.272: INFO: (8) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:460/proxy/: tls baz (200; 4.105512ms) + Jan 14 03:55:52.272: INFO: (8) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:162/proxy/: bar (200; 4.092094ms) + Jan 14 03:55:52.273: INFO: (8) /api/v1/namespaces/proxy-485/services/http:proxy-service-w4hj8:portname2/proxy/: bar (200; 4.810067ms) + Jan 14 03:55:52.274: INFO: (8) /api/v1/namespaces/proxy-485/services/proxy-service-w4hj8:portname2/proxy/: bar (200; 5.816914ms) + Jan 14 03:55:52.274: INFO: (8) /api/v1/namespaces/proxy-485/services/http:proxy-service-w4hj8:portname1/proxy/: foo (200; 5.896292ms) + Jan 14 03:55:52.274: INFO: (8) /api/v1/namespaces/proxy-485/services/proxy-service-w4hj8:portname1/proxy/: foo (200; 5.840136ms) + Jan 14 03:55:52.274: INFO: (8) /api/v1/namespaces/proxy-485/services/https:proxy-service-w4hj8:tlsportname1/proxy/: tls baz (200; 5.949686ms) + Jan 14 03:55:52.274: INFO: (8) /api/v1/namespaces/proxy-485/services/https:proxy-service-w4hj8:tlsportname2/proxy/: tls qux (200; 5.995433ms) + Jan 14 03:55:52.277: INFO: (9) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z/proxy/: test (200; 2.911619ms) + Jan 14 03:55:52.278: INFO: (9) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:1080/proxy/: testt... (200; 4.193437ms) + Jan 14 03:55:52.278: INFO: (9) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:462/proxy/: tls qux (200; 4.23614ms) + Jan 14 03:55:52.278: INFO: (9) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:160/proxy/: foo (200; 4.266083ms) + Jan 14 03:55:52.279: INFO: (9) /api/v1/namespaces/proxy-485/services/http:proxy-service-w4hj8:portname2/proxy/: bar (200; 5.167197ms) + Jan 14 03:55:52.280: INFO: (9) /api/v1/namespaces/proxy-485/services/https:proxy-service-w4hj8:tlsportname2/proxy/: tls qux (200; 6.11762ms) + Jan 14 03:55:52.280: INFO: (9) /api/v1/namespaces/proxy-485/services/proxy-service-w4hj8:portname2/proxy/: bar (200; 6.040679ms) + Jan 14 03:55:52.280: INFO: (9) /api/v1/namespaces/proxy-485/services/proxy-service-w4hj8:portname1/proxy/: foo (200; 6.090302ms) + Jan 14 03:55:52.280: INFO: (9) /api/v1/namespaces/proxy-485/services/http:proxy-service-w4hj8:portname1/proxy/: foo (200; 6.081358ms) + Jan 14 03:55:52.280: INFO: (9) /api/v1/namespaces/proxy-485/services/https:proxy-service-w4hj8:tlsportname1/proxy/: tls baz (200; 6.113475ms) + Jan 14 03:55:52.283: INFO: (10) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:162/proxy/: bar (200; 2.97589ms) + Jan 14 03:55:52.285: INFO: (10) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:160/proxy/: foo (200; 3.881958ms) + Jan 14 03:55:52.285: INFO: (10) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:1080/proxy/: t... 
(200; 3.924668ms) + Jan 14 03:55:52.285: INFO: (10) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:162/proxy/: bar (200; 3.913326ms) + Jan 14 03:55:52.285: INFO: (10) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:460/proxy/: tls baz (200; 3.971684ms) + Jan 14 03:55:52.285: INFO: (10) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:462/proxy/: tls qux (200; 4.136094ms) + Jan 14 03:55:52.285: INFO: (10) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:160/proxy/: foo (200; 4.14977ms) + Jan 14 03:55:52.285: INFO: (10) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:1080/proxy/: testtest (200; 4.110195ms) + Jan 14 03:55:52.285: INFO: (10) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:443/proxy/: test (200; 4.195564ms) + Jan 14 03:55:52.292: INFO: (11) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:460/proxy/: tls baz (200; 4.310562ms) + Jan 14 03:55:52.292: INFO: (11) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:462/proxy/: tls qux (200; 4.432361ms) + Jan 14 03:55:52.292: INFO: (11) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:1080/proxy/: testt... (200; 4.55516ms) + Jan 14 03:55:52.292: INFO: (11) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:162/proxy/: bar (200; 4.515644ms) + Jan 14 03:55:52.292: INFO: (11) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:160/proxy/: foo (200; 4.725134ms) + Jan 14 03:55:52.294: INFO: (11) /api/v1/namespaces/proxy-485/services/proxy-service-w4hj8:portname1/proxy/: foo (200; 5.877905ms) + Jan 14 03:55:52.294: INFO: (11) /api/v1/namespaces/proxy-485/services/https:proxy-service-w4hj8:tlsportname2/proxy/: tls qux (200; 6.644933ms) + Jan 14 03:55:52.294: INFO: (11) /api/v1/namespaces/proxy-485/services/http:proxy-service-w4hj8:portname1/proxy/: foo (200; 6.721899ms) + Jan 14 03:55:52.295: INFO: (11) /api/v1/namespaces/proxy-485/services/https:proxy-service-w4hj8:tlsportname1/proxy/: tls baz (200; 6.784484ms) + Jan 14 03:55:52.295: INFO: (11) /api/v1/namespaces/proxy-485/services/proxy-service-w4hj8:portname2/proxy/: bar (200; 7.00662ms) + Jan 14 03:55:52.295: INFO: (11) /api/v1/namespaces/proxy-485/services/http:proxy-service-w4hj8:portname2/proxy/: bar (200; 7.000094ms) + Jan 14 03:55:52.302: INFO: (12) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:160/proxy/: foo (200; 6.777238ms) + Jan 14 03:55:52.302: INFO: (12) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:443/proxy/: test (200; 7.295304ms) + Jan 14 03:55:52.302: INFO: (12) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:1080/proxy/: t... (200; 7.277339ms) + Jan 14 03:55:52.302: INFO: (12) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:1080/proxy/: testtestt... 
(200; 4.667887ms) + Jan 14 03:55:52.309: INFO: (13) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z/proxy/: test (200; 4.658528ms) + Jan 14 03:55:52.310: INFO: (13) /api/v1/namespaces/proxy-485/services/proxy-service-w4hj8:portname1/proxy/: foo (200; 5.568545ms) + Jan 14 03:55:52.311: INFO: (13) /api/v1/namespaces/proxy-485/services/http:proxy-service-w4hj8:portname1/proxy/: foo (200; 6.543754ms) + Jan 14 03:55:52.311: INFO: (13) /api/v1/namespaces/proxy-485/services/https:proxy-service-w4hj8:tlsportname1/proxy/: tls baz (200; 6.412376ms) + Jan 14 03:55:52.311: INFO: (13) /api/v1/namespaces/proxy-485/services/http:proxy-service-w4hj8:portname2/proxy/: bar (200; 6.480733ms) + Jan 14 03:55:52.311: INFO: (13) /api/v1/namespaces/proxy-485/services/proxy-service-w4hj8:portname2/proxy/: bar (200; 6.401443ms) + Jan 14 03:55:52.311: INFO: (13) /api/v1/namespaces/proxy-485/services/https:proxy-service-w4hj8:tlsportname2/proxy/: tls qux (200; 6.452559ms) + Jan 14 03:55:52.314: INFO: (14) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:462/proxy/: tls qux (200; 3.109488ms) + Jan 14 03:55:52.315: INFO: (14) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:162/proxy/: bar (200; 4.490118ms) + Jan 14 03:55:52.315: INFO: (14) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z/proxy/: test (200; 4.406781ms) + Jan 14 03:55:52.315: INFO: (14) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:1080/proxy/: testt... (200; 4.473213ms) + Jan 14 03:55:52.315: INFO: (14) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:443/proxy/: t... (200; 4.27771ms) + Jan 14 03:55:52.322: INFO: (15) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:160/proxy/: foo (200; 4.240023ms) + Jan 14 03:55:52.322: INFO: (15) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:162/proxy/: bar (200; 4.290771ms) + Jan 14 03:55:52.322: INFO: (15) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:443/proxy/: test (200; 4.3478ms) + Jan 14 03:55:52.322: INFO: (15) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:1080/proxy/: testtestt... (200; 5.271977ms) + Jan 14 03:55:52.330: INFO: (16) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:462/proxy/: tls qux (200; 5.349774ms) + Jan 14 03:55:52.330: INFO: (16) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:162/proxy/: bar (200; 5.404002ms) + Jan 14 03:55:52.330: INFO: (16) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z/proxy/: test (200; 5.380278ms) + Jan 14 03:55:52.330: INFO: (16) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:443/proxy/: t... (200; 4.106206ms) + Jan 14 03:55:52.336: INFO: (17) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:462/proxy/: tls qux (200; 4.029801ms) + Jan 14 03:55:52.336: INFO: (17) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z/proxy/: test (200; 4.033625ms) + Jan 14 03:55:52.336: INFO: (17) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:1080/proxy/: testt... 
(200; 3.600456ms) + Jan 14 03:55:52.342: INFO: (18) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:160/proxy/: foo (200; 3.644246ms) + Jan 14 03:55:52.342: INFO: (18) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:462/proxy/: tls qux (200; 4.099932ms) + Jan 14 03:55:52.342: INFO: (18) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:1080/proxy/: testtest (200; 4.084495ms) + Jan 14 03:55:52.343: INFO: (18) /api/v1/namespaces/proxy-485/pods/https:proxy-service-w4hj8-psv8z:460/proxy/: tls baz (200; 4.280102ms) + Jan 14 03:55:52.343: INFO: (18) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:162/proxy/: bar (200; 4.461077ms) + Jan 14 03:55:52.343: INFO: (18) /api/v1/namespaces/proxy-485/services/proxy-service-w4hj8:portname2/proxy/: bar (200; 4.456545ms) + Jan 14 03:55:52.343: INFO: (18) /api/v1/namespaces/proxy-485/services/http:proxy-service-w4hj8:portname2/proxy/: bar (200; 4.962124ms) + Jan 14 03:55:52.344: INFO: (18) /api/v1/namespaces/proxy-485/services/http:proxy-service-w4hj8:portname1/proxy/: foo (200; 5.678559ms) + Jan 14 03:55:52.344: INFO: (18) /api/v1/namespaces/proxy-485/services/https:proxy-service-w4hj8:tlsportname2/proxy/: tls qux (200; 5.77052ms) + Jan 14 03:55:52.344: INFO: (18) /api/v1/namespaces/proxy-485/services/proxy-service-w4hj8:portname1/proxy/: foo (200; 5.847648ms) + Jan 14 03:55:52.344: INFO: (18) /api/v1/namespaces/proxy-485/services/https:proxy-service-w4hj8:tlsportname1/proxy/: tls baz (200; 5.831597ms) + Jan 14 03:55:52.348: INFO: (19) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:162/proxy/: bar (200; 3.551521ms) + Jan 14 03:55:52.348: INFO: (19) /api/v1/namespaces/proxy-485/pods/http:proxy-service-w4hj8-psv8z:1080/proxy/: t... (200; 3.947535ms) + Jan 14 03:55:52.348: INFO: (19) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:160/proxy/: foo (200; 4.003551ms) + Jan 14 03:55:52.348: INFO: (19) /api/v1/namespaces/proxy-485/pods/proxy-service-w4hj8-psv8z:1080/proxy/: testtest (200; 4.550743ms) + Jan 14 03:55:52.349: INFO: (19) /api/v1/namespaces/proxy-485/services/http:proxy-service-w4hj8:portname2/proxy/: bar (200; 4.57141ms) + Jan 14 03:55:52.349: INFO: (19) /api/v1/namespaces/proxy-485/services/proxy-service-w4hj8:portname2/proxy/: bar (200; 4.921241ms) + Jan 14 03:55:52.350: INFO: (19) /api/v1/namespaces/proxy-485/services/http:proxy-service-w4hj8:portname1/proxy/: foo (200; 5.714878ms) + Jan 14 03:55:52.350: INFO: (19) /api/v1/namespaces/proxy-485/services/proxy-service-w4hj8:portname1/proxy/: foo (200; 5.845333ms) + Jan 14 03:55:52.350: INFO: (19) /api/v1/namespaces/proxy-485/services/https:proxy-service-w4hj8:tlsportname1/proxy/: tls baz (200; 5.952822ms) + Jan 14 03:55:52.350: INFO: (19) /api/v1/namespaces/proxy-485/services/https:proxy-service-w4hj8:tlsportname2/proxy/: tls qux (200; 5.810456ms) + STEP: deleting ReplicationController proxy-service-w4hj8 in namespace proxy-485, will wait for the garbage collector to delete the pods 01/14/23 03:55:52.35 + Jan 14 03:55:52.410: INFO: Deleting ReplicationController proxy-service-w4hj8 took: 7.017537ms + Jan 14 03:55:52.511: INFO: Terminating ReplicationController proxy-service-w4hj8 pods took: 100.586264ms + [AfterEach] version v1 + test/e2e/framework/node/init/init.go:32 + Jan 14 03:55:55.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] version v1 + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] version v1 + dump namespaces | framework.go:196 + [DeferCleanup (Each)] 
version v1 + tear down framework | framework.go:193 + STEP: Destroying namespace "proxy-485" for this suite. 01/14/23 03:55:55.317 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + creating/deleting custom resource definition objects works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:58 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:55:55.324 +Jan 14 03:55:55.324: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename custom-resource-definition 01/14/23 03:55:55.326 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:55:55.34 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:55:55.342 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[It] creating/deleting custom resource definition objects works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:58 +Jan 14 03:55:55.345: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Jan 14 03:55:56.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "custom-resource-definition-6344" for this suite. 
01/14/23 03:55:56.374 +------------------------------ +• [1.056 seconds] +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + Simple CustomResourceDefinition + test/e2e/apimachinery/custom_resource_definition.go:50 + creating/deleting custom resource definition objects works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:58 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:55:55.324 + Jan 14 03:55:55.324: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename custom-resource-definition 01/14/23 03:55:55.326 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:55:55.34 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:55:55.342 + [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [It] creating/deleting custom resource definition objects works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:58 + Jan 14 03:55:55.345: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Jan 14 03:55:56.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "custom-resource-definition-6344" for this suite. 
01/14/23 03:55:56.374 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's cpu limit [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:193 +[BeforeEach] [sig-storage] Downward API volume + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:55:56.381 +Jan 14 03:55:56.381: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename downward-api 01/14/23 03:55:56.382 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:55:56.396 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:55:56.398 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 +[It] should provide container's cpu limit [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:193 +STEP: Creating a pod to test downward API volume plugin 01/14/23 03:55:56.401 +Jan 14 03:55:56.410: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c17f3519-c77a-42ae-a089-a97bfb45e69d" in namespace "downward-api-2332" to be "Succeeded or Failed" +Jan 14 03:55:56.413: INFO: Pod "downwardapi-volume-c17f3519-c77a-42ae-a089-a97bfb45e69d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.954176ms +Jan 14 03:55:58.417: INFO: Pod "downwardapi-volume-c17f3519-c77a-42ae-a089-a97bfb45e69d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007280049s +Jan 14 03:56:00.417: INFO: Pod "downwardapi-volume-c17f3519-c77a-42ae-a089-a97bfb45e69d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006900991s +STEP: Saw pod success 01/14/23 03:56:00.417 +Jan 14 03:56:00.417: INFO: Pod "downwardapi-volume-c17f3519-c77a-42ae-a089-a97bfb45e69d" satisfied condition "Succeeded or Failed" +Jan 14 03:56:00.420: INFO: Trying to get logs from node 10.0.1.106 pod downwardapi-volume-c17f3519-c77a-42ae-a089-a97bfb45e69d container client-container: +STEP: delete the pod 01/14/23 03:56:00.425 +Jan 14 03:56:00.439: INFO: Waiting for pod downwardapi-volume-c17f3519-c77a-42ae-a089-a97bfb45e69d to disappear +Jan 14 03:56:00.442: INFO: Pod downwardapi-volume-c17f3519-c77a-42ae-a089-a97bfb45e69d no longer exists +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 +Jan 14 03:56:00.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 +STEP: Destroying namespace "downward-api-2332" for this suite. 
01/14/23 03:56:00.447 +------------------------------ +• [4.072 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should provide container's cpu limit [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:193 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Downward API volume + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:55:56.381 + Jan 14 03:55:56.381: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename downward-api 01/14/23 03:55:56.382 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:55:56.396 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:55:56.398 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 + [It] should provide container's cpu limit [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:193 + STEP: Creating a pod to test downward API volume plugin 01/14/23 03:55:56.401 + Jan 14 03:55:56.410: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c17f3519-c77a-42ae-a089-a97bfb45e69d" in namespace "downward-api-2332" to be "Succeeded or Failed" + Jan 14 03:55:56.413: INFO: Pod "downwardapi-volume-c17f3519-c77a-42ae-a089-a97bfb45e69d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.954176ms + Jan 14 03:55:58.417: INFO: Pod "downwardapi-volume-c17f3519-c77a-42ae-a089-a97bfb45e69d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007280049s + Jan 14 03:56:00.417: INFO: Pod "downwardapi-volume-c17f3519-c77a-42ae-a089-a97bfb45e69d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006900991s + STEP: Saw pod success 01/14/23 03:56:00.417 + Jan 14 03:56:00.417: INFO: Pod "downwardapi-volume-c17f3519-c77a-42ae-a089-a97bfb45e69d" satisfied condition "Succeeded or Failed" + Jan 14 03:56:00.420: INFO: Trying to get logs from node 10.0.1.106 pod downwardapi-volume-c17f3519-c77a-42ae-a089-a97bfb45e69d container client-container: + STEP: delete the pod 01/14/23 03:56:00.425 + Jan 14 03:56:00.439: INFO: Waiting for pod downwardapi-volume-c17f3519-c77a-42ae-a089-a97bfb45e69d to disappear + Jan 14 03:56:00.442: INFO: Pod downwardapi-volume-c17f3519-c77a-42ae-a089-a97bfb45e69d no longer exists + [AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 + Jan 14 03:56:00.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 + STEP: Destroying namespace "downward-api-2332" for this suite. 
01/14/23 03:56:00.447 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:232 +[BeforeEach] [sig-node] Container Runtime + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:56:00.455 +Jan 14 03:56:00.455: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename container-runtime 01/14/23 03:56:00.455 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:56:00.467 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:56:00.469 +[BeforeEach] [sig-node] Container Runtime + test/e2e/framework/metrics/init/init.go:31 +[It] should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:232 +STEP: create the container 01/14/23 03:56:00.473 +STEP: wait for the container to reach Succeeded 01/14/23 03:56:00.481 +STEP: get the container status 01/14/23 03:56:03.497 +STEP: the container should be terminated 01/14/23 03:56:03.503 +STEP: the termination message should be set 01/14/23 03:56:03.503 +Jan 14 03:56:03.503: INFO: Expected: &{} to match Container's Termination Message: -- +STEP: delete the container 01/14/23 03:56:03.503 +[AfterEach] [sig-node] Container Runtime + test/e2e/framework/node/init/init.go:32 +Jan 14 03:56:03.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Container Runtime + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Container Runtime + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Container Runtime + tear down framework | framework.go:193 +STEP: Destroying namespace "container-runtime-940" for this suite. 
01/14/23 03:56:03.524 +------------------------------ +• [3.075 seconds] +[sig-node] Container Runtime +test/e2e/common/node/framework.go:23 + blackbox test + test/e2e/common/node/runtime.go:44 + on terminated container + test/e2e/common/node/runtime.go:137 + should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:232 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Container Runtime + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:56:00.455 + Jan 14 03:56:00.455: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename container-runtime 01/14/23 03:56:00.455 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:56:00.467 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:56:00.469 + [BeforeEach] [sig-node] Container Runtime + test/e2e/framework/metrics/init/init.go:31 + [It] should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:232 + STEP: create the container 01/14/23 03:56:00.473 + STEP: wait for the container to reach Succeeded 01/14/23 03:56:00.481 + STEP: get the container status 01/14/23 03:56:03.497 + STEP: the container should be terminated 01/14/23 03:56:03.503 + STEP: the termination message should be set 01/14/23 03:56:03.503 + Jan 14 03:56:03.503: INFO: Expected: &{} to match Container's Termination Message: -- + STEP: delete the container 01/14/23 03:56:03.503 + [AfterEach] [sig-node] Container Runtime + test/e2e/framework/node/init/init.go:32 + Jan 14 03:56:03.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Container Runtime + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Container Runtime + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Container Runtime + tear down framework | framework.go:193 + STEP: Destroying namespace "container-runtime-940" for this suite. 
01/14/23 03:56:03.524 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:167 +[BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:56:03.53 +Jan 14 03:56:03.530: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename emptydir 01/14/23 03:56:03.531 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:56:03.549 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:56:03.552 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 +[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:167 +STEP: Creating a pod to test emptydir 0644 on node default medium 01/14/23 03:56:03.556 +Jan 14 03:56:03.567: INFO: Waiting up to 5m0s for pod "pod-fb737487-4011-42e2-8ce2-56887db58d50" in namespace "emptydir-6512" to be "Succeeded or Failed" +Jan 14 03:56:03.570: INFO: Pod "pod-fb737487-4011-42e2-8ce2-56887db58d50": Phase="Pending", Reason="", readiness=false. Elapsed: 3.174367ms +Jan 14 03:56:05.575: INFO: Pod "pod-fb737487-4011-42e2-8ce2-56887db58d50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008534579s +Jan 14 03:56:07.574: INFO: Pod "pod-fb737487-4011-42e2-8ce2-56887db58d50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00770591s +STEP: Saw pod success 01/14/23 03:56:07.574 +Jan 14 03:56:07.575: INFO: Pod "pod-fb737487-4011-42e2-8ce2-56887db58d50" satisfied condition "Succeeded or Failed" +Jan 14 03:56:07.578: INFO: Trying to get logs from node 10.0.1.106 pod pod-fb737487-4011-42e2-8ce2-56887db58d50 container test-container: +STEP: delete the pod 01/14/23 03:56:07.583 +Jan 14 03:56:07.596: INFO: Waiting for pod pod-fb737487-4011-42e2-8ce2-56887db58d50 to disappear +Jan 14 03:56:07.599: INFO: Pod pod-fb737487-4011-42e2-8ce2-56887db58d50 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 +Jan 14 03:56:07.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 +STEP: Destroying namespace "emptydir-6512" for this suite. 
01/14/23 03:56:07.603 +------------------------------ +• [4.078 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:167 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:56:03.53 + Jan 14 03:56:03.530: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename emptydir 01/14/23 03:56:03.531 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:56:03.549 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:56:03.552 + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 + [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:167 + STEP: Creating a pod to test emptydir 0644 on node default medium 01/14/23 03:56:03.556 + Jan 14 03:56:03.567: INFO: Waiting up to 5m0s for pod "pod-fb737487-4011-42e2-8ce2-56887db58d50" in namespace "emptydir-6512" to be "Succeeded or Failed" + Jan 14 03:56:03.570: INFO: Pod "pod-fb737487-4011-42e2-8ce2-56887db58d50": Phase="Pending", Reason="", readiness=false. Elapsed: 3.174367ms + Jan 14 03:56:05.575: INFO: Pod "pod-fb737487-4011-42e2-8ce2-56887db58d50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008534579s + Jan 14 03:56:07.574: INFO: Pod "pod-fb737487-4011-42e2-8ce2-56887db58d50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00770591s + STEP: Saw pod success 01/14/23 03:56:07.574 + Jan 14 03:56:07.575: INFO: Pod "pod-fb737487-4011-42e2-8ce2-56887db58d50" satisfied condition "Succeeded or Failed" + Jan 14 03:56:07.578: INFO: Trying to get logs from node 10.0.1.106 pod pod-fb737487-4011-42e2-8ce2-56887db58d50 container test-container: + STEP: delete the pod 01/14/23 03:56:07.583 + Jan 14 03:56:07.596: INFO: Waiting for pod pod-fb737487-4011-42e2-8ce2-56887db58d50 to disappear + Jan 14 03:56:07.599: INFO: Pod pod-fb737487-4011-42e2-8ce2-56887db58d50 no longer exists + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 + Jan 14 03:56:07.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 + STEP: Destroying namespace "emptydir-6512" for this suite. 
01/14/23 03:56:07.603 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should perform canary updates and phased rolling updates of template modifications [Conformance] + test/e2e/apps/statefulset.go:317 +[BeforeEach] [sig-apps] StatefulSet + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:56:07.608 +Jan 14 03:56:07.608: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename statefulset 01/14/23 03:56:07.609 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:56:07.622 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:56:07.624 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:98 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:113 +STEP: Creating service test in namespace statefulset-223 01/14/23 03:56:07.626 +[It] should perform canary updates and phased rolling updates of template modifications [Conformance] + test/e2e/apps/statefulset.go:317 +STEP: Creating a new StatefulSet 01/14/23 03:56:07.63 +Jan 14 03:56:07.639: INFO: Found 0 stateful pods, waiting for 3 +Jan 14 03:56:17.645: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Jan 14 03:56:17.645: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Jan 14 03:56:17.645: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Updating stateful set template: update image from registry.k8s.io/e2e-test-images/httpd:2.4.38-4 to registry.k8s.io/e2e-test-images/httpd:2.4.39-4 01/14/23 03:56:17.655 +Jan 14 03:56:17.674: INFO: Updating stateful set ss2 +STEP: Creating a new revision 01/14/23 03:56:17.674 +STEP: Not applying an update when the partition is greater than the number of replicas 01/14/23 03:56:27.689 +STEP: Performing a canary update 01/14/23 03:56:27.689 +Jan 14 03:56:27.708: INFO: Updating stateful set ss2 +Jan 14 03:56:27.855: INFO: Waiting for Pod statefulset-223/ss2-2 to have revision ss2-5459d8585b update revision ss2-7b6c9599d5 +STEP: Restoring Pods to the correct revision when they are deleted 01/14/23 03:56:37.865 +Jan 14 03:56:37.900: INFO: Found 2 stateful pods, waiting for 3 +Jan 14 03:56:47.905: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Jan 14 03:56:47.905: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Jan 14 03:56:47.905: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Performing a phased rolling update 01/14/23 03:56:47.911 +Jan 14 03:56:47.928: INFO: Updating stateful set ss2 +Jan 14 03:56:47.933: INFO: Waiting for Pod statefulset-223/ss2-1 to have revision ss2-5459d8585b update revision ss2-7b6c9599d5 +Jan 14 03:56:57.961: INFO: Updating stateful set ss2 +Jan 14 03:56:58.072: INFO: Waiting for StatefulSet statefulset-223/ss2 to complete update +Jan 14 03:56:58.072: INFO: Waiting for Pod statefulset-223/ss2-0 to have revision ss2-5459d8585b update revision ss2-7b6c9599d5 +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:124 +Jan 14 03:57:08.081: INFO: Deleting all statefulset in ns 
statefulset-223 +Jan 14 03:57:08.083: INFO: Scaling statefulset ss2 to 0 +Jan 14 03:57:18.102: INFO: Waiting for statefulset status.replicas updated to 0 +Jan 14 03:57:18.105: INFO: Deleting statefulset ss2 +[AfterEach] [sig-apps] StatefulSet + test/e2e/framework/node/init/init.go:32 +Jan 14 03:57:18.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] StatefulSet + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] StatefulSet + tear down framework | framework.go:193 +STEP: Destroying namespace "statefulset-223" for this suite. 01/14/23 03:57:18.122 +------------------------------ +• [SLOW TEST] [70.521 seconds] +[sig-apps] StatefulSet +test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:103 + should perform canary updates and phased rolling updates of template modifications [Conformance] + test/e2e/apps/statefulset.go:317 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] StatefulSet + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:56:07.608 + Jan 14 03:56:07.608: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename statefulset 01/14/23 03:56:07.609 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:56:07.622 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:56:07.624 + [BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:98 + [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:113 + STEP: Creating service test in namespace statefulset-223 01/14/23 03:56:07.626 + [It] should perform canary updates and phased rolling updates of template modifications [Conformance] + test/e2e/apps/statefulset.go:317 + STEP: Creating a new StatefulSet 01/14/23 03:56:07.63 + Jan 14 03:56:07.639: INFO: Found 0 stateful pods, waiting for 3 + Jan 14 03:56:17.645: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true + Jan 14 03:56:17.645: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true + Jan 14 03:56:17.645: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true + STEP: Updating stateful set template: update image from registry.k8s.io/e2e-test-images/httpd:2.4.38-4 to registry.k8s.io/e2e-test-images/httpd:2.4.39-4 01/14/23 03:56:17.655 + Jan 14 03:56:17.674: INFO: Updating stateful set ss2 + STEP: Creating a new revision 01/14/23 03:56:17.674 + STEP: Not applying an update when the partition is greater than the number of replicas 01/14/23 03:56:27.689 + STEP: Performing a canary update 01/14/23 03:56:27.689 + Jan 14 03:56:27.708: INFO: Updating stateful set ss2 + Jan 14 03:56:27.855: INFO: Waiting for Pod statefulset-223/ss2-2 to have revision ss2-5459d8585b update revision ss2-7b6c9599d5 + STEP: Restoring Pods to the correct revision when they are deleted 01/14/23 03:56:37.865 + Jan 14 03:56:37.900: INFO: Found 2 stateful pods, waiting for 3 + Jan 14 03:56:47.905: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true + Jan 14 03:56:47.905: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true + Jan 14 
03:56:47.905: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true + STEP: Performing a phased rolling update 01/14/23 03:56:47.911 + Jan 14 03:56:47.928: INFO: Updating stateful set ss2 + Jan 14 03:56:47.933: INFO: Waiting for Pod statefulset-223/ss2-1 to have revision ss2-5459d8585b update revision ss2-7b6c9599d5 + Jan 14 03:56:57.961: INFO: Updating stateful set ss2 + Jan 14 03:56:58.072: INFO: Waiting for StatefulSet statefulset-223/ss2 to complete update + Jan 14 03:56:58.072: INFO: Waiting for Pod statefulset-223/ss2-0 to have revision ss2-5459d8585b update revision ss2-7b6c9599d5 + [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:124 + Jan 14 03:57:08.081: INFO: Deleting all statefulset in ns statefulset-223 + Jan 14 03:57:08.083: INFO: Scaling statefulset ss2 to 0 + Jan 14 03:57:18.102: INFO: Waiting for statefulset status.replicas updated to 0 + Jan 14 03:57:18.105: INFO: Deleting statefulset ss2 + [AfterEach] [sig-apps] StatefulSet + test/e2e/framework/node/init/init.go:32 + Jan 14 03:57:18.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] StatefulSet + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] StatefulSet + tear down framework | framework.go:193 + STEP: Destroying namespace "statefulset-223" for this suite. 01/14/23 03:57:18.122 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate configmap [Conformance] + test/e2e/apimachinery/webhook.go:252 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:57:18.131 +Jan 14 03:57:18.131: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename webhook 01/14/23 03:57:18.132 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:57:18.144 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:57:18.149 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 01/14/23 03:57:18.161 +STEP: Create role binding to let webhook read extension-apiserver-authentication 01/14/23 03:57:18.562 +STEP: Deploying the webhook pod 01/14/23 03:57:18.57 +STEP: Wait for the deployment to be ready 01/14/23 03:57:18.586 +Jan 14 03:57:18.594: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 01/14/23 03:57:20.605 +STEP: Verifying the service has paired with the endpoint 01/14/23 03:57:20.614 +Jan 14 03:57:21.614: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate configmap [Conformance] + test/e2e/apimachinery/webhook.go:252 +STEP: Registering the mutating configmap webhook via the AdmissionRegistration API 01/14/23 03:57:21.618 +STEP: create a configmap that should be updated by the webhook 01/14/23 03:57:21.631 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 
+Jan 14 03:57:21.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "webhook-8032" for this suite. 01/14/23 03:57:21.682 +STEP: Destroying namespace "webhook-8032-markers" for this suite. 01/14/23 03:57:21.688 +------------------------------ +• [3.563 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should mutate configmap [Conformance] + test/e2e/apimachinery/webhook.go:252 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:57:18.131 + Jan 14 03:57:18.131: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename webhook 01/14/23 03:57:18.132 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:57:18.144 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:57:18.149 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 01/14/23 03:57:18.161 + STEP: Create role binding to let webhook read extension-apiserver-authentication 01/14/23 03:57:18.562 + STEP: Deploying the webhook pod 01/14/23 03:57:18.57 + STEP: Wait for the deployment to be ready 01/14/23 03:57:18.586 + Jan 14 03:57:18.594: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 01/14/23 03:57:20.605 + STEP: Verifying the service has paired with the endpoint 01/14/23 03:57:20.614 + Jan 14 03:57:21.614: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should mutate configmap [Conformance] + test/e2e/apimachinery/webhook.go:252 + STEP: Registering the mutating configmap webhook via the AdmissionRegistration API 01/14/23 03:57:21.618 + STEP: create a configmap that should be updated by the webhook 01/14/23 03:57:21.631 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Jan 14 03:57:21.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "webhook-8032" for this suite. 01/14/23 03:57:21.682 + STEP: Destroying namespace "webhook-8032-markers" for this suite. 
01/14/23 03:57:21.688 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Containers + should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:59 +[BeforeEach] [sig-node] Containers + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:57:21.695 +Jan 14 03:57:21.695: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename containers 01/14/23 03:57:21.695 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:57:21.709 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:57:21.711 +[BeforeEach] [sig-node] Containers + test/e2e/framework/metrics/init/init.go:31 +[It] should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:59 +STEP: Creating a pod to test override arguments 01/14/23 03:57:21.713 +Jan 14 03:57:21.722: INFO: Waiting up to 5m0s for pod "client-containers-17be43ac-12c5-42f6-aeb0-8be9a8975d99" in namespace "containers-3894" to be "Succeeded or Failed" +Jan 14 03:57:21.725: INFO: Pod "client-containers-17be43ac-12c5-42f6-aeb0-8be9a8975d99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.958296ms +Jan 14 03:57:23.730: INFO: Pod "client-containers-17be43ac-12c5-42f6-aeb0-8be9a8975d99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00818885s +Jan 14 03:57:25.730: INFO: Pod "client-containers-17be43ac-12c5-42f6-aeb0-8be9a8975d99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007341894s +STEP: Saw pod success 01/14/23 03:57:25.73 +Jan 14 03:57:25.730: INFO: Pod "client-containers-17be43ac-12c5-42f6-aeb0-8be9a8975d99" satisfied condition "Succeeded or Failed" +Jan 14 03:57:25.733: INFO: Trying to get logs from node 10.0.1.106 pod client-containers-17be43ac-12c5-42f6-aeb0-8be9a8975d99 container agnhost-container: +STEP: delete the pod 01/14/23 03:57:25.738 +Jan 14 03:57:25.751: INFO: Waiting for pod client-containers-17be43ac-12c5-42f6-aeb0-8be9a8975d99 to disappear +Jan 14 03:57:25.754: INFO: Pod client-containers-17be43ac-12c5-42f6-aeb0-8be9a8975d99 no longer exists +[AfterEach] [sig-node] Containers + test/e2e/framework/node/init/init.go:32 +Jan 14 03:57:25.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Containers + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Containers + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Containers + tear down framework | framework.go:193 +STEP: Destroying namespace "containers-3894" for this suite. 
01/14/23 03:57:25.758 +------------------------------ +• [4.069 seconds] +[sig-node] Containers +test/e2e/common/node/framework.go:23 + should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:59 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Containers + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:57:21.695 + Jan 14 03:57:21.695: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename containers 01/14/23 03:57:21.695 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:57:21.709 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:57:21.711 + [BeforeEach] [sig-node] Containers + test/e2e/framework/metrics/init/init.go:31 + [It] should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:59 + STEP: Creating a pod to test override arguments 01/14/23 03:57:21.713 + Jan 14 03:57:21.722: INFO: Waiting up to 5m0s for pod "client-containers-17be43ac-12c5-42f6-aeb0-8be9a8975d99" in namespace "containers-3894" to be "Succeeded or Failed" + Jan 14 03:57:21.725: INFO: Pod "client-containers-17be43ac-12c5-42f6-aeb0-8be9a8975d99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.958296ms + Jan 14 03:57:23.730: INFO: Pod "client-containers-17be43ac-12c5-42f6-aeb0-8be9a8975d99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00818885s + Jan 14 03:57:25.730: INFO: Pod "client-containers-17be43ac-12c5-42f6-aeb0-8be9a8975d99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007341894s + STEP: Saw pod success 01/14/23 03:57:25.73 + Jan 14 03:57:25.730: INFO: Pod "client-containers-17be43ac-12c5-42f6-aeb0-8be9a8975d99" satisfied condition "Succeeded or Failed" + Jan 14 03:57:25.733: INFO: Trying to get logs from node 10.0.1.106 pod client-containers-17be43ac-12c5-42f6-aeb0-8be9a8975d99 container agnhost-container: + STEP: delete the pod 01/14/23 03:57:25.738 + Jan 14 03:57:25.751: INFO: Waiting for pod client-containers-17be43ac-12c5-42f6-aeb0-8be9a8975d99 to disappear + Jan 14 03:57:25.754: INFO: Pod client-containers-17be43ac-12c5-42f6-aeb0-8be9a8975d99 no longer exists + [AfterEach] [sig-node] Containers + test/e2e/framework/node/init/init.go:32 + Jan 14 03:57:25.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Containers + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Containers + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Containers + tear down framework | framework.go:193 + STEP: Destroying namespace "containers-3894" for this suite. 
01/14/23 03:57:25.758 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSS +------------------------------ +[sig-cli] Kubectl client Guestbook application + should create and stop a working application [Conformance] + test/e2e/kubectl/kubectl.go:394 +[BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:57:25.764 +Jan 14 03:57:25.764: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename kubectl 01/14/23 03:57:25.765 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:57:25.779 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:57:25.781 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[It] should create and stop a working application [Conformance] + test/e2e/kubectl/kubectl.go:394 +STEP: creating all guestbook components 01/14/23 03:57:25.783 +Jan 14 03:57:25.783: INFO: apiVersion: v1 +kind: Service +metadata: + name: agnhost-replica + labels: + app: agnhost + role: replica + tier: backend +spec: + ports: + - port: 6379 + selector: + app: agnhost + role: replica + tier: backend + +Jan 14 03:57:25.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-6155 create -f -' +Jan 14 03:57:26.460: INFO: stderr: "" +Jan 14 03:57:26.460: INFO: stdout: "service/agnhost-replica created\n" +Jan 14 03:57:26.460: INFO: apiVersion: v1 +kind: Service +metadata: + name: agnhost-primary + labels: + app: agnhost + role: primary + tier: backend +spec: + ports: + - port: 6379 + targetPort: 6379 + selector: + app: agnhost + role: primary + tier: backend + +Jan 14 03:57:26.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-6155 create -f -' +Jan 14 03:57:27.137: INFO: stderr: "" +Jan 14 03:57:27.137: INFO: stdout: "service/agnhost-primary created\n" +Jan 14 03:57:27.137: INFO: apiVersion: v1 +kind: Service +metadata: + name: frontend + labels: + app: guestbook + tier: frontend +spec: + # if your cluster supports it, uncomment the following to automatically create + # an external load-balanced IP for the frontend service. 
+ # type: LoadBalancer + ports: + - port: 80 + selector: + app: guestbook + tier: frontend + +Jan 14 03:57:27.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-6155 create -f -' +Jan 14 03:57:27.316: INFO: stderr: "" +Jan 14 03:57:27.316: INFO: stdout: "service/frontend created\n" +Jan 14 03:57:27.316: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: frontend +spec: + replicas: 3 + selector: + matchLabels: + app: guestbook + tier: frontend + template: + metadata: + labels: + app: guestbook + tier: frontend + spec: + containers: + - name: guestbook-frontend + image: registry.k8s.io/e2e-test-images/agnhost:2.43 + args: [ "guestbook", "--backend-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 80 + +Jan 14 03:57:27.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-6155 create -f -' +Jan 14 03:57:27.489: INFO: stderr: "" +Jan 14 03:57:27.489: INFO: stdout: "deployment.apps/frontend created\n" +Jan 14 03:57:27.489: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: agnhost-primary +spec: + replicas: 1 + selector: + matchLabels: + app: agnhost + role: primary + tier: backend + template: + metadata: + labels: + app: agnhost + role: primary + tier: backend + spec: + containers: + - name: primary + image: registry.k8s.io/e2e-test-images/agnhost:2.43 + args: [ "guestbook", "--http-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6379 + +Jan 14 03:57:27.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-6155 create -f -' +Jan 14 03:57:27.667: INFO: stderr: "" +Jan 14 03:57:27.667: INFO: stdout: "deployment.apps/agnhost-primary created\n" +Jan 14 03:57:27.667: INFO: apiVersion: apps/v1 +kind: Deployment +metadata: + name: agnhost-replica +spec: + replicas: 2 + selector: + matchLabels: + app: agnhost + role: replica + tier: backend + template: + metadata: + labels: + app: agnhost + role: replica + tier: backend + spec: + containers: + - name: replica + image: registry.k8s.io/e2e-test-images/agnhost:2.43 + args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6379 + +Jan 14 03:57:27.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-6155 create -f -' +Jan 14 03:57:27.849: INFO: stderr: "" +Jan 14 03:57:27.849: INFO: stdout: "deployment.apps/agnhost-replica created\n" +STEP: validating guestbook app 01/14/23 03:57:27.849 +Jan 14 03:57:27.849: INFO: Waiting for all frontend pods to be Running. +Jan 14 03:57:32.900: INFO: Waiting for frontend to serve content. +Jan 14 03:57:32.912: INFO: Trying to add a new entry to the guestbook. +Jan 14 03:57:32.923: INFO: Verifying that added entry can be retrieved. +STEP: using delete to clean up resources 01/14/23 03:57:32.93 +Jan 14 03:57:32.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-6155 delete --grace-period=0 --force -f -' +Jan 14 03:57:33.013: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" +Jan 14 03:57:33.013: INFO: stdout: "service \"agnhost-replica\" force deleted\n" +STEP: using delete to clean up resources 01/14/23 03:57:33.013 +Jan 14 03:57:33.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-6155 delete --grace-period=0 --force -f -' +Jan 14 03:57:33.099: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Jan 14 03:57:33.099: INFO: stdout: "service \"agnhost-primary\" force deleted\n" +STEP: using delete to clean up resources 01/14/23 03:57:33.099 +Jan 14 03:57:33.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-6155 delete --grace-period=0 --force -f -' +Jan 14 03:57:33.172: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Jan 14 03:57:33.173: INFO: stdout: "service \"frontend\" force deleted\n" +STEP: using delete to clean up resources 01/14/23 03:57:33.173 +Jan 14 03:57:33.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-6155 delete --grace-period=0 --force -f -' +Jan 14 03:57:33.243: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Jan 14 03:57:33.243: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" +STEP: using delete to clean up resources 01/14/23 03:57:33.243 +Jan 14 03:57:33.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-6155 delete --grace-period=0 --force -f -' +Jan 14 03:57:33.316: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Jan 14 03:57:33.316: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" +STEP: using delete to clean up resources 01/14/23 03:57:33.316 +Jan 14 03:57:33.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-6155 delete --grace-period=0 --force -f -' +Jan 14 03:57:33.392: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Jan 14 03:57:33.392: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 +Jan 14 03:57:33.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 +STEP: Destroying namespace "kubectl-6155" for this suite. 
01/14/23 03:57:33.403 +------------------------------ +• [SLOW TEST] [7.650 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Guestbook application + test/e2e/kubectl/kubectl.go:369 + should create and stop a working application [Conformance] + test/e2e/kubectl/kubectl.go:394 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:57:25.764 + Jan 14 03:57:25.764: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename kubectl 01/14/23 03:57:25.765 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:57:25.779 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:57:25.781 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [It] should create and stop a working application [Conformance] + test/e2e/kubectl/kubectl.go:394 + STEP: creating all guestbook components 01/14/23 03:57:25.783 + Jan 14 03:57:25.783: INFO: apiVersion: v1 + kind: Service + metadata: + name: agnhost-replica + labels: + app: agnhost + role: replica + tier: backend + spec: + ports: + - port: 6379 + selector: + app: agnhost + role: replica + tier: backend + + Jan 14 03:57:25.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-6155 create -f -' + Jan 14 03:57:26.460: INFO: stderr: "" + Jan 14 03:57:26.460: INFO: stdout: "service/agnhost-replica created\n" + Jan 14 03:57:26.460: INFO: apiVersion: v1 + kind: Service + metadata: + name: agnhost-primary + labels: + app: agnhost + role: primary + tier: backend + spec: + ports: + - port: 6379 + targetPort: 6379 + selector: + app: agnhost + role: primary + tier: backend + + Jan 14 03:57:26.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-6155 create -f -' + Jan 14 03:57:27.137: INFO: stderr: "" + Jan 14 03:57:27.137: INFO: stdout: "service/agnhost-primary created\n" + Jan 14 03:57:27.137: INFO: apiVersion: v1 + kind: Service + metadata: + name: frontend + labels: + app: guestbook + tier: frontend + spec: + # if your cluster supports it, uncomment the following to automatically create + # an external load-balanced IP for the frontend service. 
+ # type: LoadBalancer + ports: + - port: 80 + selector: + app: guestbook + tier: frontend + + Jan 14 03:57:27.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-6155 create -f -' + Jan 14 03:57:27.316: INFO: stderr: "" + Jan 14 03:57:27.316: INFO: stdout: "service/frontend created\n" + Jan 14 03:57:27.316: INFO: apiVersion: apps/v1 + kind: Deployment + metadata: + name: frontend + spec: + replicas: 3 + selector: + matchLabels: + app: guestbook + tier: frontend + template: + metadata: + labels: + app: guestbook + tier: frontend + spec: + containers: + - name: guestbook-frontend + image: registry.k8s.io/e2e-test-images/agnhost:2.43 + args: [ "guestbook", "--backend-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 80 + + Jan 14 03:57:27.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-6155 create -f -' + Jan 14 03:57:27.489: INFO: stderr: "" + Jan 14 03:57:27.489: INFO: stdout: "deployment.apps/frontend created\n" + Jan 14 03:57:27.489: INFO: apiVersion: apps/v1 + kind: Deployment + metadata: + name: agnhost-primary + spec: + replicas: 1 + selector: + matchLabels: + app: agnhost + role: primary + tier: backend + template: + metadata: + labels: + app: agnhost + role: primary + tier: backend + spec: + containers: + - name: primary + image: registry.k8s.io/e2e-test-images/agnhost:2.43 + args: [ "guestbook", "--http-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6379 + + Jan 14 03:57:27.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-6155 create -f -' + Jan 14 03:57:27.667: INFO: stderr: "" + Jan 14 03:57:27.667: INFO: stdout: "deployment.apps/agnhost-primary created\n" + Jan 14 03:57:27.667: INFO: apiVersion: apps/v1 + kind: Deployment + metadata: + name: agnhost-replica + spec: + replicas: 2 + selector: + matchLabels: + app: agnhost + role: replica + tier: backend + template: + metadata: + labels: + app: agnhost + role: replica + tier: backend + spec: + containers: + - name: replica + image: registry.k8s.io/e2e-test-images/agnhost:2.43 + args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] + resources: + requests: + cpu: 100m + memory: 100Mi + ports: + - containerPort: 6379 + + Jan 14 03:57:27.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-6155 create -f -' + Jan 14 03:57:27.849: INFO: stderr: "" + Jan 14 03:57:27.849: INFO: stdout: "deployment.apps/agnhost-replica created\n" + STEP: validating guestbook app 01/14/23 03:57:27.849 + Jan 14 03:57:27.849: INFO: Waiting for all frontend pods to be Running. + Jan 14 03:57:32.900: INFO: Waiting for frontend to serve content. + Jan 14 03:57:32.912: INFO: Trying to add a new entry to the guestbook. + Jan 14 03:57:32.923: INFO: Verifying that added entry can be retrieved. + STEP: using delete to clean up resources 01/14/23 03:57:32.93 + Jan 14 03:57:32.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-6155 delete --grace-period=0 --force -f -' + Jan 14 03:57:33.013: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" + Jan 14 03:57:33.013: INFO: stdout: "service \"agnhost-replica\" force deleted\n" + STEP: using delete to clean up resources 01/14/23 03:57:33.013 + Jan 14 03:57:33.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-6155 delete --grace-period=0 --force -f -' + Jan 14 03:57:33.099: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" + Jan 14 03:57:33.099: INFO: stdout: "service \"agnhost-primary\" force deleted\n" + STEP: using delete to clean up resources 01/14/23 03:57:33.099 + Jan 14 03:57:33.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-6155 delete --grace-period=0 --force -f -' + Jan 14 03:57:33.172: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" + Jan 14 03:57:33.173: INFO: stdout: "service \"frontend\" force deleted\n" + STEP: using delete to clean up resources 01/14/23 03:57:33.173 + Jan 14 03:57:33.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-6155 delete --grace-period=0 --force -f -' + Jan 14 03:57:33.243: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" + Jan 14 03:57:33.243: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" + STEP: using delete to clean up resources 01/14/23 03:57:33.243 + Jan 14 03:57:33.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-6155 delete --grace-period=0 --force -f -' + Jan 14 03:57:33.316: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" + Jan 14 03:57:33.316: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" + STEP: using delete to clean up resources 01/14/23 03:57:33.316 + Jan 14 03:57:33.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-6155 delete --grace-period=0 --force -f -' + Jan 14 03:57:33.392: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" + Jan 14 03:57:33.392: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 + Jan 14 03:57:33.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 + STEP: Destroying namespace "kubectl-6155" for this suite. 
01/14/23 03:57:33.403 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-node] InitContainer [NodeConformance] + should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + test/e2e/common/node/init_container.go:458 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:57:33.414 +Jan 14 03:57:33.414: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename init-container 01/14/23 03:57:33.415 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:57:33.43 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:57:33.432 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/common/node/init_container.go:165 +[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + test/e2e/common/node/init_container.go:458 +STEP: creating the pod 01/14/23 03:57:33.434 +Jan 14 03:57:33.435: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/node/init/init.go:32 +Jan 14 03:57:37.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + tear down framework | framework.go:193 +STEP: Destroying namespace "init-container-4978" for this suite. 
01/14/23 03:57:37.228 +------------------------------ +• [3.820 seconds] +[sig-node] InitContainer [NodeConformance] +test/e2e/common/node/framework.go:23 + should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + test/e2e/common/node/init_container.go:458 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] InitContainer [NodeConformance] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:57:33.414 + Jan 14 03:57:33.414: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename init-container 01/14/23 03:57:33.415 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:57:33.43 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:57:33.432 + [BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/common/node/init_container.go:165 + [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + test/e2e/common/node/init_container.go:458 + STEP: creating the pod 01/14/23 03:57:33.434 + Jan 14 03:57:33.435: INFO: PodSpec: initContainers in spec.initContainers + [AfterEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/node/init/init.go:32 + Jan 14 03:57:37.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + tear down framework | framework.go:193 + STEP: Destroying namespace "init-container-4978" for this suite. 
01/14/23 03:57:37.228 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should allow opting out of API token automount [Conformance] + test/e2e/auth/service_accounts.go:161 +[BeforeEach] [sig-auth] ServiceAccounts + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:57:37.235 +Jan 14 03:57:37.235: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename svcaccounts 01/14/23 03:57:37.236 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:57:37.249 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:57:37.251 +[BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:31 +[It] should allow opting out of API token automount [Conformance] + test/e2e/auth/service_accounts.go:161 +Jan 14 03:57:37.267: INFO: created pod pod-service-account-defaultsa +Jan 14 03:57:37.267: INFO: pod pod-service-account-defaultsa service account token volume mount: true +Jan 14 03:57:37.274: INFO: created pod pod-service-account-mountsa +Jan 14 03:57:37.274: INFO: pod pod-service-account-mountsa service account token volume mount: true +Jan 14 03:57:37.283: INFO: created pod pod-service-account-nomountsa +Jan 14 03:57:37.283: INFO: pod pod-service-account-nomountsa service account token volume mount: false +Jan 14 03:57:37.294: INFO: created pod pod-service-account-defaultsa-mountspec +Jan 14 03:57:37.294: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true +Jan 14 03:57:37.303: INFO: created pod pod-service-account-mountsa-mountspec +Jan 14 03:57:37.303: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true +Jan 14 03:57:37.312: INFO: created pod pod-service-account-nomountsa-mountspec +Jan 14 03:57:37.312: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true +Jan 14 03:57:37.321: INFO: created pod pod-service-account-defaultsa-nomountspec +Jan 14 03:57:37.321: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false +Jan 14 03:57:37.332: INFO: created pod pod-service-account-mountsa-nomountspec +Jan 14 03:57:37.332: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false +Jan 14 03:57:37.339: INFO: created pod pod-service-account-nomountsa-nomountspec +Jan 14 03:57:37.339: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false +[AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/node/init/init.go:32 +Jan 14 03:57:37.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-auth] ServiceAccounts + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-auth] ServiceAccounts + tear down framework | framework.go:193 +STEP: Destroying namespace "svcaccounts-3416" for this suite. 
01/14/23 03:57:37.349 +------------------------------ +• [0.124 seconds] +[sig-auth] ServiceAccounts +test/e2e/auth/framework.go:23 + should allow opting out of API token automount [Conformance] + test/e2e/auth/service_accounts.go:161 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-auth] ServiceAccounts + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:57:37.235 + Jan 14 03:57:37.235: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename svcaccounts 01/14/23 03:57:37.236 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:57:37.249 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:57:37.251 + [BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:31 + [It] should allow opting out of API token automount [Conformance] + test/e2e/auth/service_accounts.go:161 + Jan 14 03:57:37.267: INFO: created pod pod-service-account-defaultsa + Jan 14 03:57:37.267: INFO: pod pod-service-account-defaultsa service account token volume mount: true + Jan 14 03:57:37.274: INFO: created pod pod-service-account-mountsa + Jan 14 03:57:37.274: INFO: pod pod-service-account-mountsa service account token volume mount: true + Jan 14 03:57:37.283: INFO: created pod pod-service-account-nomountsa + Jan 14 03:57:37.283: INFO: pod pod-service-account-nomountsa service account token volume mount: false + Jan 14 03:57:37.294: INFO: created pod pod-service-account-defaultsa-mountspec + Jan 14 03:57:37.294: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true + Jan 14 03:57:37.303: INFO: created pod pod-service-account-mountsa-mountspec + Jan 14 03:57:37.303: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true + Jan 14 03:57:37.312: INFO: created pod pod-service-account-nomountsa-mountspec + Jan 14 03:57:37.312: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true + Jan 14 03:57:37.321: INFO: created pod pod-service-account-defaultsa-nomountspec + Jan 14 03:57:37.321: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false + Jan 14 03:57:37.332: INFO: created pod pod-service-account-mountsa-nomountspec + Jan 14 03:57:37.332: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false + Jan 14 03:57:37.339: INFO: created pod pod-service-account-nomountsa-nomountspec + Jan 14 03:57:37.339: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false + [AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/node/init/init.go:32 + Jan 14 03:57:37.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-auth] ServiceAccounts + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-auth] ServiceAccounts + tear down framework | framework.go:193 + STEP: Destroying namespace "svcaccounts-3416" for this suite. 
01/14/23 03:57:37.349 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Security Context when creating containers with AllowPrivilegeEscalation + should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:609 +[BeforeEach] [sig-node] Security Context + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:57:37.36 +Jan 14 03:57:37.360: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename security-context-test 01/14/23 03:57:37.361 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:57:37.375 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:57:37.377 +[BeforeEach] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Security Context + test/e2e/common/node/security_context.go:50 +[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:609 +Jan 14 03:57:37.388: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-a5a3f49e-078a-4a18-9751-437f14c82016" in namespace "security-context-test-4991" to be "Succeeded or Failed" +Jan 14 03:57:37.391: INFO: Pod "alpine-nnp-false-a5a3f49e-078a-4a18-9751-437f14c82016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.886466ms +Jan 14 03:57:39.395: INFO: Pod "alpine-nnp-false-a5a3f49e-078a-4a18-9751-437f14c82016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007072994s +Jan 14 03:57:41.396: INFO: Pod "alpine-nnp-false-a5a3f49e-078a-4a18-9751-437f14c82016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007958476s +Jan 14 03:57:41.396: INFO: Pod "alpine-nnp-false-a5a3f49e-078a-4a18-9751-437f14c82016" satisfied condition "Succeeded or Failed" +[AfterEach] [sig-node] Security Context + test/e2e/framework/node/init/init.go:32 +Jan 14 03:57:41.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Security Context + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Security Context + tear down framework | framework.go:193 +STEP: Destroying namespace "security-context-test-4991" for this suite. 
01/14/23 03:57:41.413 +------------------------------ +• [4.058 seconds] +[sig-node] Security Context +test/e2e/common/node/framework.go:23 + when creating containers with AllowPrivilegeEscalation + test/e2e/common/node/security_context.go:555 + should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:609 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Security Context + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:57:37.36 + Jan 14 03:57:37.360: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename security-context-test 01/14/23 03:57:37.361 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:57:37.375 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:57:37.377 + [BeforeEach] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Security Context + test/e2e/common/node/security_context.go:50 + [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:609 + Jan 14 03:57:37.388: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-a5a3f49e-078a-4a18-9751-437f14c82016" in namespace "security-context-test-4991" to be "Succeeded or Failed" + Jan 14 03:57:37.391: INFO: Pod "alpine-nnp-false-a5a3f49e-078a-4a18-9751-437f14c82016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.886466ms + Jan 14 03:57:39.395: INFO: Pod "alpine-nnp-false-a5a3f49e-078a-4a18-9751-437f14c82016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007072994s + Jan 14 03:57:41.396: INFO: Pod "alpine-nnp-false-a5a3f49e-078a-4a18-9751-437f14c82016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007958476s + Jan 14 03:57:41.396: INFO: Pod "alpine-nnp-false-a5a3f49e-078a-4a18-9751-437f14c82016" satisfied condition "Succeeded or Failed" + [AfterEach] [sig-node] Security Context + test/e2e/framework/node/init/init.go:32 + Jan 14 03:57:41.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Security Context + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Security Context + tear down framework | framework.go:193 + STEP: Destroying namespace "security-context-test-4991" for this suite. 
01/14/23 03:57:41.413 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:67 +[BeforeEach] [sig-storage] Projected secret + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:57:41.419 +Jan 14 03:57:41.419: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename projected 01/14/23 03:57:41.419 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:57:41.433 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:57:41.436 +[BeforeEach] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:67 +STEP: Creating projection with secret that has name projected-secret-test-8c5af846-ca5c-482a-9b1f-dff183176ee2 01/14/23 03:57:41.438 +STEP: Creating a pod to test consume secrets 01/14/23 03:57:41.442 +Jan 14 03:57:41.452: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8331cb56-fec4-45fe-9810-82b250760b96" in namespace "projected-1059" to be "Succeeded or Failed" +Jan 14 03:57:41.457: INFO: Pod "pod-projected-secrets-8331cb56-fec4-45fe-9810-82b250760b96": Phase="Pending", Reason="", readiness=false. Elapsed: 4.901011ms +Jan 14 03:57:43.464: INFO: Pod "pod-projected-secrets-8331cb56-fec4-45fe-9810-82b250760b96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011839322s +Jan 14 03:57:45.461: INFO: Pod "pod-projected-secrets-8331cb56-fec4-45fe-9810-82b250760b96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009446684s +STEP: Saw pod success 01/14/23 03:57:45.462 +Jan 14 03:57:45.462: INFO: Pod "pod-projected-secrets-8331cb56-fec4-45fe-9810-82b250760b96" satisfied condition "Succeeded or Failed" +Jan 14 03:57:45.465: INFO: Trying to get logs from node 10.0.1.99 pod pod-projected-secrets-8331cb56-fec4-45fe-9810-82b250760b96 container projected-secret-volume-test: +STEP: delete the pod 01/14/23 03:57:45.47 +Jan 14 03:57:45.483: INFO: Waiting for pod pod-projected-secrets-8331cb56-fec4-45fe-9810-82b250760b96 to disappear +Jan 14 03:57:45.486: INFO: Pod pod-projected-secrets-8331cb56-fec4-45fe-9810-82b250760b96 no longer exists +[AfterEach] [sig-storage] Projected secret + test/e2e/framework/node/init/init.go:32 +Jan 14 03:57:45.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected secret + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected secret + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-1059" for this suite. 
01/14/23 03:57:45.49 +------------------------------ +• [4.077 seconds] +[sig-storage] Projected secret +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:67 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected secret + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:57:41.419 + Jan 14 03:57:41.419: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename projected 01/14/23 03:57:41.419 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:57:41.433 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:57:41.436 + [BeforeEach] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:67 + STEP: Creating projection with secret that has name projected-secret-test-8c5af846-ca5c-482a-9b1f-dff183176ee2 01/14/23 03:57:41.438 + STEP: Creating a pod to test consume secrets 01/14/23 03:57:41.442 + Jan 14 03:57:41.452: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8331cb56-fec4-45fe-9810-82b250760b96" in namespace "projected-1059" to be "Succeeded or Failed" + Jan 14 03:57:41.457: INFO: Pod "pod-projected-secrets-8331cb56-fec4-45fe-9810-82b250760b96": Phase="Pending", Reason="", readiness=false. Elapsed: 4.901011ms + Jan 14 03:57:43.464: INFO: Pod "pod-projected-secrets-8331cb56-fec4-45fe-9810-82b250760b96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011839322s + Jan 14 03:57:45.461: INFO: Pod "pod-projected-secrets-8331cb56-fec4-45fe-9810-82b250760b96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009446684s + STEP: Saw pod success 01/14/23 03:57:45.462 + Jan 14 03:57:45.462: INFO: Pod "pod-projected-secrets-8331cb56-fec4-45fe-9810-82b250760b96" satisfied condition "Succeeded or Failed" + Jan 14 03:57:45.465: INFO: Trying to get logs from node 10.0.1.99 pod pod-projected-secrets-8331cb56-fec4-45fe-9810-82b250760b96 container projected-secret-volume-test: + STEP: delete the pod 01/14/23 03:57:45.47 + Jan 14 03:57:45.483: INFO: Waiting for pod pod-projected-secrets-8331cb56-fec4-45fe-9810-82b250760b96 to disappear + Jan 14 03:57:45.486: INFO: Pod pod-projected-secrets-8331cb56-fec4-45fe-9810-82b250760b96 no longer exists + [AfterEach] [sig-storage] Projected secret + test/e2e/framework/node/init/init.go:32 + Jan 14 03:57:45.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected secret + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected secret + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-1059" for this suite. 
01/14/23 03:57:45.49 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should apply changes to a resourcequota status [Conformance] + test/e2e/apimachinery/resource_quota.go:1010 +[BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:57:45.497 +Jan 14 03:57:45.497: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename resourcequota 01/14/23 03:57:45.497 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:57:45.51 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:57:45.512 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 +[It] should apply changes to a resourcequota status [Conformance] + test/e2e/apimachinery/resource_quota.go:1010 +STEP: Creating resourceQuota "e2e-rq-status-frkmc" 01/14/23 03:57:45.517 +Jan 14 03:57:45.524: INFO: Resource quota "e2e-rq-status-frkmc" reports spec: hard cpu limit of 500m +Jan 14 03:57:45.524: INFO: Resource quota "e2e-rq-status-frkmc" reports spec: hard memory limit of 500Mi +STEP: Updating resourceQuota "e2e-rq-status-frkmc" /status 01/14/23 03:57:45.524 +STEP: Confirm /status for "e2e-rq-status-frkmc" resourceQuota via watch 01/14/23 03:57:45.531 +Jan 14 03:57:45.532: INFO: observed resourceQuota "e2e-rq-status-frkmc" in namespace "resourcequota-3056" with hard status: v1.ResourceList(nil) +Jan 14 03:57:45.532: INFO: Found resourceQuota "e2e-rq-status-frkmc" in namespace "resourcequota-3056" with hard status: v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:500, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}} +Jan 14 03:57:45.532: INFO: ResourceQuota "e2e-rq-status-frkmc" /status was updated +STEP: Patching hard spec values for cpu & memory 01/14/23 03:57:45.534 +Jan 14 03:57:45.539: INFO: Resource quota "e2e-rq-status-frkmc" reports spec: hard cpu limit of 1 +Jan 14 03:57:45.539: INFO: Resource quota "e2e-rq-status-frkmc" reports spec: hard memory limit of 1Gi +STEP: Patching "e2e-rq-status-frkmc" /status 01/14/23 03:57:45.539 +STEP: Confirm /status for "e2e-rq-status-frkmc" resourceQuota via watch 01/14/23 03:57:45.542 +Jan 14 03:57:45.543: INFO: observed resourceQuota "e2e-rq-status-frkmc" in namespace "resourcequota-3056" with hard status: v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:500, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}} +Jan 14 03:57:45.543: INFO: Found resourceQuota "e2e-rq-status-frkmc" in namespace "resourcequota-3056" with hard status: v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:1, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}} +Jan 14 03:57:45.543: INFO: ResourceQuota "e2e-rq-status-frkmc" /status was patched +STEP: Get "e2e-rq-status-frkmc" /status 01/14/23 
03:57:45.543 +Jan 14 03:57:45.546: INFO: Resourcequota "e2e-rq-status-frkmc" reports status: hard cpu of 1 +Jan 14 03:57:45.546: INFO: Resourcequota "e2e-rq-status-frkmc" reports status: hard memory of 1Gi +STEP: Repatching "e2e-rq-status-frkmc" /status before checking Spec is unchanged 01/14/23 03:57:45.548 +Jan 14 03:57:45.552: INFO: Resourcequota "e2e-rq-status-frkmc" reports status: hard cpu of 2 +Jan 14 03:57:45.552: INFO: Resourcequota "e2e-rq-status-frkmc" reports status: hard memory of 2Gi +Jan 14 03:57:45.553: INFO: Found resourceQuota "e2e-rq-status-frkmc" in namespace "resourcequota-3056" with hard status: v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:2, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"2", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:2147483648, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"2Gi", Format:"BinarySI"}} +Jan 14 03:57:50.559: INFO: ResourceQuota "e2e-rq-status-frkmc" Spec was unchanged and /status reset +[AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/node/init/init.go:32 +Jan 14 03:57:50.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + tear down framework | framework.go:193 +STEP: Destroying namespace "resourcequota-3056" for this suite. 01/14/23 03:57:50.564 +------------------------------ +• [SLOW TEST] [5.072 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should apply changes to a resourcequota status [Conformance] + test/e2e/apimachinery/resource_quota.go:1010 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:57:45.497 + Jan 14 03:57:45.497: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename resourcequota 01/14/23 03:57:45.497 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:57:45.51 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:57:45.512 + [BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 + [It] should apply changes to a resourcequota status [Conformance] + test/e2e/apimachinery/resource_quota.go:1010 + STEP: Creating resourceQuota "e2e-rq-status-frkmc" 01/14/23 03:57:45.517 + Jan 14 03:57:45.524: INFO: Resource quota "e2e-rq-status-frkmc" reports spec: hard cpu limit of 500m + Jan 14 03:57:45.524: INFO: Resource quota "e2e-rq-status-frkmc" reports spec: hard memory limit of 500Mi + STEP: Updating resourceQuota "e2e-rq-status-frkmc" /status 01/14/23 03:57:45.524 + STEP: Confirm /status for "e2e-rq-status-frkmc" resourceQuota via watch 01/14/23 03:57:45.531 + Jan 14 03:57:45.532: INFO: observed resourceQuota "e2e-rq-status-frkmc" in namespace "resourcequota-3056" with hard status: v1.ResourceList(nil) + Jan 14 03:57:45.532: INFO: Found resourceQuota "e2e-rq-status-frkmc" in namespace "resourcequota-3056" with hard status: v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:500, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, 
d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}} + Jan 14 03:57:45.532: INFO: ResourceQuota "e2e-rq-status-frkmc" /status was updated + STEP: Patching hard spec values for cpu & memory 01/14/23 03:57:45.534 + Jan 14 03:57:45.539: INFO: Resource quota "e2e-rq-status-frkmc" reports spec: hard cpu limit of 1 + Jan 14 03:57:45.539: INFO: Resource quota "e2e-rq-status-frkmc" reports spec: hard memory limit of 1Gi + STEP: Patching "e2e-rq-status-frkmc" /status 01/14/23 03:57:45.539 + STEP: Confirm /status for "e2e-rq-status-frkmc" resourceQuota via watch 01/14/23 03:57:45.542 + Jan 14 03:57:45.543: INFO: observed resourceQuota "e2e-rq-status-frkmc" in namespace "resourcequota-3056" with hard status: v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:500, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}} + Jan 14 03:57:45.543: INFO: Found resourceQuota "e2e-rq-status-frkmc" in namespace "resourcequota-3056" with hard status: v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:1, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}} + Jan 14 03:57:45.543: INFO: ResourceQuota "e2e-rq-status-frkmc" /status was patched + STEP: Get "e2e-rq-status-frkmc" /status 01/14/23 03:57:45.543 + Jan 14 03:57:45.546: INFO: Resourcequota "e2e-rq-status-frkmc" reports status: hard cpu of 1 + Jan 14 03:57:45.546: INFO: Resourcequota "e2e-rq-status-frkmc" reports status: hard memory of 1Gi + STEP: Repatching "e2e-rq-status-frkmc" /status before checking Spec is unchanged 01/14/23 03:57:45.548 + Jan 14 03:57:45.552: INFO: Resourcequota "e2e-rq-status-frkmc" reports status: hard cpu of 2 + Jan 14 03:57:45.552: INFO: Resourcequota "e2e-rq-status-frkmc" reports status: hard memory of 2Gi + Jan 14 03:57:45.553: INFO: Found resourceQuota "e2e-rq-status-frkmc" in namespace "resourcequota-3056" with hard status: v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:2, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"2", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:2147483648, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"2Gi", Format:"BinarySI"}} + Jan 14 03:57:50.559: INFO: ResourceQuota "e2e-rq-status-frkmc" Spec was unchanged and /status reset + [AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/node/init/init.go:32 + Jan 14 03:57:50.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + tear down framework | framework.go:193 + STEP: Destroying namespace "resourcequota-3056" for this suite. 
01/14/23 03:57:50.564 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + pod should support shared volumes between containers [Conformance] + test/e2e/common/storage/empty_dir.go:227 +[BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:57:50.569 +Jan 14 03:57:50.569: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename emptydir 01/14/23 03:57:50.57 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:57:50.584 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:57:50.586 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 +[It] pod should support shared volumes between containers [Conformance] + test/e2e/common/storage/empty_dir.go:227 +STEP: Creating Pod 01/14/23 03:57:50.588 +Jan 14 03:57:50.597: INFO: Waiting up to 5m0s for pod "pod-sharedvolume-34f8dc57-b8f2-4dd8-8e11-1b2b6fb1130f" in namespace "emptydir-455" to be "running" +Jan 14 03:57:50.601: INFO: Pod "pod-sharedvolume-34f8dc57-b8f2-4dd8-8e11-1b2b6fb1130f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.73561ms +Jan 14 03:57:52.608: INFO: Pod "pod-sharedvolume-34f8dc57-b8f2-4dd8-8e11-1b2b6fb1130f": Phase="Running", Reason="", readiness=false. Elapsed: 2.01068878s +Jan 14 03:57:52.608: INFO: Pod "pod-sharedvolume-34f8dc57-b8f2-4dd8-8e11-1b2b6fb1130f" satisfied condition "running" +STEP: Reading file content from the nginx-container 01/14/23 03:57:52.608 +Jan 14 03:57:52.608: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-455 PodName:pod-sharedvolume-34f8dc57-b8f2-4dd8-8e11-1b2b6fb1130f ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jan 14 03:57:52.608: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +Jan 14 03:57:52.609: INFO: ExecWithOptions: Clientset creation +Jan 14 03:57:52.609: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/emptydir-455/pods/pod-sharedvolume-34f8dc57-b8f2-4dd8-8e11-1b2b6fb1130f/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fusr%2Fshare%2Fvolumeshare%2Fshareddata.txt&container=busybox-main-container&container=busybox-main-container&stderr=true&stdout=true) +Jan 14 03:57:52.664: INFO: Exec stderr: "" +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 +Jan 14 03:57:52.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 +STEP: Destroying namespace "emptydir-455" for this suite. 
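+------------------------------
+The emptydir test above runs two containers that share one emptyDir volume. A minimal sketch of that pod shape follows; the images, container names, and file contents here are illustrative, not copied from the suite.
+
+package main
+
+import (
+    corev1 "k8s.io/api/core/v1"
+    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+)
+
+// sharedVolumePod builds a pod with two containers mounting the same
+// emptyDir: one writes a file, the other can read it back.
+func sharedVolumePod() *corev1.Pod {
+    return &corev1.Pod{
+        ObjectMeta: metav1.ObjectMeta{Name: "pod-sharedvolume-example"},
+        Spec: corev1.PodSpec{
+            Volumes: []corev1.Volume{{
+                Name:         "shared-data",
+                VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
+            }},
+            Containers: []corev1.Container{
+                {
+                    Name:    "writer",
+                    Image:   "busybox",
+                    Command: []string{"/bin/sh", "-c", "echo hello > /usr/share/volumeshare/shareddata.txt && sleep 3600"},
+                    VolumeMounts: []corev1.VolumeMount{{Name: "shared-data", MountPath: "/usr/share/volumeshare"}},
+                },
+                {
+                    Name:    "reader",
+                    Image:   "busybox",
+                    Command: []string{"/bin/sh", "-c", "sleep 3600"},
+                    VolumeMounts: []corev1.VolumeMount{{Name: "shared-data", MountPath: "/usr/share/volumeshare"}},
+                },
+            },
+        },
+    }
+}
+
+func main() { _ = sharedVolumePod() }
+------------------------------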
01/14/23 03:57:52.669 +------------------------------ +• [2.106 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + pod should support shared volumes between containers [Conformance] + test/e2e/common/storage/empty_dir.go:227 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:57:50.569 + Jan 14 03:57:50.569: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename emptydir 01/14/23 03:57:50.57 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:57:50.584 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:57:50.586 + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 + [It] pod should support shared volumes between containers [Conformance] + test/e2e/common/storage/empty_dir.go:227 + STEP: Creating Pod 01/14/23 03:57:50.588 + Jan 14 03:57:50.597: INFO: Waiting up to 5m0s for pod "pod-sharedvolume-34f8dc57-b8f2-4dd8-8e11-1b2b6fb1130f" in namespace "emptydir-455" to be "running" + Jan 14 03:57:50.601: INFO: Pod "pod-sharedvolume-34f8dc57-b8f2-4dd8-8e11-1b2b6fb1130f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.73561ms + Jan 14 03:57:52.608: INFO: Pod "pod-sharedvolume-34f8dc57-b8f2-4dd8-8e11-1b2b6fb1130f": Phase="Running", Reason="", readiness=false. Elapsed: 2.01068878s + Jan 14 03:57:52.608: INFO: Pod "pod-sharedvolume-34f8dc57-b8f2-4dd8-8e11-1b2b6fb1130f" satisfied condition "running" + STEP: Reading file content from the nginx-container 01/14/23 03:57:52.608 + Jan 14 03:57:52.608: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-455 PodName:pod-sharedvolume-34f8dc57-b8f2-4dd8-8e11-1b2b6fb1130f ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jan 14 03:57:52.608: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + Jan 14 03:57:52.609: INFO: ExecWithOptions: Clientset creation + Jan 14 03:57:52.609: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/emptydir-455/pods/pod-sharedvolume-34f8dc57-b8f2-4dd8-8e11-1b2b6fb1130f/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fusr%2Fshare%2Fvolumeshare%2Fshareddata.txt&container=busybox-main-container&container=busybox-main-container&stderr=true&stdout=true) + Jan 14 03:57:52.664: INFO: Exec stderr: "" + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 + Jan 14 03:57:52.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 + STEP: Destroying namespace "emptydir-455" for this suite. 
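+------------------------------
+The "ExecWithOptions: execute(POST .../exec?...)" lines above are the framework driving the pods/exec subresource. A sketch of the same call with client-go's remotecommand package (assumes client-go v0.26+ for StreamWithContext; kubeconfig path and pod name are placeholders):
+
+package main
+
+import (
+    "bytes"
+    "context"
+    "fmt"
+
+    corev1 "k8s.io/api/core/v1"
+    "k8s.io/client-go/kubernetes"
+    "k8s.io/client-go/kubernetes/scheme"
+    "k8s.io/client-go/tools/clientcmd"
+    "k8s.io/client-go/tools/remotecommand"
+)
+
+func main() {
+    config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
+    if err != nil {
+        panic(err)
+    }
+    client := kubernetes.NewForConfigOrDie(config)
+
+    // Build the same POST .../pods/<name>/exec request the framework logs.
+    req := client.CoreV1().RESTClient().Post().
+        Resource("pods").Namespace("emptydir-455").
+        Name("pod-sharedvolume-example").SubResource("exec").
+        VersionedParams(&corev1.PodExecOptions{
+            Container: "busybox-main-container",
+            Command:   []string{"/bin/sh", "-c", "cat /usr/share/volumeshare/shareddata.txt"},
+            Stdout:    true,
+            Stderr:    true,
+        }, scheme.ParameterCodec)
+
+    exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
+    if err != nil {
+        panic(err)
+    }
+    var stdout, stderr bytes.Buffer
+    if err := exec.StreamWithContext(context.TODO(),
+        remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
+        panic(err)
+    }
+    fmt.Printf("stdout=%q stderr=%q\n", stdout.String(), stderr.String())
+}
+------------------------------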
01/14/23 03:57:52.669 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with configmap pod with mountPath of existing file [Conformance] + test/e2e/storage/subpath.go:80 +[BeforeEach] [sig-storage] Subpath + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:57:52.675 +Jan 14 03:57:52.675: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename subpath 01/14/23 03:57:52.676 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:57:52.694 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:57:52.697 +[BeforeEach] [sig-storage] Subpath + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 +STEP: Setting up data 01/14/23 03:57:52.699 +[It] should support subpaths with configmap pod with mountPath of existing file [Conformance] + test/e2e/storage/subpath.go:80 +STEP: Creating pod pod-subpath-test-configmap-dxll 01/14/23 03:57:52.711 +STEP: Creating a pod to test atomic-volume-subpath 01/14/23 03:57:52.711 +Jan 14 03:57:52.721: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-dxll" in namespace "subpath-7649" to be "Succeeded or Failed" +Jan 14 03:57:52.724: INFO: Pod "pod-subpath-test-configmap-dxll": Phase="Pending", Reason="", readiness=false. Elapsed: 2.884057ms +Jan 14 03:57:54.729: INFO: Pod "pod-subpath-test-configmap-dxll": Phase="Running", Reason="", readiness=true. Elapsed: 2.008226074s +Jan 14 03:57:56.729: INFO: Pod "pod-subpath-test-configmap-dxll": Phase="Running", Reason="", readiness=true. Elapsed: 4.007585217s +Jan 14 03:57:58.729: INFO: Pod "pod-subpath-test-configmap-dxll": Phase="Running", Reason="", readiness=true. Elapsed: 6.008025108s +Jan 14 03:58:00.729: INFO: Pod "pod-subpath-test-configmap-dxll": Phase="Running", Reason="", readiness=true. Elapsed: 8.007784908s +Jan 14 03:58:02.729: INFO: Pod "pod-subpath-test-configmap-dxll": Phase="Running", Reason="", readiness=true. Elapsed: 10.007527458s +Jan 14 03:58:04.728: INFO: Pod "pod-subpath-test-configmap-dxll": Phase="Running", Reason="", readiness=true. Elapsed: 12.007278813s +Jan 14 03:58:06.730: INFO: Pod "pod-subpath-test-configmap-dxll": Phase="Running", Reason="", readiness=true. Elapsed: 14.008618075s +Jan 14 03:58:08.730: INFO: Pod "pod-subpath-test-configmap-dxll": Phase="Running", Reason="", readiness=true. Elapsed: 16.008827696s +Jan 14 03:58:10.729: INFO: Pod "pod-subpath-test-configmap-dxll": Phase="Running", Reason="", readiness=true. Elapsed: 18.008010395s +Jan 14 03:58:12.730: INFO: Pod "pod-subpath-test-configmap-dxll": Phase="Running", Reason="", readiness=true. Elapsed: 20.008736829s +Jan 14 03:58:14.728: INFO: Pod "pod-subpath-test-configmap-dxll": Phase="Running", Reason="", readiness=false. Elapsed: 22.00696117s +Jan 14 03:58:16.730: INFO: Pod "pod-subpath-test-configmap-dxll": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.009223054s +STEP: Saw pod success 01/14/23 03:58:16.73 +Jan 14 03:58:16.730: INFO: Pod "pod-subpath-test-configmap-dxll" satisfied condition "Succeeded or Failed" +Jan 14 03:58:16.733: INFO: Trying to get logs from node 10.0.1.212 pod pod-subpath-test-configmap-dxll container test-container-subpath-configmap-dxll: +STEP: delete the pod 01/14/23 03:58:16.745 +Jan 14 03:58:16.761: INFO: Waiting for pod pod-subpath-test-configmap-dxll to disappear +Jan 14 03:58:16.764: INFO: Pod pod-subpath-test-configmap-dxll no longer exists +STEP: Deleting pod pod-subpath-test-configmap-dxll 01/14/23 03:58:16.764 +Jan 14 03:58:16.764: INFO: Deleting pod "pod-subpath-test-configmap-dxll" in namespace "subpath-7649" +[AfterEach] [sig-storage] Subpath + test/e2e/framework/node/init/init.go:32 +Jan 14 03:58:16.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Subpath + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Subpath + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Subpath + tear down framework | framework.go:193 +STEP: Destroying namespace "subpath-7649" for this suite. 01/14/23 03:58:16.771 +------------------------------ +• [SLOW TEST] [24.102 seconds] +[sig-storage] Subpath +test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + test/e2e/storage/subpath.go:36 + should support subpaths with configmap pod with mountPath of existing file [Conformance] + test/e2e/storage/subpath.go:80 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Subpath + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:57:52.675 + Jan 14 03:57:52.675: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename subpath 01/14/23 03:57:52.676 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:57:52.694 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:57:52.697 + [BeforeEach] [sig-storage] Subpath + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 + STEP: Setting up data 01/14/23 03:57:52.699 + [It] should support subpaths with configmap pod with mountPath of existing file [Conformance] + test/e2e/storage/subpath.go:80 + STEP: Creating pod pod-subpath-test-configmap-dxll 01/14/23 03:57:52.711 + STEP: Creating a pod to test atomic-volume-subpath 01/14/23 03:57:52.711 + Jan 14 03:57:52.721: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-dxll" in namespace "subpath-7649" to be "Succeeded or Failed" + Jan 14 03:57:52.724: INFO: Pod "pod-subpath-test-configmap-dxll": Phase="Pending", Reason="", readiness=false. Elapsed: 2.884057ms + Jan 14 03:57:54.729: INFO: Pod "pod-subpath-test-configmap-dxll": Phase="Running", Reason="", readiness=true. Elapsed: 2.008226074s + Jan 14 03:57:56.729: INFO: Pod "pod-subpath-test-configmap-dxll": Phase="Running", Reason="", readiness=true. Elapsed: 4.007585217s + Jan 14 03:57:58.729: INFO: Pod "pod-subpath-test-configmap-dxll": Phase="Running", Reason="", readiness=true. Elapsed: 6.008025108s + Jan 14 03:58:00.729: INFO: Pod "pod-subpath-test-configmap-dxll": Phase="Running", Reason="", readiness=true. Elapsed: 8.007784908s + Jan 14 03:58:02.729: INFO: Pod "pod-subpath-test-configmap-dxll": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.007527458s + Jan 14 03:58:04.728: INFO: Pod "pod-subpath-test-configmap-dxll": Phase="Running", Reason="", readiness=true. Elapsed: 12.007278813s + Jan 14 03:58:06.730: INFO: Pod "pod-subpath-test-configmap-dxll": Phase="Running", Reason="", readiness=true. Elapsed: 14.008618075s + Jan 14 03:58:08.730: INFO: Pod "pod-subpath-test-configmap-dxll": Phase="Running", Reason="", readiness=true. Elapsed: 16.008827696s + Jan 14 03:58:10.729: INFO: Pod "pod-subpath-test-configmap-dxll": Phase="Running", Reason="", readiness=true. Elapsed: 18.008010395s + Jan 14 03:58:12.730: INFO: Pod "pod-subpath-test-configmap-dxll": Phase="Running", Reason="", readiness=true. Elapsed: 20.008736829s + Jan 14 03:58:14.728: INFO: Pod "pod-subpath-test-configmap-dxll": Phase="Running", Reason="", readiness=false. Elapsed: 22.00696117s + Jan 14 03:58:16.730: INFO: Pod "pod-subpath-test-configmap-dxll": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.009223054s + STEP: Saw pod success 01/14/23 03:58:16.73 + Jan 14 03:58:16.730: INFO: Pod "pod-subpath-test-configmap-dxll" satisfied condition "Succeeded or Failed" + Jan 14 03:58:16.733: INFO: Trying to get logs from node 10.0.1.212 pod pod-subpath-test-configmap-dxll container test-container-subpath-configmap-dxll: + STEP: delete the pod 01/14/23 03:58:16.745 + Jan 14 03:58:16.761: INFO: Waiting for pod pod-subpath-test-configmap-dxll to disappear + Jan 14 03:58:16.764: INFO: Pod pod-subpath-test-configmap-dxll no longer exists + STEP: Deleting pod pod-subpath-test-configmap-dxll 01/14/23 03:58:16.764 + Jan 14 03:58:16.764: INFO: Deleting pod "pod-subpath-test-configmap-dxll" in namespace "subpath-7649" + [AfterEach] [sig-storage] Subpath + test/e2e/framework/node/init/init.go:32 + Jan 14 03:58:16.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Subpath + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Subpath + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Subpath + tear down framework | framework.go:193 + STEP: Destroying namespace "subpath-7649" for this suite. 
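+------------------------------
+The subpath test mounts a single configMap key at the mountPath of an existing file via subPath. A minimal sketch of the relevant volume and volumeMount wiring; the configMap name, key, and path are illustrative:
+
+package main
+
+import (
+    corev1 "k8s.io/api/core/v1"
+)
+
+// subPathMount returns the volume/volumeMount pair the test exercises:
+// a configMap volume whose single key is projected at one file path.
+func subPathMount() ([]corev1.Volume, []corev1.VolumeMount) {
+    vols := []corev1.Volume{{
+        Name: "configmap-vol",
+        VolumeSource: corev1.VolumeSource{
+            ConfigMap: &corev1.ConfigMapVolumeSource{
+                LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"}, // illustrative
+            },
+        },
+    }}
+    mounts := []corev1.VolumeMount{{
+        Name:      "configmap-vol",
+        MountPath: "/etc/existing-file.conf", // mountPath of an existing file
+        SubPath:   "data-key",                // single key projected at that path
+    }}
+    return vols, mounts
+}
+
+func main() { _, _ = subPathMount() }
+------------------------------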
01/14/23 03:58:16.771 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] RuntimeClass + should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:104 +[BeforeEach] [sig-node] RuntimeClass + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:58:16.778 +Jan 14 03:58:16.778: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename runtimeclass 01/14/23 03:58:16.778 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:58:16.791 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:58:16.793 +[BeforeEach] [sig-node] RuntimeClass + test/e2e/framework/metrics/init/init.go:31 +[It] should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:104 +Jan 14 03:58:16.807: INFO: Waiting up to 1m20s for at least 1 pods in namespace runtimeclass-7686 to be scheduled +Jan 14 03:58:16.810: INFO: 1 pods are not scheduled: [runtimeclass-7686/test-runtimeclass-runtimeclass-7686-preconfigured-handler-f5tx2(58a2c0f0-f0b2-45e2-bf36-2ab7693a9bf7)] +[AfterEach] [sig-node] RuntimeClass + test/e2e/framework/node/init/init.go:32 +Jan 14 03:58:18.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] RuntimeClass + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] RuntimeClass + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] RuntimeClass + tear down framework | framework.go:193 +STEP: Destroying namespace "runtimeclass-7686" for this suite. 
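+------------------------------
+The RuntimeClass test that follows schedules a pod requesting a RuntimeClass that defines no PodOverhead. A sketch of the two objects involved; the handler value and names are illustrative, and the handler must match one configured in the node's CRI runtime:
+
+package main
+
+import (
+    corev1 "k8s.io/api/core/v1"
+    nodev1 "k8s.io/api/node/v1"
+    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+)
+
+// runtimeClassAndPod builds a RuntimeClass with no Overhead section and a
+// pod that requests it by name.
+func runtimeClassAndPod() (*nodev1.RuntimeClass, *corev1.Pod) {
+    rcName := "preconfigured-handler" // illustrative
+    rc := &nodev1.RuntimeClass{
+        ObjectMeta: metav1.ObjectMeta{Name: rcName},
+        Handler:    "runc", // illustrative; must exist on the node
+    }
+    pod := &corev1.Pod{
+        ObjectMeta: metav1.ObjectMeta{Name: "test-runtimeclass-pod"},
+        Spec: corev1.PodSpec{
+            RuntimeClassName: &rcName,
+            Containers: []corev1.Container{{
+                Name: "c", Image: "busybox", Command: []string{"sleep", "3600"},
+            }},
+        },
+    }
+    return rc, pod
+}
+
+func main() { _, _ = runtimeClassAndPod() }
+------------------------------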
01/14/23 03:58:18.825 +------------------------------ +• [2.056 seconds] +[sig-node] RuntimeClass +test/e2e/common/node/framework.go:23 + should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:104 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] RuntimeClass + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:58:16.778 + Jan 14 03:58:16.778: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename runtimeclass 01/14/23 03:58:16.778 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:58:16.791 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:58:16.793 + [BeforeEach] [sig-node] RuntimeClass + test/e2e/framework/metrics/init/init.go:31 + [It] should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:104 + Jan 14 03:58:16.807: INFO: Waiting up to 1m20s for at least 1 pods in namespace runtimeclass-7686 to be scheduled + Jan 14 03:58:16.810: INFO: 1 pods are not scheduled: [runtimeclass-7686/test-runtimeclass-runtimeclass-7686-preconfigured-handler-f5tx2(58a2c0f0-f0b2-45e2-bf36-2ab7693a9bf7)] + [AfterEach] [sig-node] RuntimeClass + test/e2e/framework/node/init/init.go:32 + Jan 14 03:58:18.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] RuntimeClass + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] RuntimeClass + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] RuntimeClass + tear down framework | framework.go:193 + STEP: Destroying namespace "runtimeclass-7686" for this suite. 01/14/23 03:58:18.825 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:74 +[BeforeEach] [sig-storage] Projected configMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:58:18.834 +Jan 14 03:58:18.835: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename projected 01/14/23 03:58:18.835 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:58:18.849 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:58:18.851 +[BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:74 +STEP: Creating configMap with name projected-configmap-test-volume-2866d93b-2e40-4dae-98cd-feb8de20cda2 01/14/23 03:58:18.853 +STEP: Creating a pod to test consume configMaps 01/14/23 03:58:18.857 +Jan 14 03:58:18.866: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-37e558b4-b6bc-4ef9-b28f-f6550834268d" in namespace "projected-5931" to be "Succeeded or Failed" +Jan 14 03:58:18.870: INFO: Pod "pod-projected-configmaps-37e558b4-b6bc-4ef9-b28f-f6550834268d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.826096ms +Jan 14 03:58:20.875: INFO: Pod "pod-projected-configmaps-37e558b4-b6bc-4ef9-b28f-f6550834268d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.009335101s +Jan 14 03:58:22.874: INFO: Pod "pod-projected-configmaps-37e558b4-b6bc-4ef9-b28f-f6550834268d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008618296s +STEP: Saw pod success 01/14/23 03:58:22.875 +Jan 14 03:58:22.875: INFO: Pod "pod-projected-configmaps-37e558b4-b6bc-4ef9-b28f-f6550834268d" satisfied condition "Succeeded or Failed" +Jan 14 03:58:22.878: INFO: Trying to get logs from node 10.0.1.99 pod pod-projected-configmaps-37e558b4-b6bc-4ef9-b28f-f6550834268d container agnhost-container: +STEP: delete the pod 01/14/23 03:58:22.886 +Jan 14 03:58:22.901: INFO: Waiting for pod pod-projected-configmaps-37e558b4-b6bc-4ef9-b28f-f6550834268d to disappear +Jan 14 03:58:22.904: INFO: Pod pod-projected-configmaps-37e558b4-b6bc-4ef9-b28f-f6550834268d no longer exists +[AfterEach] [sig-storage] Projected configMap + test/e2e/framework/node/init/init.go:32 +Jan 14 03:58:22.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected configMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected configMap + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-5931" for this suite. 01/14/23 03:58:22.908 +------------------------------ +• [4.080 seconds] +[sig-storage] Projected configMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:74 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected configMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:58:18.834 + Jan 14 03:58:18.835: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename projected 01/14/23 03:58:18.835 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:58:18.849 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:58:18.851 + [BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:74 + STEP: Creating configMap with name projected-configmap-test-volume-2866d93b-2e40-4dae-98cd-feb8de20cda2 01/14/23 03:58:18.853 + STEP: Creating a pod to test consume configMaps 01/14/23 03:58:18.857 + Jan 14 03:58:18.866: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-37e558b4-b6bc-4ef9-b28f-f6550834268d" in namespace "projected-5931" to be "Succeeded or Failed" + Jan 14 03:58:18.870: INFO: Pod "pod-projected-configmaps-37e558b4-b6bc-4ef9-b28f-f6550834268d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.826096ms + Jan 14 03:58:20.875: INFO: Pod "pod-projected-configmaps-37e558b4-b6bc-4ef9-b28f-f6550834268d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009335101s + Jan 14 03:58:22.874: INFO: Pod "pod-projected-configmaps-37e558b4-b6bc-4ef9-b28f-f6550834268d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008618296s + STEP: Saw pod success 01/14/23 03:58:22.875 + Jan 14 03:58:22.875: INFO: Pod "pod-projected-configmaps-37e558b4-b6bc-4ef9-b28f-f6550834268d" satisfied condition "Succeeded or Failed" + Jan 14 03:58:22.878: INFO: Trying to get logs from node 10.0.1.99 pod pod-projected-configmaps-37e558b4-b6bc-4ef9-b28f-f6550834268d container agnhost-container: + STEP: delete the pod 01/14/23 03:58:22.886 + Jan 14 03:58:22.901: INFO: Waiting for pod pod-projected-configmaps-37e558b4-b6bc-4ef9-b28f-f6550834268d to disappear + Jan 14 03:58:22.904: INFO: Pod pod-projected-configmaps-37e558b4-b6bc-4ef9-b28f-f6550834268d no longer exists + [AfterEach] [sig-storage] Projected configMap + test/e2e/framework/node/init/init.go:32 + Jan 14 03:58:22.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected configMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected configMap + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-5931" for this suite. 01/14/23 03:58:22.908 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + listing custom resource definition objects works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:85 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:58:22.915 +Jan 14 03:58:22.915: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename custom-resource-definition 01/14/23 03:58:22.916 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:58:22.928 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:58:22.93 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[It] listing custom resource definition objects works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:85 +Jan 14 03:58:22.932: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Jan 14 03:58:29.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "custom-resource-definition-5143" for this suite. 
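+------------------------------
+Listing CustomResourceDefinition objects, as the CRD test above verifies, goes through the apiextensions clientset rather than the core one. A minimal sketch (kubeconfig path is a placeholder):
+
+package main
+
+import (
+    "context"
+    "fmt"
+
+    apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
+    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+    "k8s.io/client-go/tools/clientcmd"
+)
+
+func main() {
+    config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
+    if err != nil {
+        panic(err)
+    }
+    client := apiextensionsclient.NewForConfigOrDie(config)
+
+    // CRDs are cluster-scoped, so the list takes no namespace.
+    crds, err := client.ApiextensionsV1().CustomResourceDefinitions().List(
+        context.TODO(), metav1.ListOptions{})
+    if err != nil {
+        panic(err)
+    }
+    for _, crd := range crds.Items {
+        fmt.Println(crd.Name)
+    }
+}
+------------------------------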
01/14/23 03:58:29.135 +------------------------------ +• [SLOW TEST] [6.227 seconds] +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + Simple CustomResourceDefinition + test/e2e/apimachinery/custom_resource_definition.go:50 + listing custom resource definition objects works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:85 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:58:22.915 + Jan 14 03:58:22.915: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename custom-resource-definition 01/14/23 03:58:22.916 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:58:22.928 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:58:22.93 + [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [It] listing custom resource definition objects works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:85 + Jan 14 03:58:22.932: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Jan 14 03:58:29.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "custom-resource-definition-5143" for this suite. 01/14/23 03:58:29.135 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should be consumable via the environment [NodeConformance] [Conformance] + test/e2e/common/node/configmap.go:93 +[BeforeEach] [sig-node] ConfigMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:58:29.142 +Jan 14 03:58:29.143: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename configmap 01/14/23 03:58:29.143 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:58:29.156 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:58:29.158 +[BeforeEach] [sig-node] ConfigMap + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable via the environment [NodeConformance] [Conformance] + test/e2e/common/node/configmap.go:93 +STEP: Creating configMap configmap-5425/configmap-test-585d1c26-ae36-4e22-aa41-e5eaccc3f0e0 01/14/23 03:58:29.161 +STEP: Creating a pod to test consume configMaps 01/14/23 03:58:29.164 +Jan 14 03:58:29.174: INFO: Waiting up to 5m0s for pod "pod-configmaps-1df725b0-9101-4982-a40f-01d8039c2111" in namespace "configmap-5425" to be "Succeeded or Failed" +Jan 14 03:58:29.177: INFO: Pod "pod-configmaps-1df725b0-9101-4982-a40f-01d8039c2111": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.281295ms +Jan 14 03:58:31.182: INFO: Pod "pod-configmaps-1df725b0-9101-4982-a40f-01d8039c2111": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007801532s +Jan 14 03:58:33.182: INFO: Pod "pod-configmaps-1df725b0-9101-4982-a40f-01d8039c2111": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008210413s +STEP: Saw pod success 01/14/23 03:58:33.182 +Jan 14 03:58:33.182: INFO: Pod "pod-configmaps-1df725b0-9101-4982-a40f-01d8039c2111" satisfied condition "Succeeded or Failed" +Jan 14 03:58:33.185: INFO: Trying to get logs from node 10.0.1.99 pod pod-configmaps-1df725b0-9101-4982-a40f-01d8039c2111 container env-test: +STEP: delete the pod 01/14/23 03:58:33.191 +Jan 14 03:58:33.205: INFO: Waiting for pod pod-configmaps-1df725b0-9101-4982-a40f-01d8039c2111 to disappear +Jan 14 03:58:33.208: INFO: Pod pod-configmaps-1df725b0-9101-4982-a40f-01d8039c2111 no longer exists +[AfterEach] [sig-node] ConfigMap + test/e2e/framework/node/init/init.go:32 +Jan 14 03:58:33.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] ConfigMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] ConfigMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] ConfigMap + tear down framework | framework.go:193 +STEP: Destroying namespace "configmap-5425" for this suite. 01/14/23 03:58:33.212 +------------------------------ +• [4.074 seconds] +[sig-node] ConfigMap +test/e2e/common/node/framework.go:23 + should be consumable via the environment [NodeConformance] [Conformance] + test/e2e/common/node/configmap.go:93 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] ConfigMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:58:29.142 + Jan 14 03:58:29.143: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename configmap 01/14/23 03:58:29.143 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:58:29.156 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:58:29.158 + [BeforeEach] [sig-node] ConfigMap + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable via the environment [NodeConformance] [Conformance] + test/e2e/common/node/configmap.go:93 + STEP: Creating configMap configmap-5425/configmap-test-585d1c26-ae36-4e22-aa41-e5eaccc3f0e0 01/14/23 03:58:29.161 + STEP: Creating a pod to test consume configMaps 01/14/23 03:58:29.164 + Jan 14 03:58:29.174: INFO: Waiting up to 5m0s for pod "pod-configmaps-1df725b0-9101-4982-a40f-01d8039c2111" in namespace "configmap-5425" to be "Succeeded or Failed" + Jan 14 03:58:29.177: INFO: Pod "pod-configmaps-1df725b0-9101-4982-a40f-01d8039c2111": Phase="Pending", Reason="", readiness=false. Elapsed: 3.281295ms + Jan 14 03:58:31.182: INFO: Pod "pod-configmaps-1df725b0-9101-4982-a40f-01d8039c2111": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007801532s + Jan 14 03:58:33.182: INFO: Pod "pod-configmaps-1df725b0-9101-4982-a40f-01d8039c2111": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008210413s + STEP: Saw pod success 01/14/23 03:58:33.182 + Jan 14 03:58:33.182: INFO: Pod "pod-configmaps-1df725b0-9101-4982-a40f-01d8039c2111" satisfied condition "Succeeded or Failed" + Jan 14 03:58:33.185: INFO: Trying to get logs from node 10.0.1.99 pod pod-configmaps-1df725b0-9101-4982-a40f-01d8039c2111 container env-test: + STEP: delete the pod 01/14/23 03:58:33.191 + Jan 14 03:58:33.205: INFO: Waiting for pod pod-configmaps-1df725b0-9101-4982-a40f-01d8039c2111 to disappear + Jan 14 03:58:33.208: INFO: Pod pod-configmaps-1df725b0-9101-4982-a40f-01d8039c2111 no longer exists + [AfterEach] [sig-node] ConfigMap + test/e2e/framework/node/init/init.go:32 + Jan 14 03:58:33.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] ConfigMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] ConfigMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] ConfigMap + tear down framework | framework.go:193 + STEP: Destroying namespace "configmap-5425" for this suite. 01/14/23 03:58:33.212 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[sig-node] Kubelet when scheduling a busybox command that always fails in a pod + should be possible to delete [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:135 +[BeforeEach] [sig-node] Kubelet + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:58:33.217 +Jan 14 03:58:33.217: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename kubelet-test 01/14/23 03:58:33.218 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:58:33.231 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:58:33.234 +[BeforeEach] [sig-node] Kubelet + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:41 +[BeforeEach] when scheduling a busybox command that always fails in a pod + test/e2e/common/node/kubelet.go:85 +[It] should be possible to delete [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:135 +[AfterEach] [sig-node] Kubelet + test/e2e/framework/node/init/init.go:32 +Jan 14 03:58:33.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Kubelet + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Kubelet + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Kubelet + tear down framework | framework.go:193 +STEP: Destroying namespace "kubelet-test-5356" for this suite. 
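+------------------------------
+The Kubelet test that follows checks that a pod whose container command always fails can still be deleted cleanly. A sketch of that create-then-delete flow; namespace, names, and image are illustrative:
+
+package main
+
+import (
+    "context"
+
+    corev1 "k8s.io/api/core/v1"
+    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+    "k8s.io/client-go/kubernetes"
+    "k8s.io/client-go/tools/clientcmd"
+)
+
+func main() {
+    config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
+    if err != nil {
+        panic(err)
+    }
+    client := kubernetes.NewForConfigOrDie(config)
+    ns := "kubelet-test-example" // illustrative namespace
+
+    pod := &corev1.Pod{
+        ObjectMeta: metav1.ObjectMeta{Name: "bin-false-pod"},
+        Spec: corev1.PodSpec{
+            RestartPolicy: corev1.RestartPolicyNever,
+            Containers: []corev1.Container{{
+                Name:    "bin-false",
+                Image:   "busybox",
+                Command: []string{"/bin/false"}, // always exits non-zero
+            }},
+        },
+    }
+    if _, err := client.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
+        panic(err)
+    }
+    // Deletion must succeed even though the container keeps failing.
+    if err := client.CoreV1().Pods(ns).Delete(context.TODO(), pod.Name, metav1.DeleteOptions{}); err != nil {
+        panic(err)
+    }
+}
+------------------------------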
01/14/23 03:58:33.263 +------------------------------ +• [0.051 seconds] +[sig-node] Kubelet +test/e2e/common/node/framework.go:23 + when scheduling a busybox command that always fails in a pod + test/e2e/common/node/kubelet.go:82 + should be possible to delete [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:135 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Kubelet + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:58:33.217 + Jan 14 03:58:33.217: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename kubelet-test 01/14/23 03:58:33.218 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:58:33.231 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:58:33.234 + [BeforeEach] [sig-node] Kubelet + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:41 + [BeforeEach] when scheduling a busybox command that always fails in a pod + test/e2e/common/node/kubelet.go:85 + [It] should be possible to delete [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:135 + [AfterEach] [sig-node] Kubelet + test/e2e/framework/node/init/init.go:32 + Jan 14 03:58:33.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Kubelet + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Kubelet + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Kubelet + tear down framework | framework.go:193 + STEP: Destroying namespace "kubelet-test-5356" for this suite. 01/14/23 03:58:33.263 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for pods for Subdomain [Conformance] + test/e2e/network/dns.go:290 +[BeforeEach] [sig-network] DNS + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:58:33.269 +Jan 14 03:58:33.269: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename dns 01/14/23 03:58:33.27 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:58:33.284 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:58:33.289 +[BeforeEach] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:31 +[It] should provide DNS for pods for Subdomain [Conformance] + test/e2e/network/dns.go:290 +STEP: Creating a test headless service 01/14/23 03:58:33.291 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5713.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-5713.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5713.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5713.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5713.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-5713.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5713.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-5713.svc.cluster.local;sleep 1; 
done + 01/14/23 03:58:33.294 +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5713.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-5713.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5713.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-5713.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5713.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-5713.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5713.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-5713.svc.cluster.local;sleep 1; done + 01/14/23 03:58:33.294 +STEP: creating a pod to probe DNS 01/14/23 03:58:33.294 +STEP: submitting the pod to kubernetes 01/14/23 03:58:33.295 +Jan 14 03:58:33.304: INFO: Waiting up to 15m0s for pod "dns-test-af7cc046-a9bd-4d9b-a185-e85fc80ff73a" in namespace "dns-5713" to be "running" +Jan 14 03:58:33.307: INFO: Pod "dns-test-af7cc046-a9bd-4d9b-a185-e85fc80ff73a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.733305ms +Jan 14 03:58:35.312: INFO: Pod "dns-test-af7cc046-a9bd-4d9b-a185-e85fc80ff73a": Phase="Running", Reason="", readiness=true. Elapsed: 2.007161735s +Jan 14 03:58:35.312: INFO: Pod "dns-test-af7cc046-a9bd-4d9b-a185-e85fc80ff73a" satisfied condition "running" +STEP: retrieving the pod 01/14/23 03:58:35.312 +STEP: looking for the results for each expected name from probers 01/14/23 03:58:35.315 +Jan 14 03:58:35.319: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5713.svc.cluster.local from pod dns-5713/dns-test-af7cc046-a9bd-4d9b-a185-e85fc80ff73a: the server could not find the requested resource (get pods dns-test-af7cc046-a9bd-4d9b-a185-e85fc80ff73a) +Jan 14 03:58:35.322: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5713.svc.cluster.local from pod dns-5713/dns-test-af7cc046-a9bd-4d9b-a185-e85fc80ff73a: the server could not find the requested resource (get pods dns-test-af7cc046-a9bd-4d9b-a185-e85fc80ff73a) +Jan 14 03:58:35.328: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5713.svc.cluster.local from pod dns-5713/dns-test-af7cc046-a9bd-4d9b-a185-e85fc80ff73a: the server could not find the requested resource (get pods dns-test-af7cc046-a9bd-4d9b-a185-e85fc80ff73a) +Jan 14 03:58:35.334: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5713.svc.cluster.local from pod dns-5713/dns-test-af7cc046-a9bd-4d9b-a185-e85fc80ff73a: the server could not find the requested resource (get pods dns-test-af7cc046-a9bd-4d9b-a185-e85fc80ff73a) +Jan 14 03:58:35.337: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5713.svc.cluster.local from pod dns-5713/dns-test-af7cc046-a9bd-4d9b-a185-e85fc80ff73a: the server could not find the requested resource (get pods dns-test-af7cc046-a9bd-4d9b-a185-e85fc80ff73a) +Jan 14 03:58:35.339: INFO: Lookups using dns-5713/dns-test-af7cc046-a9bd-4d9b-a185-e85fc80ff73a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5713.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5713.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5713.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5713.svc.cluster.local 
jessie_udp@dns-test-service-2.dns-5713.svc.cluster.local] + +Jan 14 03:58:40.367: INFO: DNS probes using dns-5713/dns-test-af7cc046-a9bd-4d9b-a185-e85fc80ff73a succeeded + +STEP: deleting the pod 01/14/23 03:58:40.367 +STEP: deleting the test headless service 01/14/23 03:58:40.407 +[AfterEach] [sig-network] DNS + test/e2e/framework/node/init/init.go:32 +Jan 14 03:58:40.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] DNS + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] DNS + tear down framework | framework.go:193 +STEP: Destroying namespace "dns-5713" for this suite. 01/14/23 03:58:40.43 +------------------------------ +• [SLOW TEST] [7.166 seconds] +[sig-network] DNS +test/e2e/network/common/framework.go:23 + should provide DNS for pods for Subdomain [Conformance] + test/e2e/network/dns.go:290 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] DNS + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:58:33.269 + Jan 14 03:58:33.269: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename dns 01/14/23 03:58:33.27 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:58:33.284 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:58:33.289 + [BeforeEach] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:31 + [It] should provide DNS for pods for Subdomain [Conformance] + test/e2e/network/dns.go:290 + STEP: Creating a test headless service 01/14/23 03:58:33.291 + STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5713.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-5713.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5713.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5713.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5713.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-5713.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5713.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-5713.svc.cluster.local;sleep 1; done + 01/14/23 03:58:33.294 + STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5713.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-5713.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5713.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-5713.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5713.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-5713.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5713.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-5713.svc.cluster.local;sleep 1; done + 
01/14/23 03:58:33.294 + STEP: creating a pod to probe DNS 01/14/23 03:58:33.294 + STEP: submitting the pod to kubernetes 01/14/23 03:58:33.295 + Jan 14 03:58:33.304: INFO: Waiting up to 15m0s for pod "dns-test-af7cc046-a9bd-4d9b-a185-e85fc80ff73a" in namespace "dns-5713" to be "running" + Jan 14 03:58:33.307: INFO: Pod "dns-test-af7cc046-a9bd-4d9b-a185-e85fc80ff73a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.733305ms + Jan 14 03:58:35.312: INFO: Pod "dns-test-af7cc046-a9bd-4d9b-a185-e85fc80ff73a": Phase="Running", Reason="", readiness=true. Elapsed: 2.007161735s + Jan 14 03:58:35.312: INFO: Pod "dns-test-af7cc046-a9bd-4d9b-a185-e85fc80ff73a" satisfied condition "running" + STEP: retrieving the pod 01/14/23 03:58:35.312 + STEP: looking for the results for each expected name from probers 01/14/23 03:58:35.315 + Jan 14 03:58:35.319: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5713.svc.cluster.local from pod dns-5713/dns-test-af7cc046-a9bd-4d9b-a185-e85fc80ff73a: the server could not find the requested resource (get pods dns-test-af7cc046-a9bd-4d9b-a185-e85fc80ff73a) + Jan 14 03:58:35.322: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5713.svc.cluster.local from pod dns-5713/dns-test-af7cc046-a9bd-4d9b-a185-e85fc80ff73a: the server could not find the requested resource (get pods dns-test-af7cc046-a9bd-4d9b-a185-e85fc80ff73a) + Jan 14 03:58:35.328: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5713.svc.cluster.local from pod dns-5713/dns-test-af7cc046-a9bd-4d9b-a185-e85fc80ff73a: the server could not find the requested resource (get pods dns-test-af7cc046-a9bd-4d9b-a185-e85fc80ff73a) + Jan 14 03:58:35.334: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5713.svc.cluster.local from pod dns-5713/dns-test-af7cc046-a9bd-4d9b-a185-e85fc80ff73a: the server could not find the requested resource (get pods dns-test-af7cc046-a9bd-4d9b-a185-e85fc80ff73a) + Jan 14 03:58:35.337: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5713.svc.cluster.local from pod dns-5713/dns-test-af7cc046-a9bd-4d9b-a185-e85fc80ff73a: the server could not find the requested resource (get pods dns-test-af7cc046-a9bd-4d9b-a185-e85fc80ff73a) + Jan 14 03:58:35.339: INFO: Lookups using dns-5713/dns-test-af7cc046-a9bd-4d9b-a185-e85fc80ff73a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5713.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5713.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5713.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5713.svc.cluster.local jessie_udp@dns-test-service-2.dns-5713.svc.cluster.local] + + Jan 14 03:58:40.367: INFO: DNS probes using dns-5713/dns-test-af7cc046-a9bd-4d9b-a185-e85fc80ff73a succeeded + + STEP: deleting the pod 01/14/23 03:58:40.367 + STEP: deleting the test headless service 01/14/23 03:58:40.407 + [AfterEach] [sig-network] DNS + test/e2e/framework/node/init/init.go:32 + Jan 14 03:58:40.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] DNS + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] DNS + tear down framework | framework.go:193 + STEP: Destroying namespace "dns-5713" for this suite. 
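+------------------------------
+The subdomain records probed above (dns-querier-2.dns-test-service-2.dns-5713.svc.cluster.local) exist because a headless service is paired with a pod that sets hostname and subdomain. A sketch of that pairing; the selector labels and image are illustrative:
+
+package main
+
+import (
+    corev1 "k8s.io/api/core/v1"
+    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+)
+
+// headlessServiceAndPod builds the headless service and a pod whose
+// hostname/subdomain produce the per-pod A record the test resolves.
+func headlessServiceAndPod(ns string) (*corev1.Service, *corev1.Pod) {
+    svc := &corev1.Service{
+        ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-2", Namespace: ns},
+        Spec: corev1.ServiceSpec{
+            ClusterIP: corev1.ClusterIPNone, // headless
+            Selector:  map[string]string{"dns-test": "true"}, // illustrative
+            Ports:     []corev1.ServicePort{{Name: "dns", Port: 53}},
+        },
+    }
+    pod := &corev1.Pod{
+        ObjectMeta: metav1.ObjectMeta{
+            Name: "dns-querier-pod", Namespace: ns,
+            Labels: map[string]string{"dns-test": "true"},
+        },
+        Spec: corev1.PodSpec{
+            Hostname:  "dns-querier-2",      // A record: <hostname>.<subdomain>.<ns>.svc.cluster.local
+            Subdomain: "dns-test-service-2", // must equal the headless service name
+            Containers: []corev1.Container{{
+                Name: "querier", Image: "busybox", Command: []string{"sleep", "3600"},
+            }},
+        },
+    }
+    return svc, pod
+}
+
+func main() { _, _ = headlessServiceAndPod("dns-5713") }
+------------------------------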
01/14/23 03:58:40.43 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSS +------------------------------ +[sig-api-machinery] server version + should find the server version [Conformance] + test/e2e/apimachinery/server_version.go:39 +[BeforeEach] [sig-api-machinery] server version + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:58:40.436 +Jan 14 03:58:40.436: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename server-version 01/14/23 03:58:40.437 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:58:40.452 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:58:40.455 +[BeforeEach] [sig-api-machinery] server version + test/e2e/framework/metrics/init/init.go:31 +[It] should find the server version [Conformance] + test/e2e/apimachinery/server_version.go:39 +STEP: Request ServerVersion 01/14/23 03:58:40.458 +STEP: Confirm major version 01/14/23 03:58:40.458 +Jan 14 03:58:40.459: INFO: Major version: 1 +STEP: Confirm minor version 01/14/23 03:58:40.459 +Jan 14 03:58:40.459: INFO: cleanMinorVersion: 26 +Jan 14 03:58:40.459: INFO: Minor version: 26+ +[AfterEach] [sig-api-machinery] server version + test/e2e/framework/node/init/init.go:32 +Jan 14 03:58:40.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] server version + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] server version + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] server version + tear down framework | framework.go:193 +STEP: Destroying namespace "server-version-8169" for this suite. 01/14/23 03:58:40.464 +------------------------------ +• [0.032 seconds] +[sig-api-machinery] server version +test/e2e/apimachinery/framework.go:23 + should find the server version [Conformance] + test/e2e/apimachinery/server_version.go:39 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] server version + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:58:40.436 + Jan 14 03:58:40.436: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename server-version 01/14/23 03:58:40.437 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:58:40.452 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:58:40.455 + [BeforeEach] [sig-api-machinery] server version + test/e2e/framework/metrics/init/init.go:31 + [It] should find the server version [Conformance] + test/e2e/apimachinery/server_version.go:39 + STEP: Request ServerVersion 01/14/23 03:58:40.458 + STEP: Confirm major version 01/14/23 03:58:40.458 + Jan 14 03:58:40.459: INFO: Major version: 1 + STEP: Confirm minor version 01/14/23 03:58:40.459 + Jan 14 03:58:40.459: INFO: cleanMinorVersion: 26 + Jan 14 03:58:40.459: INFO: Minor version: 26+ + [AfterEach] [sig-api-machinery] server version + test/e2e/framework/node/init/init.go:32 + Jan 14 03:58:40.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] server version + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] server version + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] server version + tear down framework | framework.go:193 + STEP: Destroying namespace 
"server-version-8169" for this suite. 01/14/23 03:58:40.464 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should allow composing env vars into new env vars [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:44 +[BeforeEach] [sig-node] Variable Expansion + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 03:58:40.469 +Jan 14 03:58:40.469: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename var-expansion 01/14/23 03:58:40.47 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:58:40.484 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:58:40.486 +[BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:31 +[It] should allow composing env vars into new env vars [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:44 +STEP: Creating a pod to test env composition 01/14/23 03:58:40.489 +Jan 14 03:58:40.497: INFO: Waiting up to 5m0s for pod "var-expansion-e416263e-6912-4d89-bcd5-4d033518fe6d" in namespace "var-expansion-9384" to be "Succeeded or Failed" +Jan 14 03:58:40.500: INFO: Pod "var-expansion-e416263e-6912-4d89-bcd5-4d033518fe6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.706929ms +Jan 14 03:58:42.505: INFO: Pod "var-expansion-e416263e-6912-4d89-bcd5-4d033518fe6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007857174s +Jan 14 03:58:44.504: INFO: Pod "var-expansion-e416263e-6912-4d89-bcd5-4d033518fe6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007162987s +STEP: Saw pod success 01/14/23 03:58:44.504 +Jan 14 03:58:44.504: INFO: Pod "var-expansion-e416263e-6912-4d89-bcd5-4d033518fe6d" satisfied condition "Succeeded or Failed" +Jan 14 03:58:44.507: INFO: Trying to get logs from node 10.0.1.99 pod var-expansion-e416263e-6912-4d89-bcd5-4d033518fe6d container dapi-container: +STEP: delete the pod 01/14/23 03:58:44.513 +Jan 14 03:58:44.523: INFO: Waiting for pod var-expansion-e416263e-6912-4d89-bcd5-4d033518fe6d to disappear +Jan 14 03:58:44.526: INFO: Pod var-expansion-e416263e-6912-4d89-bcd5-4d033518fe6d no longer exists +[AfterEach] [sig-node] Variable Expansion + test/e2e/framework/node/init/init.go:32 +Jan 14 03:58:44.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Variable Expansion + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Variable Expansion + tear down framework | framework.go:193 +STEP: Destroying namespace "var-expansion-9384" for this suite. 
01/14/23 03:58:44.53 +------------------------------ +• [4.066 seconds] +[sig-node] Variable Expansion +test/e2e/common/node/framework.go:23 + should allow composing env vars into new env vars [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:44 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Variable Expansion + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 03:58:40.469 + Jan 14 03:58:40.469: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename var-expansion 01/14/23 03:58:40.47 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:58:40.484 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:58:40.486 + [BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:31 + [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:44 + STEP: Creating a pod to test env composition 01/14/23 03:58:40.489 + Jan 14 03:58:40.497: INFO: Waiting up to 5m0s for pod "var-expansion-e416263e-6912-4d89-bcd5-4d033518fe6d" in namespace "var-expansion-9384" to be "Succeeded or Failed" + Jan 14 03:58:40.500: INFO: Pod "var-expansion-e416263e-6912-4d89-bcd5-4d033518fe6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.706929ms + Jan 14 03:58:42.505: INFO: Pod "var-expansion-e416263e-6912-4d89-bcd5-4d033518fe6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007857174s + Jan 14 03:58:44.504: INFO: Pod "var-expansion-e416263e-6912-4d89-bcd5-4d033518fe6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007162987s + STEP: Saw pod success 01/14/23 03:58:44.504 + Jan 14 03:58:44.504: INFO: Pod "var-expansion-e416263e-6912-4d89-bcd5-4d033518fe6d" satisfied condition "Succeeded or Failed" + Jan 14 03:58:44.507: INFO: Trying to get logs from node 10.0.1.99 pod var-expansion-e416263e-6912-4d89-bcd5-4d033518fe6d container dapi-container: + STEP: delete the pod 01/14/23 03:58:44.513 + Jan 14 03:58:44.523: INFO: Waiting for pod var-expansion-e416263e-6912-4d89-bcd5-4d033518fe6d to disappear + Jan 14 03:58:44.526: INFO: Pod var-expansion-e416263e-6912-4d89-bcd5-4d033518fe6d no longer exists + [AfterEach] [sig-node] Variable Expansion + test/e2e/framework/node/init/init.go:32 + Jan 14 03:58:44.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Variable Expansion + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Variable Expansion + tear down framework | framework.go:193 + STEP: Destroying namespace "var-expansion-9384" for this suite. 
+------------------------------
+S
+------------------------------
+[sig-node] Pods
+ should get a host IP [NodeConformance] [Conformance]
+ test/e2e/common/node/pods.go:204
+[BeforeEach] [sig-node] Pods
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 03:58:44.535
+Jan 14 03:58:44.535: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename pods 01/14/23 03:58:44.536
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:58:44.549
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:58:44.551
+[BeforeEach] [sig-node] Pods
+ test/e2e/framework/metrics/init/init.go:31
+[BeforeEach] [sig-node] Pods
+ test/e2e/common/node/pods.go:194
+[It] should get a host IP [NodeConformance] [Conformance]
+ test/e2e/common/node/pods.go:204
+STEP: creating pod 01/14/23 03:58:44.554
+Jan 14 03:58:44.561: INFO: Waiting up to 5m0s for pod "pod-hostip-12a3ce77-9361-4927-ad8f-65d7feb51b6c" in namespace "pods-8266" to be "running and ready"
+Jan 14 03:58:44.564: INFO: Pod "pod-hostip-12a3ce77-9361-4927-ad8f-65d7feb51b6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.884879ms
+Jan 14 03:58:44.564: INFO: The phase of Pod pod-hostip-12a3ce77-9361-4927-ad8f-65d7feb51b6c is Pending, waiting for it to be Running (with Ready = true)
+Jan 14 03:58:46.570: INFO: Pod "pod-hostip-12a3ce77-9361-4927-ad8f-65d7feb51b6c": Phase="Running", Reason="", readiness=true. Elapsed: 2.008930868s
+Jan 14 03:58:46.570: INFO: The phase of Pod pod-hostip-12a3ce77-9361-4927-ad8f-65d7feb51b6c is Running (Ready = true)
+Jan 14 03:58:46.570: INFO: Pod "pod-hostip-12a3ce77-9361-4927-ad8f-65d7feb51b6c" satisfied condition "running and ready"
+Jan 14 03:58:46.577: INFO: Pod pod-hostip-12a3ce77-9361-4927-ad8f-65d7feb51b6c has hostIP: 10.0.1.99
+[AfterEach] [sig-node] Pods
+ test/e2e/framework/node/init/init.go:32
+Jan 14 03:58:46.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-node] Pods
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-node] Pods
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-node] Pods
+ tear down framework | framework.go:193
+STEP: Destroying namespace "pods-8266" for this suite. 
01/14/23 03:58:46.582
+------------------------------
+• [2.052 seconds]
+[sig-node] Pods
+test/e2e/common/node/framework.go:23
+ should get a host IP [NodeConformance] [Conformance]
+ test/e2e/common/node/pods.go:204
+
+------------------------------
+SSSSS
+------------------------------
+[sig-api-machinery] Namespaces [Serial]
+ should apply changes to a namespace status [Conformance]
+ test/e2e/apimachinery/namespace.go:299
+[BeforeEach] [sig-api-machinery] Namespaces [Serial]
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 03:58:46.588
+Jan 14 03:58:46.588: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename namespaces 01/14/23 03:58:46.589
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:58:46.603
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:58:46.606
+[BeforeEach] [sig-api-machinery] Namespaces [Serial]
+ test/e2e/framework/metrics/init/init.go:31
+[It] should apply changes to a namespace status [Conformance]
+ test/e2e/apimachinery/namespace.go:299
+STEP: Read namespace status 01/14/23 03:58:46.608
+Jan 14 03:58:46.611: INFO: Status: v1.NamespaceStatus{Phase:"Active", Conditions:[]v1.NamespaceCondition(nil)}
+STEP: Patch namespace status 01/14/23 03:58:46.611
+Jan 14 03:58:46.615: INFO: Status.Condition: v1.NamespaceCondition{Type:"StatusPatch", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Patched by an e2e test"}
+STEP: Update namespace status 01/14/23 03:58:46.615
+Jan 14 03:58:46.623: INFO: Status.Condition: v1.NamespaceCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Updated by an e2e test"}
+[AfterEach] [sig-api-machinery] Namespaces [Serial]
+ test/e2e/framework/node/init/init.go:32
+Jan 14 03:58:46.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial]
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial]
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial]
+ tear down framework | framework.go:193
+STEP: Destroying namespace "namespaces-2743" for this suite. 
01/14/23 03:58:46.627
+------------------------------
+• [0.047 seconds]
+[sig-api-machinery] Namespaces [Serial]
+test/e2e/apimachinery/framework.go:23
+ should apply changes to a namespace status [Conformance]
+ test/e2e/apimachinery/namespace.go:299
+
+------------------------------
+SSS
+------------------------------
+[sig-network] Services
+ should be able to create a functioning NodePort service [Conformance]
+ test/e2e/network/service.go:1302
+[BeforeEach] [sig-network] Services
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 03:58:46.635
+Jan 14 03:58:46.635: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename services 01/14/23 03:58:46.635
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:58:46.65
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:58:46.653
+[BeforeEach] [sig-network] Services
+ test/e2e/framework/metrics/init/init.go:31
+[BeforeEach] [sig-network] Services
+ test/e2e/network/service.go:766
+[It] should be able to create a functioning NodePort service [Conformance]
+ test/e2e/network/service.go:1302
+STEP: creating service nodeport-test with type=NodePort in namespace services-3770 01/14/23 03:58:46.655
+STEP: creating replication controller nodeport-test in namespace services-3770 01/14/23 03:58:46.667
+I0114 03:58:46.675934 25 runners.go:193] Created replication controller with name: nodeport-test, namespace: services-3770, replica count: 2
+I0114 03:58:49.728174 25 runners.go:193] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
+Jan 14 03:58:49.728: INFO: Creating new exec pod
+Jan 14 03:58:49.738: INFO: Waiting up to 5m0s for pod "execpod5hrmx" in namespace "services-3770" to be "running"
+Jan 14 03:58:49.740: INFO: Pod "execpod5hrmx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.708924ms
+Jan 14 03:58:51.745: INFO: Pod "execpod5hrmx": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.007465414s +Jan 14 03:58:51.745: INFO: Pod "execpod5hrmx" satisfied condition "running" +Jan 14 03:58:52.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-3770 exec execpod5hrmx -- /bin/sh -x -c nc -v -z -w 2 nodeport-test 80' +Jan 14 03:58:52.865: INFO: stderr: "+ nc -v -z -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" +Jan 14 03:58:52.865: INFO: stdout: "" +Jan 14 03:58:52.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-3770 exec execpod5hrmx -- /bin/sh -x -c nc -v -z -w 2 10.55.253.219 80' +Jan 14 03:58:52.977: INFO: stderr: "+ nc -v -z -w 2 10.55.253.219 80\nConnection to 10.55.253.219 80 port [tcp/http] succeeded!\n" +Jan 14 03:58:52.977: INFO: stdout: "" +Jan 14 03:58:52.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-3770 exec execpod5hrmx -- /bin/sh -x -c nc -v -z -w 2 10.0.1.212 31615' +Jan 14 03:58:53.088: INFO: stderr: "+ nc -v -z -w 2 10.0.1.212 31615\nConnection to 10.0.1.212 31615 port [tcp/*] succeeded!\n" +Jan 14 03:58:53.088: INFO: stdout: "" +Jan 14 03:58:53.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-3770 exec execpod5hrmx -- /bin/sh -x -c nc -v -z -w 2 10.0.1.99 31615' +Jan 14 03:58:53.201: INFO: stderr: "+ nc -v -z -w 2 10.0.1.99 31615\nConnection to 10.0.1.99 31615 port [tcp/*] succeeded!\n" +Jan 14 03:58:53.201: INFO: stdout: "" +[AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 +Jan 14 03:58:53.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 +STEP: Destroying namespace "services-3770" for this suite. 
01/14/23 03:58:53.206
+------------------------------
+• [SLOW TEST] [6.577 seconds]
+[sig-network] Services
+test/e2e/network/common/framework.go:23
+ should be able to create a functioning NodePort service [Conformance]
+ test/e2e/network/service.go:1302
+
+------------------------------
+SSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] Namespaces [Serial]
+ should apply a finalizer to a Namespace [Conformance]
+ test/e2e/apimachinery/namespace.go:394
+[BeforeEach] [sig-api-machinery] Namespaces [Serial]
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 03:58:53.212
+Jan 14 03:58:53.212: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename namespaces 01/14/23 03:58:53.213
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:58:53.228
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:58:53.23
+[BeforeEach] [sig-api-machinery] Namespaces [Serial]
+ test/e2e/framework/metrics/init/init.go:31
+[It] should apply a finalizer to a Namespace [Conformance]
+ test/e2e/apimachinery/namespace.go:394
+STEP: Creating namespace "e2e-ns-8vx75" 01/14/23 03:58:53.232
+Jan 14 03:58:53.246: INFO: Namespace "e2e-ns-8vx75-19" has []v1.FinalizerName{"kubernetes"}
+STEP: Adding e2e finalizer to namespace "e2e-ns-8vx75-19" 01/14/23 03:58:53.246
+Jan 14 03:58:53.252: INFO: Namespace "e2e-ns-8vx75-19" has []v1.FinalizerName{"kubernetes", "e2e.example.com/fakeFinalizer"}
+STEP: Removing e2e finalizer from namespace "e2e-ns-8vx75-19" 01/14/23 03:58:53.252
+Jan 14 03:58:53.258: INFO: Namespace "e2e-ns-8vx75-19" has []v1.FinalizerName{"kubernetes"}
+[AfterEach] [sig-api-machinery] Namespaces [Serial]
+ test/e2e/framework/node/init/init.go:32
+Jan 14 03:58:53.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial]
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial]
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial]
+ tear down framework | framework.go:193
+STEP: Destroying namespace "namespaces-1242" for this suite. 01/14/23 03:58:53.262
+STEP: Destroying namespace "e2e-ns-8vx75-19" for this suite. 
01/14/23 03:58:53.267
+------------------------------
+• [0.059 seconds]
+[sig-api-machinery] Namespaces [Serial]
+test/e2e/apimachinery/framework.go:23
+ should apply a finalizer to a Namespace [Conformance]
+ test/e2e/apimachinery/namespace.go:394
+
+------------------------------
+SSSSSSSSSSSSSSSS
+------------------------------
+[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic]
+ should list, patch and delete a collection of StatefulSets [Conformance]
+ test/e2e/apps/statefulset.go:908
+[BeforeEach] [sig-apps] StatefulSet
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 03:58:53.272
+Jan 14 03:58:53.272: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename statefulset 01/14/23 03:58:53.272
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:58:53.286
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:58:53.288
+[BeforeEach] [sig-apps] StatefulSet
+ test/e2e/framework/metrics/init/init.go:31
+[BeforeEach] [sig-apps] StatefulSet
+ test/e2e/apps/statefulset.go:98
+[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
+ test/e2e/apps/statefulset.go:113
+STEP: Creating service test in namespace statefulset-9660 01/14/23 03:58:53.291
+[It] should list, patch and delete a collection of StatefulSets [Conformance]
+ test/e2e/apps/statefulset.go:908
+Jan 14 03:58:53.302: INFO: Found 0 stateful pods, waiting for 1
+Jan 14 03:59:03.308: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true
+STEP: patching the StatefulSet 01/14/23 03:59:03.314
+W0114 03:59:03.322824 25 warnings.go:70] unknown field "spec.template.spec.TerminationGracePeriodSeconds"
+Jan 14 03:59:03.330: INFO: Found 1 stateful pods, waiting for 2
+Jan 14 03:59:13.336: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true
+Jan 14 03:59:13.336: INFO: Waiting for pod test-ss-1 to enter Running - Ready=true, currently Running - Ready=true
+STEP: Listing all StatefulSets 01/14/23 03:59:13.341
+STEP: Delete all of the StatefulSets 01/14/23 03:59:13.344
+STEP: Verify that StatefulSets have been deleted 01/14/23 03:59:13.35
+[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
+ test/e2e/apps/statefulset.go:124
+Jan 14 03:59:13.352: INFO: Deleting all statefulset in ns statefulset-9660
+[AfterEach] [sig-apps] StatefulSet
+ test/e2e/framework/node/init/init.go:32
+Jan 14 03:59:13.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-apps] StatefulSet
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-apps] StatefulSet
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-apps] StatefulSet
+ tear down framework | framework.go:193
+STEP: Destroying namespace "statefulset-9660" for this suite. 
01/14/23 03:59:13.363
+------------------------------
+• [SLOW TEST] [20.099 seconds]
+[sig-apps] StatefulSet
+test/e2e/apps/framework.go:23
+ Basic StatefulSet functionality [StatefulSetBasic]
+ test/e2e/apps/statefulset.go:103
+ should list, patch and delete a collection of StatefulSets [Conformance]
+ test/e2e/apps/statefulset.go:908
+
+------------------------------
+SSSS
+------------------------------
+[sig-auth] ServiceAccounts
+ should run through the lifecycle of a ServiceAccount [Conformance]
+ test/e2e/auth/service_accounts.go:649
+[BeforeEach] [sig-auth] ServiceAccounts
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 03:59:13.371
+Jan 14 03:59:13.371: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename svcaccounts 01/14/23 03:59:13.372
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:59:13.385
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:59:13.387
+[BeforeEach] [sig-auth] ServiceAccounts
+ test/e2e/framework/metrics/init/init.go:31
+[It] should run through the lifecycle of a ServiceAccount [Conformance]
+ test/e2e/auth/service_accounts.go:649
+STEP: creating a ServiceAccount 01/14/23 03:59:13.389
+STEP: watching for the ServiceAccount to be added 01/14/23 03:59:13.395
+STEP: patching the ServiceAccount 01/14/23 03:59:13.396
+STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) 01/14/23 03:59:13.4
+STEP: deleting the ServiceAccount 01/14/23 03:59:13.403
+[AfterEach] [sig-auth] ServiceAccounts
+ test/e2e/framework/node/init/init.go:32
+Jan 14 03:59:13.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-auth] ServiceAccounts
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-auth] ServiceAccounts
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-auth] ServiceAccounts
+ tear down framework | framework.go:193
+STEP: Destroying namespace "svcaccounts-8613" for this suite. 
01/14/23 03:59:13.416
+------------------------------
+• [0.051 seconds]
+[sig-auth] ServiceAccounts
+test/e2e/auth/framework.go:23
+ should run through the lifecycle of a ServiceAccount [Conformance]
+ test/e2e/auth/service_accounts.go:649
+
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-node] Probing container
+ should have monotonically increasing restart count [NodeConformance] [Conformance]
+ test/e2e/common/node/container_probe.go:199
+[BeforeEach] [sig-node] Probing container
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 03:59:13.422
+Jan 14 03:59:13.423: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename container-probe 01/14/23 03:59:13.423
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 03:59:13.435
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 03:59:13.437
+[BeforeEach] [sig-node] Probing container
+ test/e2e/framework/metrics/init/init.go:31
+[BeforeEach] [sig-node] Probing container
+ test/e2e/common/node/container_probe.go:63
+[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
+ test/e2e/common/node/container_probe.go:199
+STEP: Creating pod liveness-83c9fe06-6bc8-438d-a1de-3cfde90f1f93 in namespace container-probe-5989 01/14/23 03:59:13.439
+Jan 14 03:59:13.448: INFO: Waiting up to 5m0s for pod "liveness-83c9fe06-6bc8-438d-a1de-3cfde90f1f93" in namespace "container-probe-5989" to be "not pending"
+Jan 14 03:59:13.451: INFO: Pod "liveness-83c9fe06-6bc8-438d-a1de-3cfde90f1f93": Phase="Pending", Reason="", readiness=false. Elapsed: 3.071756ms
+Jan 14 03:59:15.456: INFO: Pod "liveness-83c9fe06-6bc8-438d-a1de-3cfde90f1f93": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.007999191s
+Jan 14 03:59:15.456: INFO: Pod "liveness-83c9fe06-6bc8-438d-a1de-3cfde90f1f93" satisfied condition "not pending"
+Jan 14 03:59:15.456: INFO: Started pod liveness-83c9fe06-6bc8-438d-a1de-3cfde90f1f93 in namespace container-probe-5989
+STEP: checking the pod's current state and verifying that restartCount is present 01/14/23 03:59:15.456
+Jan 14 03:59:15.460: INFO: Initial restart count of pod liveness-83c9fe06-6bc8-438d-a1de-3cfde90f1f93 is 0
+Jan 14 03:59:35.511: INFO: Restart count of pod container-probe-5989/liveness-83c9fe06-6bc8-438d-a1de-3cfde90f1f93 is now 1 (20.051794769s elapsed)
+Jan 14 03:59:55.561: INFO: Restart count of pod container-probe-5989/liveness-83c9fe06-6bc8-438d-a1de-3cfde90f1f93 is now 2 (40.101482698s elapsed)
+Jan 14 04:00:15.615: INFO: Restart count of pod container-probe-5989/liveness-83c9fe06-6bc8-438d-a1de-3cfde90f1f93 is now 3 (1m0.155800972s elapsed)
+Jan 14 04:00:35.667: INFO: Restart count of pod container-probe-5989/liveness-83c9fe06-6bc8-438d-a1de-3cfde90f1f93 is now 4 (1m20.207504489s elapsed)
+Jan 14 04:01:45.859: INFO: Restart count of pod container-probe-5989/liveness-83c9fe06-6bc8-438d-a1de-3cfde90f1f93 is now 5 (2m30.399864853s elapsed)
+STEP: deleting the pod 01/14/23 04:01:45.859
+[AfterEach] [sig-node] Probing container
+ test/e2e/framework/node/init/init.go:32
+Jan 14 04:01:45.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-node] Probing container
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-node] Probing container
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-node] Probing container
+ tear down framework | framework.go:193
+STEP: Destroying namespace "container-probe-5989" for this suite. 01/14/23 04:01:45.88
+------------------------------
+• [SLOW TEST] [152.464 seconds]
+[sig-node] Probing container
+test/e2e/common/node/framework.go:23
+ should have monotonically increasing restart count [NodeConformance] [Conformance]
+ test/e2e/common/node/container_probe.go:199
+
+------------------------------
+SSS
+------------------------------
+[sig-network] Ingress API
+ should support creating Ingress API operations [Conformance]
+ test/e2e/network/ingress.go:552
+[BeforeEach] [sig-network] Ingress API
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 04:01:45.886
+Jan 14 04:01:45.886: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename ingress 01/14/23 04:01:45.887
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:01:45.9
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:01:45.902
+[BeforeEach] [sig-network] Ingress API
+ test/e2e/framework/metrics/init/init.go:31
+[It] should support creating Ingress API operations [Conformance]
+ test/e2e/network/ingress.go:552
+STEP: getting /apis 01/14/23 04:01:45.905
+STEP: getting /apis/networking.k8s.io 01/14/23 04:01:45.907
+STEP: getting /apis/networking.k8s.iov1 01/14/23 04:01:45.908
+STEP: creating 01/14/23 04:01:45.909
+STEP: getting 01/14/23 04:01:45.92
+STEP: listing 01/14/23 04:01:45.923
+STEP: watching 01/14/23 04:01:45.925
+Jan 14 04:01:45.925: INFO: starting watch
+STEP: cluster-wide listing 01/14/23 04:01:45.926
+STEP: cluster-wide watching 01/14/23 04:01:45.928
+Jan 14 04:01:45.928: INFO: starting watch
+STEP: patching 01/14/23 04:01:45.929
+STEP: updating 01/14/23 04:01:45.933
+Jan 14 04:01:45.940: INFO: waiting for watch events with expected annotations
+Jan 14 04:01:45.940: INFO: saw patched and updated annotations
+STEP: patching /status 01/14/23 04:01:45.94
+STEP: updating /status 01/14/23 04:01:45.945
+STEP: get /status 01/14/23 04:01:45.951
+STEP: deleting 01/14/23 04:01:45.953
+STEP: deleting a collection 01/14/23 04:01:45.962
+[AfterEach] [sig-network] Ingress API
+ test/e2e/framework/node/init/init.go:32
+Jan 14 04:01:45.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-network] Ingress API
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-network] Ingress API
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-network] Ingress API
+ tear down framework | framework.go:193
+STEP: Destroying namespace "ingress-2009" for this suite. 
01/14/23 04:01:45.978
+------------------------------
+• [0.096 seconds]
+[sig-network] Ingress API
+test/e2e/network/common/framework.go:23
+ should support creating Ingress API operations [Conformance]
+ test/e2e/network/ingress.go:552
+
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected configMap
+ should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+ test/e2e/common/storage/projected_configmap.go:89
+[BeforeEach] [sig-storage] Projected configMap
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 04:01:45.984
+Jan 14 04:01:45.984: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename projected 01/14/23 04:01:45.984
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:01:45.998
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:01:46
+[BeforeEach] [sig-storage] Projected configMap
+ test/e2e/framework/metrics/init/init.go:31
+[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+ test/e2e/common/storage/projected_configmap.go:89
+STEP: Creating configMap with name projected-configmap-test-volume-map-1b3fe994-8376-45f5-b67a-4777fe1fadb7 01/14/23 04:01:46.002
+STEP: Creating a pod to test consume configMaps 01/14/23 04:01:46.007
+Jan 14 04:01:46.022: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fd38e2db-9efb-4262-a3a7-4e48653f6419" in namespace "projected-2585" to be "Succeeded or Failed"
+Jan 14 04:01:46.025: INFO: Pod "pod-projected-configmaps-fd38e2db-9efb-4262-a3a7-4e48653f6419": Phase="Pending", Reason="", readiness=false. Elapsed: 2.881419ms
+Jan 14 04:01:48.029: INFO: Pod "pod-projected-configmaps-fd38e2db-9efb-4262-a3a7-4e48653f6419": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007383633s
+Jan 14 04:01:50.030: INFO: Pod "pod-projected-configmaps-fd38e2db-9efb-4262-a3a7-4e48653f6419": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007772516s
+STEP: Saw pod success 01/14/23 04:01:50.03
+Jan 14 04:01:50.030: INFO: Pod "pod-projected-configmaps-fd38e2db-9efb-4262-a3a7-4e48653f6419" satisfied condition "Succeeded or Failed"
+Jan 14 04:01:50.033: INFO: Trying to get logs from node 10.0.1.106 pod pod-projected-configmaps-fd38e2db-9efb-4262-a3a7-4e48653f6419 container agnhost-container: 
+STEP: delete the pod 01/14/23 04:01:50.051
+Jan 14 04:01:50.067: INFO: Waiting for pod pod-projected-configmaps-fd38e2db-9efb-4262-a3a7-4e48653f6419 to disappear
+Jan 14 04:01:50.070: INFO: Pod pod-projected-configmaps-fd38e2db-9efb-4262-a3a7-4e48653f6419 no longer exists
+[AfterEach] [sig-storage] Projected configMap
+ test/e2e/framework/node/init/init.go:32
+Jan 14 04:01:50.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-storage] Projected configMap
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-storage] Projected configMap
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-storage] Projected configMap
+ tear down framework | framework.go:193
+STEP: Destroying namespace "projected-2585" for this suite. 
01/14/23 04:01:50.074
+------------------------------
+• [4.096 seconds]
+[sig-storage] Projected configMap
+test/e2e/common/storage/framework.go:23
+ should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+ test/e2e/common/storage/projected_configmap.go:89
+
+------------------------------
+SSSSS
+------------------------------
+[sig-apps] ReplicaSet
+ Replace and Patch tests [Conformance]
+ test/e2e/apps/replica_set.go:154
+[BeforeEach] [sig-apps] ReplicaSet
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 04:01:50.08
+Jan 14 04:01:50.080: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename replicaset 01/14/23 04:01:50.08
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:01:50.094
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:01:50.096
+[BeforeEach] [sig-apps] ReplicaSet
+ test/e2e/framework/metrics/init/init.go:31
+[It] Replace and Patch tests [Conformance]
+ test/e2e/apps/replica_set.go:154
+Jan 14 04:01:50.109: INFO: Pod name sample-pod: Found 0 pods out of 1
+Jan 14 04:01:55.115: INFO: Pod name sample-pod: Found 1 pods out of 1
+STEP: ensuring each pod is running 01/14/23 04:01:55.115
+STEP: Scaling up "test-rs" replicaset 01/14/23 04:01:55.115
+Jan 14 04:01:55.123: INFO: Updating replica set "test-rs"
+STEP: patching the ReplicaSet 01/14/23 04:01:55.123
+W0114 04:01:55.133446 25 warnings.go:70] unknown field "spec.template.spec.TerminationGracePeriodSeconds"
+Jan 14 04:01:55.134: INFO: observed ReplicaSet test-rs in namespace replicaset-7818 with ReadyReplicas 1, AvailableReplicas 1
+Jan 14 04:01:55.152: INFO: observed ReplicaSet test-rs in namespace replicaset-7818 with ReadyReplicas 1, AvailableReplicas 1
+Jan 14 04:01:55.167: INFO: observed ReplicaSet test-rs in namespace replicaset-7818 with ReadyReplicas 1, AvailableReplicas 1
+Jan 14 04:01:55.173: INFO: observed ReplicaSet test-rs in namespace replicaset-7818 with ReadyReplicas 1, AvailableReplicas 1
+Jan 14 04:01:55.763: INFO: observed ReplicaSet test-rs in namespace replicaset-7818 with ReadyReplicas 2, AvailableReplicas 2
+Jan 14 04:01:56.111: INFO: observed Replicaset test-rs in namespace replicaset-7818 with ReadyReplicas 3 found true
+[AfterEach] [sig-apps] ReplicaSet
+ test/e2e/framework/node/init/init.go:32
+Jan 14 04:01:56.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-apps] ReplicaSet
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-apps] ReplicaSet
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-apps] ReplicaSet
+ tear down framework | framework.go:193
+STEP: Destroying namespace "replicaset-7818" for this suite. 
01/14/23 04:01:56.116
+------------------------------
+• [SLOW TEST] [6.044 seconds]
+[sig-apps] ReplicaSet
+test/e2e/apps/framework.go:23
+ Replace and Patch tests [Conformance]
+ test/e2e/apps/replica_set.go:154
+
+------------------------------
+SSSSSS
+------------------------------
+[sig-storage] Downward API volume
+ should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
+ test/e2e/common/storage/downwardapi_volume.go:261
+[BeforeEach] [sig-storage] Downward API volume
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 04:01:56.124
+Jan 14 04:01:56.124: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename downward-api 01/14/23 04:01:56.124
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:01:56.151
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:01:56.153
+[BeforeEach] [sig-storage] Downward API volume
+ test/e2e/framework/metrics/init/init.go:31
+[BeforeEach] [sig-storage] Downward API volume
+ test/e2e/common/storage/downwardapi_volume.go:44
+[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
+ test/e2e/common/storage/downwardapi_volume.go:261
+STEP: Creating a pod to test downward API volume plugin 01/14/23 04:01:56.18
+Jan 14 04:01:56.204: INFO: Waiting up to 5m0s for pod "downwardapi-volume-76b14669-0285-40fe-bbe2-8f19a79fc4b6" in namespace "downward-api-108" to be "Succeeded or Failed"
+Jan 14 04:01:56.235: INFO: Pod "downwardapi-volume-76b14669-0285-40fe-bbe2-8f19a79fc4b6": Phase="Pending", Reason="", readiness=false. Elapsed: 31.116257ms
+Jan 14 04:01:58.240: INFO: Pod "downwardapi-volume-76b14669-0285-40fe-bbe2-8f19a79fc4b6": Phase="Running", Reason="", readiness=false. Elapsed: 2.036899809s
+Jan 14 04:02:00.242: INFO: Pod "downwardapi-volume-76b14669-0285-40fe-bbe2-8f19a79fc4b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038642412s
+STEP: Saw pod success 01/14/23 04:02:00.242
+Jan 14 04:02:00.242: INFO: Pod "downwardapi-volume-76b14669-0285-40fe-bbe2-8f19a79fc4b6" satisfied condition "Succeeded or Failed"
+Jan 14 04:02:00.246: INFO: Trying to get logs from node 10.0.1.99 pod downwardapi-volume-76b14669-0285-40fe-bbe2-8f19a79fc4b6 container client-container: 
+STEP: delete the pod 01/14/23 04:02:00.259
+Jan 14 04:02:00.270: INFO: Waiting for pod downwardapi-volume-76b14669-0285-40fe-bbe2-8f19a79fc4b6 to disappear
+Jan 14 04:02:00.273: INFO: Pod downwardapi-volume-76b14669-0285-40fe-bbe2-8f19a79fc4b6 no longer exists
+[AfterEach] [sig-storage] Downward API volume
+ test/e2e/framework/node/init/init.go:32
+Jan 14 04:02:00.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-storage] Downward API volume
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-storage] Downward API volume
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-storage] Downward API volume
+ tear down framework | framework.go:193
+STEP: Destroying namespace "downward-api-108" for this suite. 
01/14/23 04:02:00.278
+------------------------------
+• [4.160 seconds]
+[sig-storage] Downward API volume
+test/e2e/common/storage/framework.go:23
+ should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
+ test/e2e/common/storage/downwardapi_volume.go:261
+
01/14/23 04:02:00.278 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSS +------------------------------ +[sig-storage] Secrets + optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:205 +[BeforeEach] [sig-storage] Secrets + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:02:00.284 +Jan 14 04:02:00.284: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename secrets 01/14/23 04:02:00.285 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:02:00.301 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:02:00.303 +[BeforeEach] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:31 +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:205 +STEP: Creating secret with name s-test-opt-del-bf51e247-1514-4c89-9fcf-06c9cff17d04 01/14/23 04:02:00.31 +STEP: Creating secret with name s-test-opt-upd-c6530259-14b7-435a-a45d-cd3ce6f6c073 01/14/23 04:02:00.314 +STEP: Creating the pod 01/14/23 04:02:00.319 +Jan 14 04:02:00.329: INFO: Waiting up to 5m0s for pod "pod-secrets-dd428947-2765-46e2-b297-51daa654117c" in namespace "secrets-4778" to be "running and ready" +Jan 14 04:02:00.332: INFO: Pod "pod-secrets-dd428947-2765-46e2-b297-51daa654117c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.435959ms +Jan 14 04:02:00.332: INFO: The phase of Pod pod-secrets-dd428947-2765-46e2-b297-51daa654117c is Pending, waiting for it to be Running (with Ready = true) +Jan 14 04:02:02.337: INFO: Pod "pod-secrets-dd428947-2765-46e2-b297-51daa654117c": Phase="Running", Reason="", readiness=true. Elapsed: 2.008557835s +Jan 14 04:02:02.337: INFO: The phase of Pod pod-secrets-dd428947-2765-46e2-b297-51daa654117c is Running (Ready = true) +Jan 14 04:02:02.337: INFO: Pod "pod-secrets-dd428947-2765-46e2-b297-51daa654117c" satisfied condition "running and ready" +STEP: Deleting secret s-test-opt-del-bf51e247-1514-4c89-9fcf-06c9cff17d04 01/14/23 04:02:02.358 +STEP: Updating secret s-test-opt-upd-c6530259-14b7-435a-a45d-cd3ce6f6c073 01/14/23 04:02:02.363 +STEP: Creating secret with name s-test-opt-create-dd403048-c9b3-4448-ae77-d2b268c92482 01/14/23 04:02:02.367 +STEP: waiting to observe update in volume 01/14/23 04:02:02.372 +[AfterEach] [sig-storage] Secrets + test/e2e/framework/node/init/init.go:32 +Jan 14 04:02:06.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Secrets + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Secrets + tear down framework | framework.go:193 +STEP: Destroying namespace "secrets-4778" for this suite. 
01/14/23 04:02:06.41 +------------------------------ +• [SLOW TEST] [6.131 seconds] +[sig-storage] Secrets +test/e2e/common/storage/framework.go:23 + optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:205 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Secrets + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:02:00.284 + Jan 14 04:02:00.284: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename secrets 01/14/23 04:02:00.285 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:02:00.301 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:02:00.303 + [BeforeEach] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:31 + [It] optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:205 + STEP: Creating secret with name s-test-opt-del-bf51e247-1514-4c89-9fcf-06c9cff17d04 01/14/23 04:02:00.31 + STEP: Creating secret with name s-test-opt-upd-c6530259-14b7-435a-a45d-cd3ce6f6c073 01/14/23 04:02:00.314 + STEP: Creating the pod 01/14/23 04:02:00.319 + Jan 14 04:02:00.329: INFO: Waiting up to 5m0s for pod "pod-secrets-dd428947-2765-46e2-b297-51daa654117c" in namespace "secrets-4778" to be "running and ready" + Jan 14 04:02:00.332: INFO: Pod "pod-secrets-dd428947-2765-46e2-b297-51daa654117c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.435959ms + Jan 14 04:02:00.332: INFO: The phase of Pod pod-secrets-dd428947-2765-46e2-b297-51daa654117c is Pending, waiting for it to be Running (with Ready = true) + Jan 14 04:02:02.337: INFO: Pod "pod-secrets-dd428947-2765-46e2-b297-51daa654117c": Phase="Running", Reason="", readiness=true. Elapsed: 2.008557835s + Jan 14 04:02:02.337: INFO: The phase of Pod pod-secrets-dd428947-2765-46e2-b297-51daa654117c is Running (Ready = true) + Jan 14 04:02:02.337: INFO: Pod "pod-secrets-dd428947-2765-46e2-b297-51daa654117c" satisfied condition "running and ready" + STEP: Deleting secret s-test-opt-del-bf51e247-1514-4c89-9fcf-06c9cff17d04 01/14/23 04:02:02.358 + STEP: Updating secret s-test-opt-upd-c6530259-14b7-435a-a45d-cd3ce6f6c073 01/14/23 04:02:02.363 + STEP: Creating secret with name s-test-opt-create-dd403048-c9b3-4448-ae77-d2b268c92482 01/14/23 04:02:02.367 + STEP: waiting to observe update in volume 01/14/23 04:02:02.372 + [AfterEach] [sig-storage] Secrets + test/e2e/framework/node/init/init.go:32 + Jan 14 04:02:06.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Secrets + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Secrets + tear down framework | framework.go:193 + STEP: Destroying namespace "secrets-4778" for this suite. 
01/14/23 04:02:06.41 + << End Captured GinkgoWriter Output +------------------------------ +[sig-node] Security Context + should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + test/e2e/node/security_context.go:164 +[BeforeEach] [sig-node] Security Context + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:02:06.415 +Jan 14 04:02:06.415: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename security-context 01/14/23 04:02:06.416 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:02:06.46 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:02:06.462 +[BeforeEach] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:31 +[It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + test/e2e/node/security_context.go:164 +STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser 01/14/23 04:02:06.465 +Jan 14 04:02:06.474: INFO: Waiting up to 5m0s for pod "security-context-306ddeb6-6fe7-49b5-b614-b25d5ca3b01d" in namespace "security-context-5332" to be "Succeeded or Failed" +Jan 14 04:02:06.477: INFO: Pod "security-context-306ddeb6-6fe7-49b5-b614-b25d5ca3b01d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.259916ms +Jan 14 04:02:08.482: INFO: Pod "security-context-306ddeb6-6fe7-49b5-b614-b25d5ca3b01d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008657354s +Jan 14 04:02:10.482: INFO: Pod "security-context-306ddeb6-6fe7-49b5-b614-b25d5ca3b01d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008175824s +STEP: Saw pod success 01/14/23 04:02:10.482 +Jan 14 04:02:10.482: INFO: Pod "security-context-306ddeb6-6fe7-49b5-b614-b25d5ca3b01d" satisfied condition "Succeeded or Failed" +Jan 14 04:02:10.485: INFO: Trying to get logs from node 10.0.1.99 pod security-context-306ddeb6-6fe7-49b5-b614-b25d5ca3b01d container test-container: +STEP: delete the pod 01/14/23 04:02:10.491 +Jan 14 04:02:10.504: INFO: Waiting for pod security-context-306ddeb6-6fe7-49b5-b614-b25d5ca3b01d to disappear +Jan 14 04:02:10.508: INFO: Pod security-context-306ddeb6-6fe7-49b5-b614-b25d5ca3b01d no longer exists +[AfterEach] [sig-node] Security Context + test/e2e/framework/node/init/init.go:32 +Jan 14 04:02:10.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Security Context + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Security Context + tear down framework | framework.go:193 +STEP: Destroying namespace "security-context-5332" for this suite. 
01/14/23 04:02:10.512 +------------------------------ +• [4.102 seconds] +[sig-node] Security Context +test/e2e/node/framework.go:23 + should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + test/e2e/node/security_context.go:164 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Security Context + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:02:06.415 + Jan 14 04:02:06.415: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename security-context 01/14/23 04:02:06.416 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:02:06.46 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:02:06.462 + [BeforeEach] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:31 + [It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] + test/e2e/node/security_context.go:164 + STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser 01/14/23 04:02:06.465 + Jan 14 04:02:06.474: INFO: Waiting up to 5m0s for pod "security-context-306ddeb6-6fe7-49b5-b614-b25d5ca3b01d" in namespace "security-context-5332" to be "Succeeded or Failed" + Jan 14 04:02:06.477: INFO: Pod "security-context-306ddeb6-6fe7-49b5-b614-b25d5ca3b01d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.259916ms + Jan 14 04:02:08.482: INFO: Pod "security-context-306ddeb6-6fe7-49b5-b614-b25d5ca3b01d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008657354s + Jan 14 04:02:10.482: INFO: Pod "security-context-306ddeb6-6fe7-49b5-b614-b25d5ca3b01d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008175824s + STEP: Saw pod success 01/14/23 04:02:10.482 + Jan 14 04:02:10.482: INFO: Pod "security-context-306ddeb6-6fe7-49b5-b614-b25d5ca3b01d" satisfied condition "Succeeded or Failed" + Jan 14 04:02:10.485: INFO: Trying to get logs from node 10.0.1.99 pod security-context-306ddeb6-6fe7-49b5-b614-b25d5ca3b01d container test-container: + STEP: delete the pod 01/14/23 04:02:10.491 + Jan 14 04:02:10.504: INFO: Waiting for pod security-context-306ddeb6-6fe7-49b5-b614-b25d5ca3b01d to disappear + Jan 14 04:02:10.508: INFO: Pod security-context-306ddeb6-6fe7-49b5-b614-b25d5ca3b01d no longer exists + [AfterEach] [sig-node] Security Context + test/e2e/framework/node/init/init.go:32 + Jan 14 04:02:10.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Security Context + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Security Context + tear down framework | framework.go:193 + STEP: Destroying namespace "security-context-5332" for this suite. 
01/14/23 04:02:10.512 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be immutable if `immutable` field is set [Conformance] + test/e2e/common/storage/secrets_volume.go:386 +[BeforeEach] [sig-storage] Secrets + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:02:10.521 +Jan 14 04:02:10.521: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename secrets 01/14/23 04:02:10.522 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:02:10.536 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:02:10.538 +[BeforeEach] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:31 +[It] should be immutable if `immutable` field is set [Conformance] + test/e2e/common/storage/secrets_volume.go:386 +[AfterEach] [sig-storage] Secrets + test/e2e/framework/node/init/init.go:32 +Jan 14 04:02:10.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Secrets + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Secrets + tear down framework | framework.go:193 +STEP: Destroying namespace "secrets-8185" for this suite. 01/14/23 04:02:10.572 +------------------------------ +• [0.056 seconds] +[sig-storage] Secrets +test/e2e/common/storage/framework.go:23 + should be immutable if `immutable` field is set [Conformance] + test/e2e/common/storage/secrets_volume.go:386 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Secrets + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:02:10.521 + Jan 14 04:02:10.521: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename secrets 01/14/23 04:02:10.522 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:02:10.536 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:02:10.538 + [BeforeEach] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:31 + [It] should be immutable if `immutable` field is set [Conformance] + test/e2e/common/storage/secrets_volume.go:386 + [AfterEach] [sig-storage] Secrets + test/e2e/framework/node/init/init.go:32 + Jan 14 04:02:10.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Secrets + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Secrets + tear down framework | framework.go:193 + STEP: Destroying namespace "secrets-8185" for this suite. 
01/14/23 04:02:10.572 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-node] Pods + should be updated [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:344 +[BeforeEach] [sig-node] Pods + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:02:10.577 +Jan 14 04:02:10.577: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename pods 01/14/23 04:02:10.578 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:02:10.591 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:02:10.593 +[BeforeEach] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 +[It] should be updated [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:344 +STEP: creating the pod 01/14/23 04:02:10.595 +STEP: submitting the pod to kubernetes 01/14/23 04:02:10.595 +Jan 14 04:02:10.603: INFO: Waiting up to 5m0s for pod "pod-update-65a0d58a-f7cb-4b34-b76c-921686f54880" in namespace "pods-6343" to be "running and ready" +Jan 14 04:02:10.606: INFO: Pod "pod-update-65a0d58a-f7cb-4b34-b76c-921686f54880": Phase="Pending", Reason="", readiness=false. Elapsed: 2.729548ms +Jan 14 04:02:10.606: INFO: The phase of Pod pod-update-65a0d58a-f7cb-4b34-b76c-921686f54880 is Pending, waiting for it to be Running (with Ready = true) +Jan 14 04:02:12.611: INFO: Pod "pod-update-65a0d58a-f7cb-4b34-b76c-921686f54880": Phase="Running", Reason="", readiness=true. Elapsed: 2.007418674s +Jan 14 04:02:12.611: INFO: The phase of Pod pod-update-65a0d58a-f7cb-4b34-b76c-921686f54880 is Running (Ready = true) +Jan 14 04:02:12.611: INFO: Pod "pod-update-65a0d58a-f7cb-4b34-b76c-921686f54880" satisfied condition "running and ready" +STEP: verifying the pod is in kubernetes 01/14/23 04:02:12.614 +STEP: updating the pod 01/14/23 04:02:12.618 +Jan 14 04:02:13.131: INFO: Successfully updated pod "pod-update-65a0d58a-f7cb-4b34-b76c-921686f54880" +Jan 14 04:02:13.131: INFO: Waiting up to 5m0s for pod "pod-update-65a0d58a-f7cb-4b34-b76c-921686f54880" in namespace "pods-6343" to be "running" +Jan 14 04:02:13.134: INFO: Pod "pod-update-65a0d58a-f7cb-4b34-b76c-921686f54880": Phase="Running", Reason="", readiness=true. Elapsed: 2.828072ms +Jan 14 04:02:13.134: INFO: Pod "pod-update-65a0d58a-f7cb-4b34-b76c-921686f54880" satisfied condition "running" +STEP: verifying the updated pod is in kubernetes 01/14/23 04:02:13.134 +Jan 14 04:02:13.137: INFO: Pod update OK +[AfterEach] [sig-node] Pods + test/e2e/framework/node/init/init.go:32 +Jan 14 04:02:13.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Pods + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Pods + tear down framework | framework.go:193 +STEP: Destroying namespace "pods-6343" for this suite. 
01/14/23 04:02:13.142 +------------------------------ +• [2.570 seconds] +[sig-node] Pods +test/e2e/common/node/framework.go:23 + should be updated [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:344 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Pods + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:02:10.577 + Jan 14 04:02:10.577: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename pods 01/14/23 04:02:10.578 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:02:10.591 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:02:10.593 + [BeforeEach] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 + [It] should be updated [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:344 + STEP: creating the pod 01/14/23 04:02:10.595 + STEP: submitting the pod to kubernetes 01/14/23 04:02:10.595 + Jan 14 04:02:10.603: INFO: Waiting up to 5m0s for pod "pod-update-65a0d58a-f7cb-4b34-b76c-921686f54880" in namespace "pods-6343" to be "running and ready" + Jan 14 04:02:10.606: INFO: Pod "pod-update-65a0d58a-f7cb-4b34-b76c-921686f54880": Phase="Pending", Reason="", readiness=false. Elapsed: 2.729548ms + Jan 14 04:02:10.606: INFO: The phase of Pod pod-update-65a0d58a-f7cb-4b34-b76c-921686f54880 is Pending, waiting for it to be Running (with Ready = true) + Jan 14 04:02:12.611: INFO: Pod "pod-update-65a0d58a-f7cb-4b34-b76c-921686f54880": Phase="Running", Reason="", readiness=true. Elapsed: 2.007418674s + Jan 14 04:02:12.611: INFO: The phase of Pod pod-update-65a0d58a-f7cb-4b34-b76c-921686f54880 is Running (Ready = true) + Jan 14 04:02:12.611: INFO: Pod "pod-update-65a0d58a-f7cb-4b34-b76c-921686f54880" satisfied condition "running and ready" + STEP: verifying the pod is in kubernetes 01/14/23 04:02:12.614 + STEP: updating the pod 01/14/23 04:02:12.618 + Jan 14 04:02:13.131: INFO: Successfully updated pod "pod-update-65a0d58a-f7cb-4b34-b76c-921686f54880" + Jan 14 04:02:13.131: INFO: Waiting up to 5m0s for pod "pod-update-65a0d58a-f7cb-4b34-b76c-921686f54880" in namespace "pods-6343" to be "running" + Jan 14 04:02:13.134: INFO: Pod "pod-update-65a0d58a-f7cb-4b34-b76c-921686f54880": Phase="Running", Reason="", readiness=true. Elapsed: 2.828072ms + Jan 14 04:02:13.134: INFO: Pod "pod-update-65a0d58a-f7cb-4b34-b76c-921686f54880" satisfied condition "running" + STEP: verifying the updated pod is in kubernetes 01/14/23 04:02:13.134 + Jan 14 04:02:13.137: INFO: Pod update OK + [AfterEach] [sig-node] Pods + test/e2e/framework/node/init/init.go:32 + Jan 14 04:02:13.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Pods + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Pods + tear down framework | framework.go:193 + STEP: Destroying namespace "pods-6343" for this suite. 
01/14/23 04:02:13.142 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + should be able to convert a non homogeneous list of CRs [Conformance] + test/e2e/apimachinery/crd_conversion_webhook.go:184 +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:02:13.15 +Jan 14 04:02:13.150: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename crd-webhook 01/14/23 04:02:13.15 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:02:13.163 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:02:13.165 +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/crd_conversion_webhook.go:128 +STEP: Setting up server cert 01/14/23 04:02:13.168 +STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication 01/14/23 04:02:13.658 +STEP: Deploying the custom resource conversion webhook pod 01/14/23 04:02:13.665 +STEP: Wait for the deployment to be ready 01/14/23 04:02:13.676 +Jan 14 04:02:13.683: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 01/14/23 04:02:15.692 +STEP: Verifying the service has paired with the endpoint 01/14/23 04:02:15.7 +Jan 14 04:02:16.701: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 +[It] should be able to convert a non homogeneous list of CRs [Conformance] + test/e2e/apimachinery/crd_conversion_webhook.go:184 +Jan 14 04:02:16.704: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Creating a v1 custom resource 01/14/23 04:02:19.271 +STEP: Create a v2 custom resource 01/14/23 04:02:19.288 +STEP: List CRs in v1 01/14/23 04:02:19.328 +STEP: List CRs in v2 01/14/23 04:02:19.332 +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Jan 14 04:02:19.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/crd_conversion_webhook.go:139 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "crd-webhook-1008" for this suite. 
01/14/23 04:02:19.964 +------------------------------ +• [SLOW TEST] [6.823 seconds] +[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should be able to convert a non homogeneous list of CRs [Conformance] + test/e2e/apimachinery/crd_conversion_webhook.go:184 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:02:13.15 + Jan 14 04:02:13.150: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename crd-webhook 01/14/23 04:02:13.15 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:02:13.163 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:02:13.165 + [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/crd_conversion_webhook.go:128 + STEP: Setting up server cert 01/14/23 04:02:13.168 + STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication 01/14/23 04:02:13.658 + STEP: Deploying the custom resource conversion webhook pod 01/14/23 04:02:13.665 + STEP: Wait for the deployment to be ready 01/14/23 04:02:13.676 + Jan 14 04:02:13.683: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 01/14/23 04:02:15.692 + STEP: Verifying the service has paired with the endpoint 01/14/23 04:02:15.7 + Jan 14 04:02:16.701: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 + [It] should be able to convert a non homogeneous list of CRs [Conformance] + test/e2e/apimachinery/crd_conversion_webhook.go:184 + Jan 14 04:02:16.704: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Creating a v1 custom resource 01/14/23 04:02:19.271 + STEP: Create a v2 custom resource 01/14/23 04:02:19.288 + STEP: List CRs in v1 01/14/23 04:02:19.328 + STEP: List CRs in v2 01/14/23 04:02:19.332 + [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Jan 14 04:02:19.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/crd_conversion_webhook.go:139 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "crd-webhook-1008" for this suite. 
01/14/23 04:02:19.964 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-auth] ServiceAccounts + ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] + test/e2e/auth/service_accounts.go:531 +[BeforeEach] [sig-auth] ServiceAccounts + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:02:19.973 +Jan 14 04:02:19.973: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename svcaccounts 01/14/23 04:02:19.973 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:02:20.055 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:02:20.058 +[BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:31 +[It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] + test/e2e/auth/service_accounts.go:531 +Jan 14 04:02:20.130: INFO: created pod +Jan 14 04:02:20.130: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-811" to be "Succeeded or Failed" +Jan 14 04:02:20.245: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 114.84766ms +Jan 14 04:02:22.250: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120021147s +Jan 14 04:02:24.249: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.119238727s +STEP: Saw pod success 01/14/23 04:02:24.249 +Jan 14 04:02:24.249: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed" +Jan 14 04:02:54.251: INFO: polling logs +Jan 14 04:02:54.258: INFO: Pod logs: +I0114 04:02:20.673300 1 log.go:198] OK: Got token +I0114 04:02:20.673331 1 log.go:198] validating with in-cluster discovery +I0114 04:02:20.673554 1 log.go:198] OK: got issuer https://kubernetes.default.svc.cluster.local +I0114 04:02:20.673576 1 log.go:198] Full, not-validated claims: +openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-811:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1673669540, NotBefore:1673668940, IssuedAt:1673668940, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-811", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"82ed8a90-77bb-442d-9d24-9dd39fe8b696"}}} +I0114 04:02:20.684633 1 log.go:198] OK: Constructed OIDC provider for issuer https://kubernetes.default.svc.cluster.local +I0114 04:02:20.689907 1 log.go:198] OK: Validated signature on JWT +I0114 04:02:20.689976 1 log.go:198] OK: Got valid claims from token! 
+I0114 04:02:20.689997 1 log.go:198] Full, validated claims: +&openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-811:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1673669540, NotBefore:1673668940, IssuedAt:1673668940, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-811", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"82ed8a90-77bb-442d-9d24-9dd39fe8b696"}}} + +Jan 14 04:02:54.258: INFO: completed pod +[AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/node/init/init.go:32 +Jan 14 04:02:54.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-auth] ServiceAccounts + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-auth] ServiceAccounts + tear down framework | framework.go:193 +STEP: Destroying namespace "svcaccounts-811" for this suite. 01/14/23 04:02:54.269 +------------------------------ +• [SLOW TEST] [34.301 seconds] +[sig-auth] ServiceAccounts +test/e2e/auth/framework.go:23 + ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] + test/e2e/auth/service_accounts.go:531 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-auth] ServiceAccounts + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:02:19.973 + Jan 14 04:02:19.973: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename svcaccounts 01/14/23 04:02:19.973 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:02:20.055 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:02:20.058 + [BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:31 + [It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] + test/e2e/auth/service_accounts.go:531 + Jan 14 04:02:20.130: INFO: created pod + Jan 14 04:02:20.130: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-811" to be "Succeeded or Failed" + Jan 14 04:02:20.245: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 114.84766ms + Jan 14 04:02:22.250: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120021147s + Jan 14 04:02:24.249: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.119238727s + STEP: Saw pod success 01/14/23 04:02:24.249 + Jan 14 04:02:24.249: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed" + Jan 14 04:02:54.251: INFO: polling logs + Jan 14 04:02:54.258: INFO: Pod logs: + I0114 04:02:20.673300 1 log.go:198] OK: Got token + I0114 04:02:20.673331 1 log.go:198] validating with in-cluster discovery + I0114 04:02:20.673554 1 log.go:198] OK: got issuer https://kubernetes.default.svc.cluster.local + I0114 04:02:20.673576 1 log.go:198] Full, not-validated claims: + openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-811:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1673669540, NotBefore:1673668940, IssuedAt:1673668940, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-811", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"82ed8a90-77bb-442d-9d24-9dd39fe8b696"}}} + I0114 04:02:20.684633 1 log.go:198] OK: Constructed OIDC provider for issuer https://kubernetes.default.svc.cluster.local + I0114 04:02:20.689907 1 log.go:198] OK: Validated signature on JWT + I0114 04:02:20.689976 1 log.go:198] OK: Got valid claims from token! + I0114 04:02:20.689997 1 log.go:198] Full, validated claims: + &openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-811:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1673669540, NotBefore:1673668940, IssuedAt:1673668940, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-811", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"82ed8a90-77bb-442d-9d24-9dd39fe8b696"}}} + + Jan 14 04:02:54.258: INFO: completed pod + [AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/node/init/init.go:32 + Jan 14 04:02:54.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-auth] ServiceAccounts + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-auth] ServiceAccounts + tear down framework | framework.go:193 + STEP: Destroying namespace "svcaccounts-811" for this suite. 
01/14/23 04:02:54.269 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-cli] Kubectl client Kubectl server-side dry-run + should check if kubectl can dry-run update Pods [Conformance] + test/e2e/kubectl/kubectl.go:962 +[BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:02:54.274 +Jan 14 04:02:54.274: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename kubectl 01/14/23 04:02:54.275 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:02:54.293 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:02:54.296 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[It] should check if kubectl can dry-run update Pods [Conformance] + test/e2e/kubectl/kubectl.go:962 +STEP: running the image registry.k8s.io/e2e-test-images/httpd:2.4.38-4 01/14/23 04:02:54.299 +Jan 14 04:02:54.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-8801 run e2e-test-httpd-pod --image=registry.k8s.io/e2e-test-images/httpd:2.4.38-4 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' +Jan 14 04:02:54.371: INFO: stderr: "" +Jan 14 04:02:54.371: INFO: stdout: "pod/e2e-test-httpd-pod created\n" +STEP: replace the image in the pod with server-side dry-run 01/14/23 04:02:54.371 +Jan 14 04:02:54.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-8801 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "registry.k8s.io/e2e-test-images/busybox:1.29-4"}]}} --dry-run=server' +Jan 14 04:02:55.075: INFO: stderr: "" +Jan 14 04:02:55.075: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" +STEP: verifying the pod e2e-test-httpd-pod has the right image registry.k8s.io/e2e-test-images/httpd:2.4.38-4 01/14/23 04:02:55.075 +Jan 14 04:02:55.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-8801 delete pods e2e-test-httpd-pod' +Jan 14 04:02:58.032: INFO: stderr: "" +Jan 14 04:02:58.032: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 +Jan 14 04:02:58.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 +STEP: Destroying namespace "kubectl-8801" for this suite. 
01/14/23 04:02:58.037 +------------------------------ +• [3.769 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl server-side dry-run + test/e2e/kubectl/kubectl.go:956 + should check if kubectl can dry-run update Pods [Conformance] + test/e2e/kubectl/kubectl.go:962 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:02:54.274 + Jan 14 04:02:54.274: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename kubectl 01/14/23 04:02:54.275 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:02:54.293 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:02:54.296 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [It] should check if kubectl can dry-run update Pods [Conformance] + test/e2e/kubectl/kubectl.go:962 + STEP: running the image registry.k8s.io/e2e-test-images/httpd:2.4.38-4 01/14/23 04:02:54.299 + Jan 14 04:02:54.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-8801 run e2e-test-httpd-pod --image=registry.k8s.io/e2e-test-images/httpd:2.4.38-4 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' + Jan 14 04:02:54.371: INFO: stderr: "" + Jan 14 04:02:54.371: INFO: stdout: "pod/e2e-test-httpd-pod created\n" + STEP: replace the image in the pod with server-side dry-run 01/14/23 04:02:54.371 + Jan 14 04:02:54.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-8801 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "registry.k8s.io/e2e-test-images/busybox:1.29-4"}]}} --dry-run=server' + Jan 14 04:02:55.075: INFO: stderr: "" + Jan 14 04:02:55.075: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" + STEP: verifying the pod e2e-test-httpd-pod has the right image registry.k8s.io/e2e-test-images/httpd:2.4.38-4 01/14/23 04:02:55.075 + Jan 14 04:02:55.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-8801 delete pods e2e-test-httpd-pod' + Jan 14 04:02:58.032: INFO: stderr: "" + Jan 14 04:02:58.032: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 + Jan 14 04:02:58.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 + STEP: Destroying namespace "kubectl-8801" for this suite. 
01/14/23 04:02:58.037 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:125 +[BeforeEach] [sig-storage] Secrets + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:02:58.045 +Jan 14 04:02:58.045: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename secrets 01/14/23 04:02:58.045 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:02:58.062 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:02:58.064 +[BeforeEach] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:125 +STEP: Creating secret with name secret-test-d962d8dc-aee8-430f-81b1-b0904c75cf14 01/14/23 04:02:58.068 +STEP: Creating a pod to test consume secrets 01/14/23 04:02:58.073 +Jan 14 04:02:58.083: INFO: Waiting up to 5m0s for pod "pod-secrets-b97c3047-acfa-4154-adbc-ec55734e3ea9" in namespace "secrets-6807" to be "Succeeded or Failed" +Jan 14 04:02:58.088: INFO: Pod "pod-secrets-b97c3047-acfa-4154-adbc-ec55734e3ea9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.741892ms +Jan 14 04:03:00.092: INFO: Pod "pod-secrets-b97c3047-acfa-4154-adbc-ec55734e3ea9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009125009s +Jan 14 04:03:02.093: INFO: Pod "pod-secrets-b97c3047-acfa-4154-adbc-ec55734e3ea9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010196134s +STEP: Saw pod success 01/14/23 04:03:02.094 +Jan 14 04:03:02.094: INFO: Pod "pod-secrets-b97c3047-acfa-4154-adbc-ec55734e3ea9" satisfied condition "Succeeded or Failed" +Jan 14 04:03:02.097: INFO: Trying to get logs from node 10.0.1.106 pod pod-secrets-b97c3047-acfa-4154-adbc-ec55734e3ea9 container secret-volume-test: +STEP: delete the pod 01/14/23 04:03:02.103 +Jan 14 04:03:02.122: INFO: Waiting for pod pod-secrets-b97c3047-acfa-4154-adbc-ec55734e3ea9 to disappear +Jan 14 04:03:02.125: INFO: Pod pod-secrets-b97c3047-acfa-4154-adbc-ec55734e3ea9 no longer exists +[AfterEach] [sig-storage] Secrets + test/e2e/framework/node/init/init.go:32 +Jan 14 04:03:02.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Secrets + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Secrets + tear down framework | framework.go:193 +STEP: Destroying namespace "secrets-6807" for this suite. 
01/14/23 04:03:02.13 +------------------------------ +• [4.091 seconds] +[sig-storage] Secrets +test/e2e/common/storage/framework.go:23 + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:125 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Secrets + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:02:58.045 + Jan 14 04:02:58.045: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename secrets 01/14/23 04:02:58.045 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:02:58.062 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:02:58.064 + [BeforeEach] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:125 + STEP: Creating secret with name secret-test-d962d8dc-aee8-430f-81b1-b0904c75cf14 01/14/23 04:02:58.068 + STEP: Creating a pod to test consume secrets 01/14/23 04:02:58.073 + Jan 14 04:02:58.083: INFO: Waiting up to 5m0s for pod "pod-secrets-b97c3047-acfa-4154-adbc-ec55734e3ea9" in namespace "secrets-6807" to be "Succeeded or Failed" + Jan 14 04:02:58.088: INFO: Pod "pod-secrets-b97c3047-acfa-4154-adbc-ec55734e3ea9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.741892ms + Jan 14 04:03:00.092: INFO: Pod "pod-secrets-b97c3047-acfa-4154-adbc-ec55734e3ea9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009125009s + Jan 14 04:03:02.093: INFO: Pod "pod-secrets-b97c3047-acfa-4154-adbc-ec55734e3ea9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010196134s + STEP: Saw pod success 01/14/23 04:03:02.094 + Jan 14 04:03:02.094: INFO: Pod "pod-secrets-b97c3047-acfa-4154-adbc-ec55734e3ea9" satisfied condition "Succeeded or Failed" + Jan 14 04:03:02.097: INFO: Trying to get logs from node 10.0.1.106 pod pod-secrets-b97c3047-acfa-4154-adbc-ec55734e3ea9 container secret-volume-test: + STEP: delete the pod 01/14/23 04:03:02.103 + Jan 14 04:03:02.122: INFO: Waiting for pod pod-secrets-b97c3047-acfa-4154-adbc-ec55734e3ea9 to disappear + Jan 14 04:03:02.125: INFO: Pod pod-secrets-b97c3047-acfa-4154-adbc-ec55734e3ea9 no longer exists + [AfterEach] [sig-storage] Secrets + test/e2e/framework/node/init/init.go:32 + Jan 14 04:03:02.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Secrets + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Secrets + tear down framework | framework.go:193 + STEP: Destroying namespace "secrets-6807" for this suite. 
01/14/23 04:03:02.13 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSS +------------------------------ +[sig-node] Container Runtime blackbox test when starting a container that exits + should run with the expected status [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:52 +[BeforeEach] [sig-node] Container Runtime + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:03:02.136 +Jan 14 04:03:02.136: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename container-runtime 01/14/23 04:03:02.137 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:03:02.154 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:03:02.156 +[BeforeEach] [sig-node] Container Runtime + test/e2e/framework/metrics/init/init.go:31 +[It] should run with the expected status [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:52 +STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' 01/14/23 04:03:02.168 +STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' 01/14/23 04:03:17.267 +STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition 01/14/23 04:03:17.27 +STEP: Container 'terminate-cmd-rpa': should get the expected 'State' 01/14/23 04:03:17.276 +STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] 01/14/23 04:03:17.276 +STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' 01/14/23 04:03:17.295 +STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' 01/14/23 04:03:19.307 +STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition 01/14/23 04:03:21.319 +STEP: Container 'terminate-cmd-rpof': should get the expected 'State' 01/14/23 04:03:21.325 +STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] 01/14/23 04:03:21.325 +STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' 01/14/23 04:03:21.348 +STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' 01/14/23 04:03:22.357 +STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition 01/14/23 04:03:24.369 +STEP: Container 'terminate-cmd-rpn': should get the expected 'State' 01/14/23 04:03:24.375 +STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] 01/14/23 04:03:24.375 +[AfterEach] [sig-node] Container Runtime + test/e2e/framework/node/init/init.go:32 +Jan 14 04:03:24.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Container Runtime + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Container Runtime + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Container Runtime + tear down framework | framework.go:193 +STEP: Destroying namespace "container-runtime-1627" for this suite. 
01/14/23 04:03:24.406 +------------------------------ +• [SLOW TEST] [22.278 seconds] +[sig-node] Container Runtime +test/e2e/common/node/framework.go:23 + blackbox test + test/e2e/common/node/runtime.go:44 + when starting a container that exits + test/e2e/common/node/runtime.go:45 + should run with the expected status [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:52 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Container Runtime + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:03:02.136 + Jan 14 04:03:02.136: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename container-runtime 01/14/23 04:03:02.137 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:03:02.154 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:03:02.156 + [BeforeEach] [sig-node] Container Runtime + test/e2e/framework/metrics/init/init.go:31 + [It] should run with the expected status [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:52 + STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' 01/14/23 04:03:02.168 + STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' 01/14/23 04:03:17.267 + STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition 01/14/23 04:03:17.27 + STEP: Container 'terminate-cmd-rpa': should get the expected 'State' 01/14/23 04:03:17.276 + STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] 01/14/23 04:03:17.276 + STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' 01/14/23 04:03:17.295 + STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' 01/14/23 04:03:19.307 + STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition 01/14/23 04:03:21.319 + STEP: Container 'terminate-cmd-rpof': should get the expected 'State' 01/14/23 04:03:21.325 + STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] 01/14/23 04:03:21.325 + STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' 01/14/23 04:03:21.348 + STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' 01/14/23 04:03:22.357 + STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition 01/14/23 04:03:24.369 + STEP: Container 'terminate-cmd-rpn': should get the expected 'State' 01/14/23 04:03:24.375 + STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] 01/14/23 04:03:24.375 + [AfterEach] [sig-node] Container Runtime + test/e2e/framework/node/init/init.go:32 + Jan 14 04:03:24.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Container Runtime + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Container Runtime + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Container Runtime + tear down framework | framework.go:193 + STEP: Destroying namespace "container-runtime-1627" for this suite. 
01/14/23 04:03:24.406 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Update Demo + should scale a replication controller [Conformance] + test/e2e/kubectl/kubectl.go:352 +[BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:03:24.415 +Jan 14 04:03:24.415: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename kubectl 01/14/23 04:03:24.416 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:03:24.429 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:03:24.431 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[BeforeEach] Update Demo + test/e2e/kubectl/kubectl.go:326 +[It] should scale a replication controller [Conformance] + test/e2e/kubectl/kubectl.go:352 +STEP: creating a replication controller 01/14/23 04:03:24.434 +Jan 14 04:03:24.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4537 create -f -' +Jan 14 04:03:24.608: INFO: stderr: "" +Jan 14 04:03:24.608: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" +STEP: waiting for all containers in name=update-demo pods to come up. 01/14/23 04:03:24.608 +Jan 14 04:03:24.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4537 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Jan 14 04:03:24.676: INFO: stderr: "" +Jan 14 04:03:24.676: INFO: stdout: "update-demo-nautilus-2h2c8 update-demo-nautilus-7g2ck " +Jan 14 04:03:24.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4537 get pods update-demo-nautilus-2h2c8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Jan 14 04:03:24.741: INFO: stderr: "" +Jan 14 04:03:24.741: INFO: stdout: "" +Jan 14 04:03:24.741: INFO: update-demo-nautilus-2h2c8 is created but not running +Jan 14 04:03:29.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4537 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Jan 14 04:03:29.809: INFO: stderr: "" +Jan 14 04:03:29.809: INFO: stdout: "update-demo-nautilus-2h2c8 update-demo-nautilus-7g2ck " +Jan 14 04:03:29.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4537 get pods update-demo-nautilus-2h2c8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Jan 14 04:03:29.873: INFO: stderr: "" +Jan 14 04:03:29.873: INFO: stdout: "true" +Jan 14 04:03:29.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4537 get pods update-demo-nautilus-2h2c8 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Jan 14 04:03:29.936: INFO: stderr: "" +Jan 14 04:03:29.936: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" +Jan 14 04:03:29.936: INFO: validating pod update-demo-nautilus-2h2c8 +Jan 14 04:03:29.942: INFO: got data: { + "image": "nautilus.jpg" +} + +Jan 14 04:03:29.942: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jan 14 04:03:29.942: INFO: update-demo-nautilus-2h2c8 is verified up and running +Jan 14 04:03:29.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4537 get pods update-demo-nautilus-7g2ck -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Jan 14 04:03:30.006: INFO: stderr: "" +Jan 14 04:03:30.006: INFO: stdout: "true" +Jan 14 04:03:30.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4537 get pods update-demo-nautilus-7g2ck -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Jan 14 04:03:30.070: INFO: stderr: "" +Jan 14 04:03:30.070: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" +Jan 14 04:03:30.070: INFO: validating pod update-demo-nautilus-7g2ck +Jan 14 04:03:30.076: INFO: got data: { + "image": "nautilus.jpg" +} + +Jan 14 04:03:30.076: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jan 14 04:03:30.076: INFO: update-demo-nautilus-7g2ck is verified up and running +STEP: scaling down the replication controller 01/14/23 04:03:30.076 +Jan 14 04:03:30.077: INFO: scanned /root for discovery docs: +Jan 14 04:03:30.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4537 scale rc update-demo-nautilus --replicas=1 --timeout=5m' +Jan 14 04:03:31.159: INFO: stderr: "" +Jan 14 04:03:31.159: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" +STEP: waiting for all containers in name=update-demo pods to come up. 01/14/23 04:03:31.159 +Jan 14 04:03:31.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4537 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Jan 14 04:03:31.223: INFO: stderr: "" +Jan 14 04:03:31.223: INFO: stdout: "update-demo-nautilus-7g2ck " +Jan 14 04:03:31.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4537 get pods update-demo-nautilus-7g2ck -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Jan 14 04:03:31.288: INFO: stderr: "" +Jan 14 04:03:31.288: INFO: stdout: "true" +Jan 14 04:03:31.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4537 get pods update-demo-nautilus-7g2ck -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Jan 14 04:03:31.351: INFO: stderr: "" +Jan 14 04:03:31.351: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" +Jan 14 04:03:31.351: INFO: validating pod update-demo-nautilus-7g2ck +Jan 14 04:03:31.356: INFO: got data: { + "image": "nautilus.jpg" +} + +Jan 14 04:03:31.356: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jan 14 04:03:31.356: INFO: update-demo-nautilus-7g2ck is verified up and running +STEP: scaling up the replication controller 01/14/23 04:03:31.356 +Jan 14 04:03:31.357: INFO: scanned /root for discovery docs: +Jan 14 04:03:31.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4537 scale rc update-demo-nautilus --replicas=2 --timeout=5m' +Jan 14 04:03:32.437: INFO: stderr: "" +Jan 14 04:03:32.437: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" +STEP: waiting for all containers in name=update-demo pods to come up. 01/14/23 04:03:32.437 +Jan 14 04:03:32.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4537 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Jan 14 04:03:32.504: INFO: stderr: "" +Jan 14 04:03:32.504: INFO: stdout: "update-demo-nautilus-7g2ck update-demo-nautilus-jbv4p " +Jan 14 04:03:32.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4537 get pods update-demo-nautilus-7g2ck -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Jan 14 04:03:32.566: INFO: stderr: "" +Jan 14 04:03:32.566: INFO: stdout: "true" +Jan 14 04:03:32.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4537 get pods update-demo-nautilus-7g2ck -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Jan 14 04:03:32.634: INFO: stderr: "" +Jan 14 04:03:32.634: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" +Jan 14 04:03:32.634: INFO: validating pod update-demo-nautilus-7g2ck +Jan 14 04:03:32.638: INFO: got data: { + "image": "nautilus.jpg" +} + +Jan 14 04:03:32.638: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jan 14 04:03:32.638: INFO: update-demo-nautilus-7g2ck is verified up and running +Jan 14 04:03:32.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4537 get pods update-demo-nautilus-jbv4p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Jan 14 04:03:32.701: INFO: stderr: "" +Jan 14 04:03:32.701: INFO: stdout: "true" +Jan 14 04:03:32.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4537 get pods update-demo-nautilus-jbv4p -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Jan 14 04:03:32.766: INFO: stderr: "" +Jan 14 04:03:32.766: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" +Jan 14 04:03:32.766: INFO: validating pod update-demo-nautilus-jbv4p +Jan 14 04:03:32.771: INFO: got data: { + "image": "nautilus.jpg" +} + +Jan 14 04:03:32.771: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jan 14 04:03:32.771: INFO: update-demo-nautilus-jbv4p is verified up and running +STEP: using delete to clean up resources 01/14/23 04:03:32.771 +Jan 14 04:03:32.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4537 delete --grace-period=0 --force -f -' +Jan 14 04:03:32.838: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Jan 14 04:03:32.838: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" +Jan 14 04:03:32.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4537 get rc,svc -l name=update-demo --no-headers' +Jan 14 04:03:32.906: INFO: stderr: "No resources found in kubectl-4537 namespace.\n" +Jan 14 04:03:32.906: INFO: stdout: "" +Jan 14 04:03:32.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4537 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Jan 14 04:03:32.972: INFO: stderr: "" +Jan 14 04:03:32.972: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 +Jan 14 04:03:32.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 +STEP: Destroying namespace "kubectl-4537" for this suite. 
01/14/23 04:03:32.978
+------------------------------
+• [SLOW TEST] [8.568 seconds]
+[sig-cli] Kubectl client
+test/e2e/kubectl/framework.go:23
+ Update Demo
+ test/e2e/kubectl/kubectl.go:324
+ should scale a replication controller [Conformance]
+ test/e2e/kubectl/kubectl.go:352
+------------------------------
+SSSSSSSSSS
+------------------------------
+[sig-apps] ReplicaSet
+ should adopt matching pods on creation and release no longer matching pods [Conformance]
+ test/e2e/apps/replica_set.go:131
+[BeforeEach] [sig-apps] ReplicaSet
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 04:03:32.984
+Jan 14 04:03:32.984: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename replicaset 01/14/23 04:03:32.985
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:03:32.999
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:03:33.001
+[BeforeEach] [sig-apps] ReplicaSet
+ test/e2e/framework/metrics/init/init.go:31
+[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
+ test/e2e/apps/replica_set.go:131
+STEP: Given a Pod with a 'name' label pod-adoption-release is created 01/14/23 04:03:33.003
+Jan 14 04:03:33.014: INFO: Waiting up to 5m0s for pod "pod-adoption-release" in namespace "replicaset-2566" to be "running and ready"
+Jan 14 04:03:33.017: INFO: Pod "pod-adoption-release": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.87917ms
+Jan 14 04:03:33.017: INFO: The phase of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true)
+Jan 14 04:03:35.022: INFO: Pod "pod-adoption-release": Phase="Running", Reason="", readiness=true. Elapsed: 2.007784656s
+Jan 14 04:03:35.022: INFO: The phase of Pod pod-adoption-release is Running (Ready = true)
+Jan 14 04:03:35.022: INFO: Pod "pod-adoption-release" satisfied condition "running and ready"
+STEP: When a replicaset with a matching selector is created 01/14/23 04:03:35.025
+STEP: Then the orphan pod is adopted 01/14/23 04:03:35.032
+STEP: When the matched label of one of its pods change 01/14/23 04:03:36.059
+Jan 14 04:03:36.063: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
+STEP: Then the pod is released 01/14/23 04:03:36.12
+[AfterEach] [sig-apps] ReplicaSet
+ test/e2e/framework/node/init/init.go:32
+Jan 14 04:03:36.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-apps] ReplicaSet
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-apps] ReplicaSet
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-apps] ReplicaSet
+ tear down framework | framework.go:193
+STEP: Destroying namespace "replicaset-2566" for this suite. 01/14/23 04:03:36.141
+------------------------------
+• [3.163 seconds]
+[sig-apps] ReplicaSet
+test/e2e/apps/framework.go:23
+ should adopt matching pods on creation and release no longer matching pods [Conformance]
+ test/e2e/apps/replica_set.go:131
+------------------------------
+SSSSSSSS
+------------------------------
+[sig-node] Container Runtime blackbox test on terminated container
+ should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
+ test/e2e/common/node/runtime.go:248
+[BeforeEach] [sig-node] Container Runtime
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 04:03:36.147
+Jan 14 04:03:36.147: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename container-runtime 01/14/23 04:03:36.148
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:03:36.235
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:03:36.238
+[BeforeEach] [sig-node] Container Runtime
+ test/e2e/framework/metrics/init/init.go:31
+[It] should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
+ test/e2e/common/node/runtime.go:248
+STEP: create the container 01/14/23 04:03:36.24
+STEP: wait for the container to reach Succeeded 01/14/23 04:03:36.287
+STEP: get the container status 01/14/23 04:03:39.318
+STEP: the container should be terminated 01/14/23 04:03:39.321
+STEP: the termination message should be set 01/14/23 04:03:39.321
+Jan 14 04:03:39.321: INFO: Expected: &{OK} to match Container's Termination Message: OK --
+STEP: delete the container 01/14/23 04:03:39.321
+[AfterEach] [sig-node] Container Runtime
+ test/e2e/framework/node/init/init.go:32
+Jan 14 04:03:39.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-node] Container Runtime
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-node] Container Runtime
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-node] Container Runtime
+ tear down framework | framework.go:193
+STEP: Destroying namespace "container-runtime-293" for this suite. 
01/14/23 04:03:39.338
+------------------------------
+• [3.197 seconds]
+[sig-node] Container Runtime
+test/e2e/common/node/framework.go:23
+ blackbox test
+ test/e2e/common/node/runtime.go:44
+ on terminated container
+ test/e2e/common/node/runtime.go:137
+ should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
+ test/e2e/common/node/runtime.go:248
+------------------------------
+SS
+------------------------------
+[sig-node] RuntimeClass
+ should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance]
+ test/e2e/common/node/runtimeclass.go:156
+[BeforeEach] [sig-node] RuntimeClass
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 04:03:39.344
+Jan 14 04:03:39.344: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename runtimeclass 01/14/23 04:03:39.345
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:03:39.358
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:03:39.36
+[BeforeEach] [sig-node] RuntimeClass
+ test/e2e/framework/metrics/init/init.go:31
+[It] should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance]
+ test/e2e/common/node/runtimeclass.go:156
+STEP: Deleting RuntimeClass runtimeclass-5983-delete-me 01/14/23 04:03:39.37
+STEP: Waiting for the RuntimeClass to disappear 01/14/23 04:03:39.381
+[AfterEach] [sig-node] RuntimeClass
+ test/e2e/framework/node/init/init.go:32
+Jan 14 04:03:39.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-node] RuntimeClass
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-node] RuntimeClass
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-node] RuntimeClass
+ tear down framework | framework.go:193
+STEP: Destroying namespace "runtimeclass-5983" for this suite. 01/14/23 04:03:39.397
+------------------------------
+• [0.059 seconds]
+[sig-node] RuntimeClass
+test/e2e/common/node/framework.go:23
+ should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance]
+ test/e2e/common/node/runtimeclass.go:156
+------------------------------
+SSSSSSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client Proxy server
+ should support proxy with --port 0 [Conformance]
+ test/e2e/kubectl/kubectl.go:1787
+[BeforeEach] [sig-cli] Kubectl client
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 04:03:39.403
+Jan 14 04:03:39.403: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename kubectl 01/14/23 04:03:39.404
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:03:39.421
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:03:39.423
+[BeforeEach] [sig-cli] Kubectl client
+ test/e2e/framework/metrics/init/init.go:31
+[BeforeEach] [sig-cli] Kubectl client
+ test/e2e/kubectl/kubectl.go:274
+[It] should support proxy with --port 0 [Conformance]
+ test/e2e/kubectl/kubectl.go:1787
+STEP: starting the proxy server 01/14/23 04:03:39.426
+Jan 14 04:03:39.426: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-5787 proxy -p 0 --disable-filter'
+STEP: curling proxy /api/ output 01/14/23 04:03:39.473
+[AfterEach] [sig-cli] Kubectl client
+ test/e2e/framework/node/init/init.go:32
+Jan 14 04:03:39.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-cli] Kubectl client
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-cli] Kubectl client
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-cli] Kubectl client
+ tear down framework | framework.go:193
+STEP: Destroying namespace "kubectl-5787" for this suite. 01/14/23 04:03:39.485
+------------------------------
+• [0.086 seconds]
+[sig-cli] Kubectl client
+test/e2e/kubectl/framework.go:23
+ Proxy server
+ test/e2e/kubectl/kubectl.go:1780
+ should support proxy with --port 0 [Conformance]
+ test/e2e/kubectl/kubectl.go:1787
+------------------------------
+S
+------------------------------
+[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic]
+ should validate Statefulset Status endpoints [Conformance]
+ test/e2e/apps/statefulset.go:977
+[BeforeEach] [sig-apps] StatefulSet
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 04:03:39.49
+Jan 14 04:03:39.490: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename statefulset 01/14/23 04:03:39.491
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:03:39.505
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:03:39.508
+[BeforeEach] [sig-apps] StatefulSet
+ test/e2e/framework/metrics/init/init.go:31
+[BeforeEach] [sig-apps] StatefulSet
+ test/e2e/apps/statefulset.go:98
+[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
+ test/e2e/apps/statefulset.go:113
+STEP: Creating service test in namespace statefulset-6819 01/14/23 04:03:39.511
+[It] should validate Statefulset Status endpoints [Conformance]
+ test/e2e/apps/statefulset.go:977
+STEP: Creating statefulset ss in namespace statefulset-6819 01/14/23 04:03:39.517
+Jan 14 04:03:39.524: INFO: Found 0 stateful pods, waiting for 1
+Jan 14 04:03:49.529: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
+STEP: Patch Statefulset to include a label 01/14/23 04:03:49.534
+STEP: Getting /status 01/14/23 04:03:49.54
+Jan 14 04:03:49.544: INFO: StatefulSet ss has Conditions: []v1.StatefulSetCondition(nil)
+STEP: updating the StatefulSet Status 01/14/23 04:03:49.544
+Jan 14 04:03:49.551: INFO: updatedStatus.Conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}}
+STEP: watching for the statefulset status to be updated 01/14/23 04:03:49.551
+Jan 14 04:03:49.552: INFO: Observed &StatefulSet event: ADDED
+Jan 14 04:03:49.552: INFO: Found Statefulset ss in namespace statefulset-6819 with labels: map[e2e:testing] annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}
+Jan 14 04:03:49.552: INFO: Statefulset ss has an updated status
+STEP: patching the Statefulset Status 01/14/23 04:03:49.552
+Jan 14 04:03:49.552: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}}
+Jan 14 04:03:49.559: INFO: Patched status conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}}
+STEP: watching for the Statefulset status to be patched 01/14/23 04:03:49.559
+Jan 14 04:03:49.560: INFO: Observed &StatefulSet event: ADDED
+Jan 14 04:03:49.560: INFO: Observed Statefulset ss in namespace statefulset-6819 with annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}
+Jan 14 04:03:49.561: INFO: Observed &StatefulSet event: MODIFIED
+[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
+ test/e2e/apps/statefulset.go:124
+Jan 14 04:03:49.561: INFO: Deleting all statefulset in ns statefulset-6819
+Jan 14 04:03:49.563: INFO: Scaling statefulset ss to 0
+Jan 14 04:03:59.582: INFO: Waiting for statefulset status.replicas updated to 
0
+Jan 14 04:03:59.585: INFO: Deleting statefulset ss
+[AfterEach] [sig-apps] StatefulSet
+ test/e2e/framework/node/init/init.go:32
+Jan 14 04:03:59.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-apps] StatefulSet
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-apps] StatefulSet
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-apps] StatefulSet
+ tear down framework | framework.go:193
+STEP: Destroying namespace "statefulset-6819" for this suite. 01/14/23 04:03:59.598
+------------------------------
+• [SLOW TEST] [20.113 seconds]
+[sig-apps] StatefulSet
+test/e2e/apps/framework.go:23
+ Basic StatefulSet functionality [StatefulSetBasic]
+ test/e2e/apps/statefulset.go:103
+ should validate Statefulset Status endpoints [Conformance]
+ test/e2e/apps/statefulset.go:977
+------------------------------
+SSS
+------------------------------
+[sig-node] Kubelet when scheduling a read only busybox container
+ should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
+ test/e2e/common/node/kubelet.go:184
+[BeforeEach] [sig-node] Kubelet
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 04:03:59.604
+Jan 14 04:03:59.604: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename kubelet-test 01/14/23 04:03:59.605
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:03:59.624
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:03:59.626
+[BeforeEach] [sig-node] Kubelet
+ test/e2e/framework/metrics/init/init.go:31
+[BeforeEach] [sig-node] Kubelet
+ test/e2e/common/node/kubelet.go:41
+[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
+ test/e2e/common/node/kubelet.go:184
+Jan 14 04:03:59.637: INFO: Waiting up to 5m0s for pod "busybox-readonly-fs6d154bce-5bfd-499a-9d34-391c0dd854f1" in namespace "kubelet-test-2855" to be "running and ready"
+Jan 14 04:03:59.640: INFO: Pod "busybox-readonly-fs6d154bce-5bfd-499a-9d34-391c0dd854f1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.017513ms
+Jan 14 04:03:59.640: INFO: The phase of Pod busybox-readonly-fs6d154bce-5bfd-499a-9d34-391c0dd854f1 is Pending, waiting for it to be Running (with Ready = true)
+Jan 14 04:04:01.645: INFO: Pod "busybox-readonly-fs6d154bce-5bfd-499a-9d34-391c0dd854f1": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008649441s
+Jan 14 04:04:01.645: INFO: The phase of Pod busybox-readonly-fs6d154bce-5bfd-499a-9d34-391c0dd854f1 is Running (Ready = true)
+Jan 14 04:04:01.645: INFO: Pod "busybox-readonly-fs6d154bce-5bfd-499a-9d34-391c0dd854f1" satisfied condition "running and ready"
+[AfterEach] [sig-node] Kubelet
+ test/e2e/framework/node/init/init.go:32
+Jan 14 04:04:01.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-node] Kubelet
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-node] Kubelet
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-node] Kubelet
+ tear down framework | framework.go:193
+STEP: Destroying namespace "kubelet-test-2855" for this suite. 01/14/23 04:04:01.662
+------------------------------
+• [2.064 seconds]
+[sig-node] Kubelet
+test/e2e/common/node/framework.go:23
+ when scheduling a read only busybox container
+ test/e2e/common/node/kubelet.go:175
+ should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
+ test/e2e/common/node/kubelet.go:184
+------------------------------
+SSS
+------------------------------
+[sig-node] Pods
+ should contain environment variables for services [NodeConformance] [Conformance]
+ test/e2e/common/node/pods.go:444
+[BeforeEach] [sig-node] Pods
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 04:04:01.668
+Jan 14 04:04:01.668: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename pods 01/14/23 04:04:01.669
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:04:01.686
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:04:01.688
+[BeforeEach] [sig-node] Pods
+ test/e2e/framework/metrics/init/init.go:31
+[BeforeEach] [sig-node] Pods
+ test/e2e/common/node/pods.go:194
+[It] should contain environment variables for services [NodeConformance] [Conformance]
+ test/e2e/common/node/pods.go:444
+Jan 14 04:04:01.702: INFO: Waiting up to 5m0s for pod "server-envvars-cd28faa8-89c4-43a3-9d8d-167371bb39f5" in namespace "pods-8740" to be "running and ready"
+Jan 14 04:04:01.705: INFO: Pod "server-envvars-cd28faa8-89c4-43a3-9d8d-167371bb39f5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.084418ms
+Jan 14 04:04:01.705: INFO: The phase of Pod server-envvars-cd28faa8-89c4-43a3-9d8d-167371bb39f5 is Pending, waiting for it to be Running (with Ready = true)
+Jan 14 04:04:03.710: INFO: Pod "server-envvars-cd28faa8-89c4-43a3-9d8d-167371bb39f5": Phase="Running", Reason="", readiness=true. Elapsed: 2.008260168s
+Jan 14 04:04:03.710: INFO: The phase of Pod server-envvars-cd28faa8-89c4-43a3-9d8d-167371bb39f5 is Running (Ready = true)
+Jan 14 04:04:03.710: INFO: Pod "server-envvars-cd28faa8-89c4-43a3-9d8d-167371bb39f5" satisfied condition "running and ready"
+Jan 14 04:04:03.732: INFO: Waiting up to 5m0s for pod "client-envvars-6117cf04-a55d-4f86-93b5-c09b1a2eac36" in namespace "pods-8740" to be "Succeeded or Failed"
+Jan 14 04:04:03.735: INFO: Pod "client-envvars-6117cf04-a55d-4f86-93b5-c09b1a2eac36": Phase="Pending", Reason="", readiness=false. Elapsed: 3.014616ms
+Jan 14 04:04:05.740: INFO: Pod "client-envvars-6117cf04-a55d-4f86-93b5-c09b1a2eac36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007806135s
+Jan 14 04:04:07.740: INFO: Pod "client-envvars-6117cf04-a55d-4f86-93b5-c09b1a2eac36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007681219s
+STEP: Saw pod success 01/14/23 04:04:07.74
+Jan 14 04:04:07.740: INFO: Pod "client-envvars-6117cf04-a55d-4f86-93b5-c09b1a2eac36" satisfied condition "Succeeded or Failed"
+Jan 14 04:04:07.743: INFO: Trying to get logs from node 10.0.1.106 pod client-envvars-6117cf04-a55d-4f86-93b5-c09b1a2eac36 container env3cont: 
+STEP: delete the pod 01/14/23 04:04:07.749
+Jan 14 04:04:07.761: INFO: Waiting for pod client-envvars-6117cf04-a55d-4f86-93b5-c09b1a2eac36 to disappear
+Jan 14 04:04:07.765: INFO: Pod client-envvars-6117cf04-a55d-4f86-93b5-c09b1a2eac36 no longer exists
+[AfterEach] [sig-node] Pods
+ test/e2e/framework/node/init/init.go:32
+Jan 14 04:04:07.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-node] Pods
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-node] Pods
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-node] Pods
+ tear down framework | framework.go:193
+STEP: Destroying namespace "pods-8740" for this suite. 
01/14/23 04:04:07.769
+------------------------------
+• [SLOW TEST] [6.106 seconds]
+[sig-node] Pods
+test/e2e/common/node/framework.go:23
+ should contain environment variables for services [NodeConformance] [Conformance]
+ test/e2e/common/node/pods.go:444
+------------------------------
+SSSSSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client Kubectl describe
+ should check if kubectl describe prints relevant information for rc and pods [Conformance]
+ test/e2e/kubectl/kubectl.go:1276
+[BeforeEach] [sig-cli] Kubectl client
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 04:04:07.775
+Jan 14 04:04:07.775: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename kubectl 01/14/23 04:04:07.776
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:04:07.789
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:04:07.791
+[BeforeEach] [sig-cli] Kubectl client
+ test/e2e/framework/metrics/init/init.go:31
+[BeforeEach] [sig-cli] Kubectl client
+ test/e2e/kubectl/kubectl.go:274
+[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
+ test/e2e/kubectl/kubectl.go:1276
+Jan 14 04:04:07.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4627 create -f -'
+Jan 14 04:04:07.969: INFO: stderr: ""
+Jan 14 04:04:07.969: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
+Jan 14 04:04:07.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4627 create -f -'
+Jan 14 04:04:08.684: INFO: stderr: ""
+Jan 14 04:04:08.684: INFO: stdout: "service/agnhost-primary created\n"
+STEP: Waiting for Agnhost primary to start. 01/14/23 04:04:08.684
+Jan 14 04:04:09.689: INFO: Selector matched 1 pods for map[app:agnhost]
+Jan 14 04:04:09.689: INFO: Found 1 / 1
+Jan 14 04:04:09.689: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
+Jan 14 04:04:09.692: INFO: Selector matched 1 pods for map[app:agnhost]
+Jan 14 04:04:09.692: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
+Jan 14 04:04:09.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4627 describe pod agnhost-primary-jcsdw' +Jan 14 04:04:09.763: INFO: stderr: "" +Jan 14 04:04:09.763: INFO: stdout: "Name: agnhost-primary-jcsdw\nNamespace: kubectl-4627\nPriority: 0\nService Account: default\nNode: 10.0.1.106/10.0.1.106\nStart Time: Sat, 14 Jan 2023 04:04:07 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: tke.cloud.tencent.com/networks-status:\n [{\n \"name\": \"tke-bridge\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.52.1.20\"\n ],\n \"mac\": \"ba:0e:59:e0:cf:e6\",\n \"default\": true,\n \"dns\": {}\n }]\nStatus: Running\nIP: 10.52.1.20\nIPs:\n IP: 10.52.1.20\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://49d7b2c16661d1e79751290bf62c39cbfd297b87364792777e22361e68899316\n Image: registry.k8s.io/e2e-test-images/agnhost:2.43\n Image ID: registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sat, 14 Jan 2023 04:04:08 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vwk94 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-vwk94:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 1s default-scheduler Successfully assigned kubectl-4627/agnhost-primary-jcsdw to 10.0.1.106\n Normal Pulled 1s kubelet Container image \"registry.k8s.io/e2e-test-images/agnhost:2.43\" already present on machine\n Normal Created 1s kubelet Created container agnhost-primary\n Normal Started 1s kubelet Started container agnhost-primary\n" +Jan 14 04:04:09.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4627 describe rc agnhost-primary' +Jan 14 04:04:09.840: INFO: stderr: "" +Jan 14 04:04:09.840: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-4627\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: registry.k8s.io/e2e-test-images/agnhost:2.43\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 2s replication-controller Created pod: agnhost-primary-jcsdw\n" +Jan 14 04:04:09.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4627 describe service agnhost-primary' +Jan 14 04:04:09.910: INFO: stderr: "" +Jan 14 04:04:09.910: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-4627\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.55.252.103\nIPs: 
10.55.252.103\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.52.1.20:6379\nSession Affinity: None\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal EnsuringService 1s service-controller Deleted Loadbalancer\n Normal EnsureServiceSuccess 1s service-controller Service Sync Success. RetrunCode: S2000\n" +Jan 14 04:04:09.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4627 describe node 10.0.1.106' +Jan 14 04:04:10.011: INFO: stderr: "" +Jan 14 04:04:10.011: INFO: stdout: "Name: 10.0.1.106\nRoles: \nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/instance-type=S3.2XLARGE16\n beta.kubernetes.io/os=linux\n cloud.tencent.com/node-instance-id=ins-fe9gbs5o\n failure-domain.beta.kubernetes.io/region=sg\n failure-domain.beta.kubernetes.io/zone=900001\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=10.0.1.106\n kubernetes.io/os=linux\n node.kubernetes.io/instance-type=S3.2XLARGE16\n topology.com.tencent.cloud.csi.cbs/zone=ap-singapore-1\n topology.kubernetes.io/region=sg\n topology.kubernetes.io/zone=900001\nAnnotations: csi.volume.kubernetes.io/nodeid: {\"com.tencent.cloud.csi.cbs\":\"ins-fe9gbs5o\"}\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Fri, 13 Jan 2023 08:11:19 +0000\nTaints: \nUnschedulable: false\nLease:\n HolderIdentity: 10.0.1.106\n AcquireTime: \n RenewTime: Sat, 14 Jan 2023 04:04:04 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Fri, 13 Jan 2023 08:11:28 +0000 Fri, 13 Jan 2023 08:11:28 +0000 RouteCreated RouteController created a route\n MemoryPressure False Sat, 14 Jan 2023 04:03:10 +0000 Fri, 13 Jan 2023 08:11:19 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 14 Jan 2023 04:03:10 +0000 Fri, 13 Jan 2023 08:11:19 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 14 Jan 2023 04:03:10 +0000 Fri, 13 Jan 2023 08:11:19 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 14 Jan 2023 04:03:10 +0000 Fri, 13 Jan 2023 08:11:29 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 10.0.1.106\n ExternalIP: 43.156.133.188\n Hostname: 10.0.1.106\nCapacity:\n cpu: 8\n ephemeral-storage: 206357460Ki\n example.com/fakecpu: 1k\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 16124972Ki\n pods: 61\nAllocatable:\n cpu: 7800m\n ephemeral-storage: 190179034822\n example.com/fakecpu: 1k\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 13296684Ki\n pods: 61\nSystem Info:\n Machine ID: a59e5c75c3a7468fa19e585514349c33\n System UUID: a59e5c75-c3a7-468f-a19e-585514349c33\n Boot ID: 44347bcf-6046-4f8e-9f6b-7a2b04e54e0d\n Kernel Version: 5.4.119-1-tlinux4-0010.2\n OS Image: TencentOS Server 3.2 (Final)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.6.9-tke.2\n Kubelet Version: v1.24.4-tke.3\n Kube-Proxy Version: v1.24.4-tke.3\nPodCIDR: 10.52.1.0/26\nPodCIDRs: 10.52.1.0/26\nProviderID: qcloud:///900001/ins-fe9gbs5o\nNon-terminated Pods: (11 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n default kubernetes-proxy-544fb566b4-fh64j 100m (1%) 0 (0%) 128Mi (0%) 0 (0%) 43m\n kube-system csi-cbs-node-5wf2s 0 (0%) 
0 (0%) 0 (0%) 0 (0%) 19h\n kube-system ip-masq-agent-rx9k6 0 (0%) 0 (0%) 0 (0%) 0 (0%) 19h\n kube-system kube-proxy-s6xxg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 19h\n kube-system tke-bridge-agent-frbcm 0 (0%) 0 (0%) 0 (0%) 0 (0%) 19h\n kube-system tke-cni-agent-nv7pn 0 (0%) 0 (0%) 0 (0%) 0 (0%) 19h\n kube-system tke-monitor-agent-xhdhg 10m (0%) 100m (1%) 30Mi (0%) 100Mi (0%) 19h\n kubectl-4627 agnhost-primary-jcsdw 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3s\n kubelet-test-2855 busybox-readonly-fs6d154bce-5bfd-499a-9d34-391c0dd854f1 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11s\n sonobuoy sonobuoy-systemd-logs-daemon-set-0fc25bd2d6604eac-jkjnf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 29m\n statefulset-8862 ss2-1 0 (0%) 0 (0%) 0 (0%) 0 (0%) 90m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 110m (1%) 100m (1%)\n memory 158Mi (1%) 100Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\n example.com/fakecpu 0 0\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal RegisteredNode 30m node-controller Node 10.0.1.106 event: Registered Node 10.0.1.106 in Controller\n" +Jan 14 04:04:10.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4627 describe namespace kubectl-4627' +Jan 14 04:04:10.079: INFO: stderr: "" +Jan 14 04:04:10.079: INFO: stdout: "Name: kubectl-4627\nLabels: e2e-framework=kubectl\n e2e-run=6e25d48d-27da-41bb-9e53-5ea04a25720d\n kubernetes.io/metadata.name=kubectl-4627\n pod-security.kubernetes.io/enforce=baseline\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 +Jan 14 04:04:10.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 +STEP: Destroying namespace "kubectl-4627" for this suite. 
01/14/23 04:04:10.084 +------------------------------ +• [2.314 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl describe + test/e2e/kubectl/kubectl.go:1270 + should check if kubectl describe prints relevant information for rc and pods [Conformance] + test/e2e/kubectl/kubectl.go:1276 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:04:07.775 + Jan 14 04:04:07.775: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename kubectl 01/14/23 04:04:07.776 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:04:07.789 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:04:07.791 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] + test/e2e/kubectl/kubectl.go:1276 + Jan 14 04:04:07.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4627 create -f -' + Jan 14 04:04:07.969: INFO: stderr: "" + Jan 14 04:04:07.969: INFO: stdout: "replicationcontroller/agnhost-primary created\n" + Jan 14 04:04:07.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4627 create -f -' + Jan 14 04:04:08.684: INFO: stderr: "" + Jan 14 04:04:08.684: INFO: stdout: "service/agnhost-primary created\n" + STEP: Waiting for Agnhost primary to start. 01/14/23 04:04:08.684 + Jan 14 04:04:09.689: INFO: Selector matched 1 pods for map[app:agnhost] + Jan 14 04:04:09.689: INFO: Found 1 / 1 + Jan 14 04:04:09.689: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 + Jan 14 04:04:09.692: INFO: Selector matched 1 pods for map[app:agnhost] + Jan 14 04:04:09.692: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
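Equivalently, the fields that `kubectl describe pod` prints in the captured output below can be read directly from the API with client-go instead of shelling out; a sketch under the assumption that the same kubeconfig is reused (names are from this run and purely illustrative):

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig-1841037317")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        pod, err := cs.CoreV1().Pods("kubectl-4627").Get(
            context.Background(), "agnhost-primary-jcsdw", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        // The describe output in the log is assembled from fields like these.
        fmt.Println("Node:", pod.Spec.NodeName)
        fmt.Println("Status:", pod.Status.Phase)
        fmt.Println("IP:", pod.Status.PodIP)
    }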
+ Jan 14 04:04:09.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4627 describe pod agnhost-primary-jcsdw' + Jan 14 04:04:09.763: INFO: stderr: "" + Jan 14 04:04:09.763: INFO: stdout: "Name: agnhost-primary-jcsdw\nNamespace: kubectl-4627\nPriority: 0\nService Account: default\nNode: 10.0.1.106/10.0.1.106\nStart Time: Sat, 14 Jan 2023 04:04:07 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: tke.cloud.tencent.com/networks-status:\n [{\n \"name\": \"tke-bridge\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.52.1.20\"\n ],\n \"mac\": \"ba:0e:59:e0:cf:e6\",\n \"default\": true,\n \"dns\": {}\n }]\nStatus: Running\nIP: 10.52.1.20\nIPs:\n IP: 10.52.1.20\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://49d7b2c16661d1e79751290bf62c39cbfd297b87364792777e22361e68899316\n Image: registry.k8s.io/e2e-test-images/agnhost:2.43\n Image ID: registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sat, 14 Jan 2023 04:04:08 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vwk94 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-vwk94:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 1s default-scheduler Successfully assigned kubectl-4627/agnhost-primary-jcsdw to 10.0.1.106\n Normal Pulled 1s kubelet Container image \"registry.k8s.io/e2e-test-images/agnhost:2.43\" already present on machine\n Normal Created 1s kubelet Created container agnhost-primary\n Normal Started 1s kubelet Started container agnhost-primary\n" + Jan 14 04:04:09.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4627 describe rc agnhost-primary' + Jan 14 04:04:09.840: INFO: stderr: "" + Jan 14 04:04:09.840: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-4627\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: registry.k8s.io/e2e-test-images/agnhost:2.43\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 2s replication-controller Created pod: agnhost-primary-jcsdw\n" + Jan 14 04:04:09.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4627 describe service agnhost-primary' + Jan 14 04:04:09.910: INFO: stderr: "" + Jan 14 04:04:09.910: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-4627\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.55.252.103\nIPs: 
10.55.252.103\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.52.1.20:6379\nSession Affinity: None\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal EnsuringService 1s service-controller Deleted Loadbalancer\n Normal EnsureServiceSuccess 1s service-controller Service Sync Success. RetrunCode: S2000\n" + Jan 14 04:04:09.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4627 describe node 10.0.1.106' + Jan 14 04:04:10.011: INFO: stderr: "" + Jan 14 04:04:10.011: INFO: stdout: "Name: 10.0.1.106\nRoles: \nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/instance-type=S3.2XLARGE16\n beta.kubernetes.io/os=linux\n cloud.tencent.com/node-instance-id=ins-fe9gbs5o\n failure-domain.beta.kubernetes.io/region=sg\n failure-domain.beta.kubernetes.io/zone=900001\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=10.0.1.106\n kubernetes.io/os=linux\n node.kubernetes.io/instance-type=S3.2XLARGE16\n topology.com.tencent.cloud.csi.cbs/zone=ap-singapore-1\n topology.kubernetes.io/region=sg\n topology.kubernetes.io/zone=900001\nAnnotations: csi.volume.kubernetes.io/nodeid: {\"com.tencent.cloud.csi.cbs\":\"ins-fe9gbs5o\"}\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Fri, 13 Jan 2023 08:11:19 +0000\nTaints: \nUnschedulable: false\nLease:\n HolderIdentity: 10.0.1.106\n AcquireTime: \n RenewTime: Sat, 14 Jan 2023 04:04:04 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Fri, 13 Jan 2023 08:11:28 +0000 Fri, 13 Jan 2023 08:11:28 +0000 RouteCreated RouteController created a route\n MemoryPressure False Sat, 14 Jan 2023 04:03:10 +0000 Fri, 13 Jan 2023 08:11:19 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 14 Jan 2023 04:03:10 +0000 Fri, 13 Jan 2023 08:11:19 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 14 Jan 2023 04:03:10 +0000 Fri, 13 Jan 2023 08:11:19 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 14 Jan 2023 04:03:10 +0000 Fri, 13 Jan 2023 08:11:29 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 10.0.1.106\n ExternalIP: 43.156.133.188\n Hostname: 10.0.1.106\nCapacity:\n cpu: 8\n ephemeral-storage: 206357460Ki\n example.com/fakecpu: 1k\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 16124972Ki\n pods: 61\nAllocatable:\n cpu: 7800m\n ephemeral-storage: 190179034822\n example.com/fakecpu: 1k\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 13296684Ki\n pods: 61\nSystem Info:\n Machine ID: a59e5c75c3a7468fa19e585514349c33\n System UUID: a59e5c75-c3a7-468f-a19e-585514349c33\n Boot ID: 44347bcf-6046-4f8e-9f6b-7a2b04e54e0d\n Kernel Version: 5.4.119-1-tlinux4-0010.2\n OS Image: TencentOS Server 3.2 (Final)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.6.9-tke.2\n Kubelet Version: v1.24.4-tke.3\n Kube-Proxy Version: v1.24.4-tke.3\nPodCIDR: 10.52.1.0/26\nPodCIDRs: 10.52.1.0/26\nProviderID: qcloud:///900001/ins-fe9gbs5o\nNon-terminated Pods: (11 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n default kubernetes-proxy-544fb566b4-fh64j 100m (1%) 0 (0%) 128Mi (0%) 0 (0%) 43m\n kube-system csi-cbs-node-5wf2s 0 
(0%) 0 (0%) 0 (0%) 0 (0%) 19h\n kube-system ip-masq-agent-rx9k6 0 (0%) 0 (0%) 0 (0%) 0 (0%) 19h\n kube-system kube-proxy-s6xxg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 19h\n kube-system tke-bridge-agent-frbcm 0 (0%) 0 (0%) 0 (0%) 0 (0%) 19h\n kube-system tke-cni-agent-nv7pn 0 (0%) 0 (0%) 0 (0%) 0 (0%) 19h\n kube-system tke-monitor-agent-xhdhg 10m (0%) 100m (1%) 30Mi (0%) 100Mi (0%) 19h\n kubectl-4627 agnhost-primary-jcsdw 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3s\n kubelet-test-2855 busybox-readonly-fs6d154bce-5bfd-499a-9d34-391c0dd854f1 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11s\n sonobuoy sonobuoy-systemd-logs-daemon-set-0fc25bd2d6604eac-jkjnf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 29m\n statefulset-8862 ss2-1 0 (0%) 0 (0%) 0 (0%) 0 (0%) 90m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 110m (1%) 100m (1%)\n memory 158Mi (1%) 100Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\n example.com/fakecpu 0 0\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal RegisteredNode 30m node-controller Node 10.0.1.106 event: Registered Node 10.0.1.106 in Controller\n" + Jan 14 04:04:10.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4627 describe namespace kubectl-4627' + Jan 14 04:04:10.079: INFO: stderr: "" + Jan 14 04:04:10.079: INFO: stdout: "Name: kubectl-4627\nLabels: e2e-framework=kubectl\n e2e-run=6e25d48d-27da-41bb-9e53-5ea04a25720d\n kubernetes.io/metadata.name=kubectl-4627\n pod-security.kubernetes.io/enforce=baseline\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 + Jan 14 04:04:10.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 + STEP: Destroying namespace "kubectl-4627" for this suite. 
01/14/23 04:04:10.084 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-api-machinery] Watchers + should observe add, update, and delete watch notifications on configmaps [Conformance] + test/e2e/apimachinery/watch.go:60 +[BeforeEach] [sig-api-machinery] Watchers + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:04:10.089 +Jan 14 04:04:10.089: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename watch 01/14/23 04:04:10.09 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:04:10.109 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:04:10.112 +[BeforeEach] [sig-api-machinery] Watchers + test/e2e/framework/metrics/init/init.go:31 +[It] should observe add, update, and delete watch notifications on configmaps [Conformance] + test/e2e/apimachinery/watch.go:60 +STEP: creating a watch on configmaps with label A 01/14/23 04:04:10.135 +STEP: creating a watch on configmaps with label B 01/14/23 04:04:10.137 +STEP: creating a watch on configmaps with label A or B 01/14/23 04:04:10.138 +STEP: creating a configmap with label A and ensuring the correct watchers observe the notification 01/14/23 04:04:10.139 +Jan 14 04:04:10.145: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3399 33793564-973d-42ac-9c50-29ad1408a493 427394 0 2023-01-14 04:04:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-01-14 04:04:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Jan 14 04:04:10.145: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3399 33793564-973d-42ac-9c50-29ad1408a493 427394 0 2023-01-14 04:04:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-01-14 04:04:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying configmap A and ensuring the correct watchers observe the notification 01/14/23 04:04:10.146 +Jan 14 04:04:10.157: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3399 33793564-973d-42ac-9c50-29ad1408a493 427395 0 2023-01-14 04:04:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-01-14 04:04:10 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +Jan 14 04:04:10.157: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3399 33793564-973d-42ac-9c50-29ad1408a493 427395 0 2023-01-14 04:04:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-01-14 04:04:10 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying configmap A again and ensuring the correct watchers observe the notification 01/14/23 04:04:10.157 +Jan 14 04:04:10.165: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3399 33793564-973d-42ac-9c50-29ad1408a493 427396 0 2023-01-14 
04:04:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-01-14 04:04:10 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Jan 14 04:04:10.165: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3399 33793564-973d-42ac-9c50-29ad1408a493 427396 0 2023-01-14 04:04:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-01-14 04:04:10 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: deleting configmap A and ensuring the correct watchers observe the notification 01/14/23 04:04:10.166 +Jan 14 04:04:10.172: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3399 33793564-973d-42ac-9c50-29ad1408a493 427397 0 2023-01-14 04:04:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-01-14 04:04:10 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Jan 14 04:04:10.172: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3399 33793564-973d-42ac-9c50-29ad1408a493 427397 0 2023-01-14 04:04:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-01-14 04:04:10 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: creating a configmap with label B and ensuring the correct watchers observe the notification 01/14/23 04:04:10.172 +Jan 14 04:04:10.179: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3399 ae69c42a-b38e-4785-b5c8-955b8dd78f0b 427398 0 2023-01-14 04:04:10 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-01-14 04:04:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Jan 14 04:04:10.179: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3399 ae69c42a-b38e-4785-b5c8-955b8dd78f0b 427398 0 2023-01-14 04:04:10 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-01-14 04:04:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: deleting configmap B and ensuring the correct watchers observe the notification 01/14/23 04:04:20.18 +Jan 14 04:04:20.187: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3399 ae69c42a-b38e-4785-b5c8-955b8dd78f0b 427492 0 2023-01-14 04:04:10 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-01-14 04:04:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Jan 14 04:04:20.188: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3399 
ae69c42a-b38e-4785-b5c8-955b8dd78f0b 427492 0 2023-01-14 04:04:10 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-01-14 04:04:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + test/e2e/framework/node/init/init.go:32 +Jan 14 04:04:30.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Watchers + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] Watchers + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] Watchers + tear down framework | framework.go:193 +STEP: Destroying namespace "watch-3399" for this suite. 01/14/23 04:04:30.195 +------------------------------ +• [SLOW TEST] [20.111 seconds] +[sig-api-machinery] Watchers +test/e2e/apimachinery/framework.go:23 + should observe add, update, and delete watch notifications on configmaps [Conformance] + test/e2e/apimachinery/watch.go:60 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Watchers + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:04:10.089 + Jan 14 04:04:10.089: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename watch 01/14/23 04:04:10.09 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:04:10.109 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:04:10.112 + [BeforeEach] [sig-api-machinery] Watchers + test/e2e/framework/metrics/init/init.go:31 + [It] should observe add, update, and delete watch notifications on configmaps [Conformance] + test/e2e/apimachinery/watch.go:60 + STEP: creating a watch on configmaps with label A 01/14/23 04:04:10.135 + STEP: creating a watch on configmaps with label B 01/14/23 04:04:10.137 + STEP: creating a watch on configmaps with label A or B 01/14/23 04:04:10.138 + STEP: creating a configmap with label A and ensuring the correct watchers observe the notification 01/14/23 04:04:10.139 + Jan 14 04:04:10.145: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3399 33793564-973d-42ac-9c50-29ad1408a493 427394 0 2023-01-14 04:04:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-01-14 04:04:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} + Jan 14 04:04:10.145: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3399 33793564-973d-42ac-9c50-29ad1408a493 427394 0 2023-01-14 04:04:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-01-14 04:04:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} + STEP: modifying configmap A and ensuring the correct watchers observe the notification 01/14/23 04:04:10.146 + Jan 14 04:04:10.157: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3399 33793564-973d-42ac-9c50-29ad1408a493 427395 0 2023-01-14 04:04:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-01-14 04:04:10 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} + Jan 14 04:04:10.157: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3399 33793564-973d-42ac-9c50-29ad1408a493 427395 0 2023-01-14 04:04:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-01-14 04:04:10 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} + STEP: modifying configmap A again and ensuring the correct watchers observe the notification 01/14/23 04:04:10.157 + Jan 14 04:04:10.165: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3399 33793564-973d-42ac-9c50-29ad1408a493 427396 0 2023-01-14 04:04:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-01-14 04:04:10 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} + Jan 14 04:04:10.165: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3399 33793564-973d-42ac-9c50-29ad1408a493 427396 0 2023-01-14 04:04:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-01-14 04:04:10 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} + STEP: deleting configmap A and ensuring the correct watchers observe the notification 01/14/23 04:04:10.166 + Jan 14 04:04:10.172: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3399 33793564-973d-42ac-9c50-29ad1408a493 427397 0 2023-01-14 04:04:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-01-14 04:04:10 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} + Jan 14 04:04:10.172: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3399 33793564-973d-42ac-9c50-29ad1408a493 427397 0 2023-01-14 04:04:10 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-01-14 04:04:10 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} + STEP: creating a configmap with label B and ensuring the correct watchers observe the notification 01/14/23 04:04:10.172 + Jan 14 04:04:10.179: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3399 ae69c42a-b38e-4785-b5c8-955b8dd78f0b 427398 0 2023-01-14 04:04:10 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-01-14 04:04:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} + Jan 14 04:04:10.179: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3399 ae69c42a-b38e-4785-b5c8-955b8dd78f0b 427398 0 
2023-01-14 04:04:10 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-01-14 04:04:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} + STEP: deleting configmap B and ensuring the correct watchers observe the notification 01/14/23 04:04:20.18 + Jan 14 04:04:20.187: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3399 ae69c42a-b38e-4785-b5c8-955b8dd78f0b 427492 0 2023-01-14 04:04:10 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-01-14 04:04:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} + Jan 14 04:04:20.188: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3399 ae69c42a-b38e-4785-b5c8-955b8dd78f0b 427492 0 2023-01-14 04:04:10 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-01-14 04:04:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} + [AfterEach] [sig-api-machinery] Watchers + test/e2e/framework/node/init/init.go:32 + Jan 14 04:04:30.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Watchers + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] Watchers + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] Watchers + tear down framework | framework.go:193 + STEP: Destroying namespace "watch-3399" for this suite. 01/14/23 04:04:30.195 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's memory request [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:235 +[BeforeEach] [sig-storage] Downward API volume + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:04:30.201 +Jan 14 04:04:30.201: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename downward-api 01/14/23 04:04:30.202 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:04:30.216 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:04:30.219 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 +[It] should provide container's memory request [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:235 +STEP: Creating a pod to test downward API volume plugin 01/14/23 04:04:30.222 +Jan 14 04:04:30.233: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f955b810-b6b4-4d0a-b29a-5620f67844b9" in namespace "downward-api-8691" to be "Succeeded or Failed" +Jan 14 04:04:30.237: INFO: Pod "downwardapi-volume-f955b810-b6b4-4d0a-b29a-5620f67844b9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.453417ms +Jan 14 04:04:32.242: INFO: Pod "downwardapi-volume-f955b810-b6b4-4d0a-b29a-5620f67844b9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008451101s +Jan 14 04:04:34.243: INFO: Pod "downwardapi-volume-f955b810-b6b4-4d0a-b29a-5620f67844b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009578144s +STEP: Saw pod success 01/14/23 04:04:34.243 +Jan 14 04:04:34.243: INFO: Pod "downwardapi-volume-f955b810-b6b4-4d0a-b29a-5620f67844b9" satisfied condition "Succeeded or Failed" +Jan 14 04:04:34.246: INFO: Trying to get logs from node 10.0.1.99 pod downwardapi-volume-f955b810-b6b4-4d0a-b29a-5620f67844b9 container client-container: +STEP: delete the pod 01/14/23 04:04:34.259 +Jan 14 04:04:34.273: INFO: Waiting for pod downwardapi-volume-f955b810-b6b4-4d0a-b29a-5620f67844b9 to disappear +Jan 14 04:04:34.275: INFO: Pod downwardapi-volume-f955b810-b6b4-4d0a-b29a-5620f67844b9 no longer exists +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 +Jan 14 04:04:34.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 +STEP: Destroying namespace "downward-api-8691" for this suite. 01/14/23 04:04:34.28 +------------------------------ +• [4.087 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should provide container's memory request [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:235 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Downward API volume + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:04:30.201 + Jan 14 04:04:30.201: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename downward-api 01/14/23 04:04:30.202 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:04:30.216 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:04:30.219 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 + [It] should provide container's memory request [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:235 + STEP: Creating a pod to test downward API volume plugin 01/14/23 04:04:30.222 + Jan 14 04:04:30.233: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f955b810-b6b4-4d0a-b29a-5620f67844b9" in namespace "downward-api-8691" to be "Succeeded or Failed" + Jan 14 04:04:30.237: INFO: Pod "downwardapi-volume-f955b810-b6b4-4d0a-b29a-5620f67844b9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.453417ms + Jan 14 04:04:32.242: INFO: Pod "downwardapi-volume-f955b810-b6b4-4d0a-b29a-5620f67844b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008451101s + Jan 14 04:04:34.243: INFO: Pod "downwardapi-volume-f955b810-b6b4-4d0a-b29a-5620f67844b9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009578144s + STEP: Saw pod success 01/14/23 04:04:34.243 + Jan 14 04:04:34.243: INFO: Pod "downwardapi-volume-f955b810-b6b4-4d0a-b29a-5620f67844b9" satisfied condition "Succeeded or Failed" + Jan 14 04:04:34.246: INFO: Trying to get logs from node 10.0.1.99 pod downwardapi-volume-f955b810-b6b4-4d0a-b29a-5620f67844b9 container client-container: + STEP: delete the pod 01/14/23 04:04:34.259 + Jan 14 04:04:34.273: INFO: Waiting for pod downwardapi-volume-f955b810-b6b4-4d0a-b29a-5620f67844b9 to disappear + Jan 14 04:04:34.275: INFO: Pod downwardapi-volume-f955b810-b6b4-4d0a-b29a-5620f67844b9 no longer exists + [AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 + Jan 14 04:04:34.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 + STEP: Destroying namespace "downward-api-8691" for this suite. 01/14/23 04:04:34.28 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-network] Services + should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] + test/e2e/network/service.go:2250 +[BeforeEach] [sig-network] Services + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:04:34.288 +Jan 14 04:04:34.289: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename services 01/14/23 04:04:34.289 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:04:34.314 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:04:34.317 +[BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] + test/e2e/network/service.go:2250 +STEP: creating service in namespace services-6110 01/14/23 04:04:34.319 +STEP: creating service affinity-nodeport-transition in namespace services-6110 01/14/23 04:04:34.319 +STEP: creating replication controller affinity-nodeport-transition in namespace services-6110 01/14/23 04:04:34.333 +I0114 04:04:34.348319 25 runners.go:193] Created replication controller with name: affinity-nodeport-transition, namespace: services-6110, replica count: 3 +I0114 04:04:37.398823 25 runners.go:193] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Jan 14 04:04:37.409: INFO: Creating new exec pod +Jan 14 04:04:37.418: INFO: Waiting up to 5m0s for pod "execpod-affinitywxb82" in namespace "services-6110" to be "running" +Jan 14 04:04:37.421: INFO: Pod "execpod-affinitywxb82": Phase="Pending", Reason="", readiness=false. Elapsed: 3.027743ms +Jan 14 04:04:39.445: INFO: Pod "execpod-affinitywxb82": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.027580299s +Jan 14 04:04:39.445: INFO: Pod "execpod-affinitywxb82" satisfied condition "running" +Jan 14 04:04:40.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-6110 exec execpod-affinitywxb82 -- /bin/sh -x -c nc -v -z -w 2 affinity-nodeport-transition 80' +Jan 14 04:04:40.565: INFO: stderr: "+ nc -v -z -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" +Jan 14 04:04:40.565: INFO: stdout: "" +Jan 14 04:04:40.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-6110 exec execpod-affinitywxb82 -- /bin/sh -x -c nc -v -z -w 2 10.55.254.101 80' +Jan 14 04:04:40.677: INFO: stderr: "+ nc -v -z -w 2 10.55.254.101 80\nConnection to 10.55.254.101 80 port [tcp/http] succeeded!\n" +Jan 14 04:04:40.677: INFO: stdout: "" +Jan 14 04:04:40.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-6110 exec execpod-affinitywxb82 -- /bin/sh -x -c nc -v -z -w 2 10.0.1.99 30395' +Jan 14 04:04:40.793: INFO: stderr: "+ nc -v -z -w 2 10.0.1.99 30395\nConnection to 10.0.1.99 30395 port [tcp/*] succeeded!\n" +Jan 14 04:04:40.793: INFO: stdout: "" +Jan 14 04:04:40.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-6110 exec execpod-affinitywxb82 -- /bin/sh -x -c nc -v -z -w 2 10.0.1.106 30395' +Jan 14 04:04:40.900: INFO: stderr: "+ nc -v -z -w 2 10.0.1.106 30395\nConnection to 10.0.1.106 30395 port [tcp/*] succeeded!\n" +Jan 14 04:04:40.900: INFO: stdout: "" +Jan 14 04:04:40.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-6110 exec execpod-affinitywxb82 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.0.1.106:30395/ ; done' +Jan 14 04:04:41.080: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n" +Jan 14 04:04:41.080: INFO: stdout: 
"\naffinity-nodeport-transition-9hx6l\naffinity-nodeport-transition-9hx6l\naffinity-nodeport-transition-rpp55\naffinity-nodeport-transition-rpp55\naffinity-nodeport-transition-qdkrw\naffinity-nodeport-transition-9hx6l\naffinity-nodeport-transition-rpp55\naffinity-nodeport-transition-9hx6l\naffinity-nodeport-transition-9hx6l\naffinity-nodeport-transition-qdkrw\naffinity-nodeport-transition-rpp55\naffinity-nodeport-transition-rpp55\naffinity-nodeport-transition-9hx6l\naffinity-nodeport-transition-9hx6l\naffinity-nodeport-transition-9hx6l\naffinity-nodeport-transition-qdkrw" +Jan 14 04:04:41.080: INFO: Received response from host: affinity-nodeport-transition-9hx6l +Jan 14 04:04:41.080: INFO: Received response from host: affinity-nodeport-transition-9hx6l +Jan 14 04:04:41.080: INFO: Received response from host: affinity-nodeport-transition-rpp55 +Jan 14 04:04:41.080: INFO: Received response from host: affinity-nodeport-transition-rpp55 +Jan 14 04:04:41.080: INFO: Received response from host: affinity-nodeport-transition-qdkrw +Jan 14 04:04:41.080: INFO: Received response from host: affinity-nodeport-transition-9hx6l +Jan 14 04:04:41.080: INFO: Received response from host: affinity-nodeport-transition-rpp55 +Jan 14 04:04:41.080: INFO: Received response from host: affinity-nodeport-transition-9hx6l +Jan 14 04:04:41.080: INFO: Received response from host: affinity-nodeport-transition-9hx6l +Jan 14 04:04:41.080: INFO: Received response from host: affinity-nodeport-transition-qdkrw +Jan 14 04:04:41.080: INFO: Received response from host: affinity-nodeport-transition-rpp55 +Jan 14 04:04:41.080: INFO: Received response from host: affinity-nodeport-transition-rpp55 +Jan 14 04:04:41.080: INFO: Received response from host: affinity-nodeport-transition-9hx6l +Jan 14 04:04:41.080: INFO: Received response from host: affinity-nodeport-transition-9hx6l +Jan 14 04:04:41.080: INFO: Received response from host: affinity-nodeport-transition-9hx6l +Jan 14 04:04:41.080: INFO: Received response from host: affinity-nodeport-transition-qdkrw +Jan 14 04:04:41.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-6110 exec execpod-affinitywxb82 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.0.1.106:30395/ ; done' +Jan 14 04:04:41.249: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n" +Jan 14 04:04:41.249: INFO: stdout: 
"\naffinity-nodeport-transition-rpp55\naffinity-nodeport-transition-rpp55\naffinity-nodeport-transition-rpp55\naffinity-nodeport-transition-rpp55\naffinity-nodeport-transition-rpp55\naffinity-nodeport-transition-rpp55\naffinity-nodeport-transition-rpp55\naffinity-nodeport-transition-rpp55\naffinity-nodeport-transition-rpp55\naffinity-nodeport-transition-rpp55\naffinity-nodeport-transition-rpp55\naffinity-nodeport-transition-rpp55\naffinity-nodeport-transition-rpp55\naffinity-nodeport-transition-rpp55\naffinity-nodeport-transition-rpp55\naffinity-nodeport-transition-rpp55" +Jan 14 04:04:41.249: INFO: Received response from host: affinity-nodeport-transition-rpp55 +Jan 14 04:04:41.249: INFO: Received response from host: affinity-nodeport-transition-rpp55 +Jan 14 04:04:41.249: INFO: Received response from host: affinity-nodeport-transition-rpp55 +Jan 14 04:04:41.249: INFO: Received response from host: affinity-nodeport-transition-rpp55 +Jan 14 04:04:41.249: INFO: Received response from host: affinity-nodeport-transition-rpp55 +Jan 14 04:04:41.249: INFO: Received response from host: affinity-nodeport-transition-rpp55 +Jan 14 04:04:41.249: INFO: Received response from host: affinity-nodeport-transition-rpp55 +Jan 14 04:04:41.249: INFO: Received response from host: affinity-nodeport-transition-rpp55 +Jan 14 04:04:41.249: INFO: Received response from host: affinity-nodeport-transition-rpp55 +Jan 14 04:04:41.249: INFO: Received response from host: affinity-nodeport-transition-rpp55 +Jan 14 04:04:41.249: INFO: Received response from host: affinity-nodeport-transition-rpp55 +Jan 14 04:04:41.249: INFO: Received response from host: affinity-nodeport-transition-rpp55 +Jan 14 04:04:41.249: INFO: Received response from host: affinity-nodeport-transition-rpp55 +Jan 14 04:04:41.249: INFO: Received response from host: affinity-nodeport-transition-rpp55 +Jan 14 04:04:41.249: INFO: Received response from host: affinity-nodeport-transition-rpp55 +Jan 14 04:04:41.249: INFO: Received response from host: affinity-nodeport-transition-rpp55 +Jan 14 04:04:41.249: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-6110, will wait for the garbage collector to delete the pods 01/14/23 04:04:41.267 +Jan 14 04:04:41.329: INFO: Deleting ReplicationController affinity-nodeport-transition took: 7.305708ms +Jan 14 04:04:41.430: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 100.970817ms +[AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 +Jan 14 04:04:43.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 +STEP: Destroying namespace "services-6110" for this suite. 
01/14/23 04:04:43.455 +------------------------------ +• [SLOW TEST] [9.172 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] + test/e2e/network/service.go:2250 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:04:34.288 + Jan 14 04:04:34.289: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename services 01/14/23 04:04:34.289 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:04:34.314 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:04:34.317 + [BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] + test/e2e/network/service.go:2250 + STEP: creating service in namespace services-6110 01/14/23 04:04:34.319 + STEP: creating service affinity-nodeport-transition in namespace services-6110 01/14/23 04:04:34.319 + STEP: creating replication controller affinity-nodeport-transition in namespace services-6110 01/14/23 04:04:34.333 + I0114 04:04:34.348319 25 runners.go:193] Created replication controller with name: affinity-nodeport-transition, namespace: services-6110, replica count: 3 + I0114 04:04:37.398823 25 runners.go:193] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + Jan 14 04:04:37.409: INFO: Creating new exec pod + Jan 14 04:04:37.418: INFO: Waiting up to 5m0s for pod "execpod-affinitywxb82" in namespace "services-6110" to be "running" + Jan 14 04:04:37.421: INFO: Pod "execpod-affinitywxb82": Phase="Pending", Reason="", readiness=false. Elapsed: 3.027743ms + Jan 14 04:04:39.445: INFO: Pod "execpod-affinitywxb82": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.027580299s + Jan 14 04:04:39.445: INFO: Pod "execpod-affinitywxb82" satisfied condition "running" + Jan 14 04:04:40.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-6110 exec execpod-affinitywxb82 -- /bin/sh -x -c nc -v -z -w 2 affinity-nodeport-transition 80' + Jan 14 04:04:40.565: INFO: stderr: "+ nc -v -z -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" + Jan 14 04:04:40.565: INFO: stdout: "" + Jan 14 04:04:40.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-6110 exec execpod-affinitywxb82 -- /bin/sh -x -c nc -v -z -w 2 10.55.254.101 80' + Jan 14 04:04:40.677: INFO: stderr: "+ nc -v -z -w 2 10.55.254.101 80\nConnection to 10.55.254.101 80 port [tcp/http] succeeded!\n" + Jan 14 04:04:40.677: INFO: stdout: "" + Jan 14 04:04:40.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-6110 exec execpod-affinitywxb82 -- /bin/sh -x -c nc -v -z -w 2 10.0.1.99 30395' + Jan 14 04:04:40.793: INFO: stderr: "+ nc -v -z -w 2 10.0.1.99 30395\nConnection to 10.0.1.99 30395 port [tcp/*] succeeded!\n" + Jan 14 04:04:40.793: INFO: stdout: "" + Jan 14 04:04:40.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-6110 exec execpod-affinitywxb82 -- /bin/sh -x -c nc -v -z -w 2 10.0.1.106 30395' + Jan 14 04:04:40.900: INFO: stderr: "+ nc -v -z -w 2 10.0.1.106 30395\nConnection to 10.0.1.106 30395 port [tcp/*] succeeded!\n" + Jan 14 04:04:40.900: INFO: stdout: "" + Jan 14 04:04:40.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-6110 exec execpod-affinitywxb82 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.0.1.106:30395/ ; done' + Jan 14 04:04:41.080: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n" + Jan 14 04:04:41.080: INFO: stdout: 
"\naffinity-nodeport-transition-9hx6l\naffinity-nodeport-transition-9hx6l\naffinity-nodeport-transition-rpp55\naffinity-nodeport-transition-rpp55\naffinity-nodeport-transition-qdkrw\naffinity-nodeport-transition-9hx6l\naffinity-nodeport-transition-rpp55\naffinity-nodeport-transition-9hx6l\naffinity-nodeport-transition-9hx6l\naffinity-nodeport-transition-qdkrw\naffinity-nodeport-transition-rpp55\naffinity-nodeport-transition-rpp55\naffinity-nodeport-transition-9hx6l\naffinity-nodeport-transition-9hx6l\naffinity-nodeport-transition-9hx6l\naffinity-nodeport-transition-qdkrw" + Jan 14 04:04:41.080: INFO: Received response from host: affinity-nodeport-transition-9hx6l + Jan 14 04:04:41.080: INFO: Received response from host: affinity-nodeport-transition-9hx6l + Jan 14 04:04:41.080: INFO: Received response from host: affinity-nodeport-transition-rpp55 + Jan 14 04:04:41.080: INFO: Received response from host: affinity-nodeport-transition-rpp55 + Jan 14 04:04:41.080: INFO: Received response from host: affinity-nodeport-transition-qdkrw + Jan 14 04:04:41.080: INFO: Received response from host: affinity-nodeport-transition-9hx6l + Jan 14 04:04:41.080: INFO: Received response from host: affinity-nodeport-transition-rpp55 + Jan 14 04:04:41.080: INFO: Received response from host: affinity-nodeport-transition-9hx6l + Jan 14 04:04:41.080: INFO: Received response from host: affinity-nodeport-transition-9hx6l + Jan 14 04:04:41.080: INFO: Received response from host: affinity-nodeport-transition-qdkrw + Jan 14 04:04:41.080: INFO: Received response from host: affinity-nodeport-transition-rpp55 + Jan 14 04:04:41.080: INFO: Received response from host: affinity-nodeport-transition-rpp55 + Jan 14 04:04:41.080: INFO: Received response from host: affinity-nodeport-transition-9hx6l + Jan 14 04:04:41.080: INFO: Received response from host: affinity-nodeport-transition-9hx6l + Jan 14 04:04:41.080: INFO: Received response from host: affinity-nodeport-transition-9hx6l + Jan 14 04:04:41.080: INFO: Received response from host: affinity-nodeport-transition-qdkrw + Jan 14 04:04:41.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-6110 exec execpod-affinitywxb82 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.0.1.106:30395/ ; done' + Jan 14 04:04:41.249: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30395/\n" + Jan 14 04:04:41.249: INFO: stdout: 
"\naffinity-nodeport-transition-rpp55\naffinity-nodeport-transition-rpp55\naffinity-nodeport-transition-rpp55\naffinity-nodeport-transition-rpp55\naffinity-nodeport-transition-rpp55\naffinity-nodeport-transition-rpp55\naffinity-nodeport-transition-rpp55\naffinity-nodeport-transition-rpp55\naffinity-nodeport-transition-rpp55\naffinity-nodeport-transition-rpp55\naffinity-nodeport-transition-rpp55\naffinity-nodeport-transition-rpp55\naffinity-nodeport-transition-rpp55\naffinity-nodeport-transition-rpp55\naffinity-nodeport-transition-rpp55\naffinity-nodeport-transition-rpp55" + Jan 14 04:04:41.249: INFO: Received response from host: affinity-nodeport-transition-rpp55 + Jan 14 04:04:41.249: INFO: Received response from host: affinity-nodeport-transition-rpp55 + Jan 14 04:04:41.249: INFO: Received response from host: affinity-nodeport-transition-rpp55 + Jan 14 04:04:41.249: INFO: Received response from host: affinity-nodeport-transition-rpp55 + Jan 14 04:04:41.249: INFO: Received response from host: affinity-nodeport-transition-rpp55 + Jan 14 04:04:41.249: INFO: Received response from host: affinity-nodeport-transition-rpp55 + Jan 14 04:04:41.249: INFO: Received response from host: affinity-nodeport-transition-rpp55 + Jan 14 04:04:41.249: INFO: Received response from host: affinity-nodeport-transition-rpp55 + Jan 14 04:04:41.249: INFO: Received response from host: affinity-nodeport-transition-rpp55 + Jan 14 04:04:41.249: INFO: Received response from host: affinity-nodeport-transition-rpp55 + Jan 14 04:04:41.249: INFO: Received response from host: affinity-nodeport-transition-rpp55 + Jan 14 04:04:41.249: INFO: Received response from host: affinity-nodeport-transition-rpp55 + Jan 14 04:04:41.249: INFO: Received response from host: affinity-nodeport-transition-rpp55 + Jan 14 04:04:41.249: INFO: Received response from host: affinity-nodeport-transition-rpp55 + Jan 14 04:04:41.249: INFO: Received response from host: affinity-nodeport-transition-rpp55 + Jan 14 04:04:41.249: INFO: Received response from host: affinity-nodeport-transition-rpp55 + Jan 14 04:04:41.249: INFO: Cleaning up the exec pod + STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-6110, will wait for the garbage collector to delete the pods 01/14/23 04:04:41.267 + Jan 14 04:04:41.329: INFO: Deleting ReplicationController affinity-nodeport-transition took: 7.305708ms + Jan 14 04:04:41.430: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 100.970817ms + [AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 + Jan 14 04:04:43.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 + STEP: Destroying namespace "services-6110" for this suite. 
01/14/23 04:04:43.455 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should delete pods created by rc when not orphaning [Conformance] + test/e2e/apimachinery/garbage_collector.go:312 +[BeforeEach] [sig-api-machinery] Garbage collector + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:04:43.462 +Jan 14 04:04:43.463: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename gc 01/14/23 04:04:43.463 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:04:43.478 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:04:43.48 +[BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:31 +[It] should delete pods created by rc when not orphaning [Conformance] + test/e2e/apimachinery/garbage_collector.go:312 +STEP: create the rc 01/14/23 04:04:43.482 +STEP: delete the rc 01/14/23 04:04:48.494 +STEP: wait for all pods to be garbage collected 01/14/23 04:04:48.502 +STEP: Gathering metrics 01/14/23 04:04:53.509 +Jan 14 04:04:53.534: INFO: Waiting up to 5m0s for pod "kube-controller-manager-10.0.1.231" in namespace "kube-system" to be "running and ready" +Jan 14 04:04:53.537: INFO: Pod "kube-controller-manager-10.0.1.231": Phase="Running", Reason="", readiness=true. Elapsed: 3.247296ms +Jan 14 04:04:53.537: INFO: The phase of Pod kube-controller-manager-10.0.1.231 is Running (Ready = true) +Jan 14 04:04:53.537: INFO: Pod "kube-controller-manager-10.0.1.231" satisfied condition "running and ready" +Jan 14 04:04:53.589: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/node/init/init.go:32 +Jan 14 04:04:53.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector + tear down framework | framework.go:193 +STEP: Destroying namespace "gc-7296" for this suite. 
01/14/23 04:04:53.595 +------------------------------ +• [SLOW TEST] [10.145 seconds] +[sig-api-machinery] Garbage collector +test/e2e/apimachinery/framework.go:23 + should delete pods created by rc when not orphaning [Conformance] + test/e2e/apimachinery/garbage_collector.go:312 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Garbage collector + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:04:43.462 + Jan 14 04:04:43.463: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename gc 01/14/23 04:04:43.463 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:04:43.478 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:04:43.48 + [BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:31 + [It] should delete pods created by rc when not orphaning [Conformance] + test/e2e/apimachinery/garbage_collector.go:312 + STEP: create the rc 01/14/23 04:04:43.482 + STEP: delete the rc 01/14/23 04:04:48.494 + STEP: wait for all pods to be garbage collected 01/14/23 04:04:48.502 + STEP: Gathering metrics 01/14/23 04:04:53.509 + Jan 14 04:04:53.534: INFO: Waiting up to 5m0s for pod "kube-controller-manager-10.0.1.231" in namespace "kube-system" to be "running and ready" + Jan 14 04:04:53.537: INFO: Pod "kube-controller-manager-10.0.1.231": Phase="Running", Reason="", readiness=true. Elapsed: 3.247296ms + Jan 14 04:04:53.537: INFO: The phase of Pod kube-controller-manager-10.0.1.231 is Running (Ready = true) + Jan 14 04:04:53.537: INFO: Pod "kube-controller-manager-10.0.1.231" satisfied condition "running and ready" + Jan 14 04:04:53.589: INFO: For apiserver_request_total: + For apiserver_request_latency_seconds: + For apiserver_init_events_total: + For garbage_collector_attempt_to_delete_queue_latency: + For garbage_collector_attempt_to_delete_work_duration: + For garbage_collector_attempt_to_orphan_queue_latency: + For garbage_collector_attempt_to_orphan_work_duration: + For garbage_collector_dirty_processing_latency_microseconds: + For garbage_collector_event_processing_latency_microseconds: + For garbage_collector_graph_changes_queue_latency: + For garbage_collector_graph_changes_work_duration: + For garbage_collector_orphan_processing_latency_microseconds: + For namespace_queue_latency: + For namespace_queue_latency_sum: + For namespace_queue_latency_count: + For namespace_retries: + For namespace_work_duration: + For namespace_work_duration_sum: + For namespace_work_duration_count: + For function_duration_seconds: + For errors_total: + For evicted_pods_total: + + [AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/node/init/init.go:32 + Jan 14 04:04:53.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + tear down framework | framework.go:193 + STEP: Destroying namespace "gc-7296" for this suite. 
01/14/23 04:04:53.595 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-node] InitContainer [NodeConformance] + should invoke init containers on a RestartNever pod [Conformance] + test/e2e/common/node/init_container.go:177 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:04:53.608 +Jan 14 04:04:53.608: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename init-container 01/14/23 04:04:53.609 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:04:53.623 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:04:53.625 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/common/node/init_container.go:165 +[It] should invoke init containers on a RestartNever pod [Conformance] + test/e2e/common/node/init_container.go:177 +STEP: creating the pod 01/14/23 04:04:53.627 +Jan 14 04:04:53.627: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/node/init/init.go:32 +Jan 14 04:04:58.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + tear down framework | framework.go:193 +STEP: Destroying namespace "init-container-8687" for this suite. 
01/14/23 04:04:58.288 +------------------------------ +• [4.685 seconds] +[sig-node] InitContainer [NodeConformance] +test/e2e/common/node/framework.go:23 + should invoke init containers on a RestartNever pod [Conformance] + test/e2e/common/node/init_container.go:177 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] InitContainer [NodeConformance] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:04:53.608 + Jan 14 04:04:53.608: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename init-container 01/14/23 04:04:53.609 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:04:53.623 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:04:53.625 + [BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] InitContainer [NodeConformance] + test/e2e/common/node/init_container.go:165 + [It] should invoke init containers on a RestartNever pod [Conformance] + test/e2e/common/node/init_container.go:177 + STEP: creating the pod 01/14/23 04:04:53.627 + Jan 14 04:04:53.627: INFO: PodSpec: initContainers in spec.initContainers + [AfterEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/node/init/init.go:32 + Jan 14 04:04:58.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + tear down framework | framework.go:193 + STEP: Destroying namespace "init-container-8687" for this suite. 
01/14/23 04:04:58.288 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of same group and version but different kinds [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:357 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:04:58.293 +Jan 14 04:04:58.294: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename crd-publish-openapi 01/14/23 04:04:58.294 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:04:58.308 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:04:58.311 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[It] works for multiple CRDs of same group and version but different kinds [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:357 +STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation 01/14/23 04:04:58.313 +Jan 14 04:04:58.313: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +Jan 14 04:05:00.111: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Jan 14 04:05:07.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "crd-publish-openapi-2948" for this suite. 
01/14/23 04:05:07.317 +------------------------------ +• [SLOW TEST] [9.029 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + works for multiple CRDs of same group and version but different kinds [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:357 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:04:58.293 + Jan 14 04:04:58.294: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename crd-publish-openapi 01/14/23 04:04:58.294 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:04:58.308 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:04:58.311 + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [It] works for multiple CRDs of same group and version but different kinds [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:357 + STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation 01/14/23 04:04:58.313 + Jan 14 04:04:58.313: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + Jan 14 04:05:00.111: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Jan 14 04:05:07.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "crd-publish-openapi-2948" for this suite. 
01/14/23 04:05:07.317 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:57 +[BeforeEach] [sig-storage] Projected configMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:05:07.323 +Jan 14 04:05:07.323: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename projected 01/14/23 04:05:07.324 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:05:07.339 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:05:07.341 +[BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:57 +STEP: Creating configMap with name projected-configmap-test-volume-0620320d-357f-47bc-98ab-2e99f585635f 01/14/23 04:05:07.344 +STEP: Creating a pod to test consume configMaps 01/14/23 04:05:07.348 +Jan 14 04:05:07.363: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f35f5a43-faf0-4d06-9a6a-3ae40a62a992" in namespace "projected-381" to be "Succeeded or Failed" +Jan 14 04:05:07.366: INFO: Pod "pod-projected-configmaps-f35f5a43-faf0-4d06-9a6a-3ae40a62a992": Phase="Pending", Reason="", readiness=false. Elapsed: 3.067818ms +Jan 14 04:05:09.371: INFO: Pod "pod-projected-configmaps-f35f5a43-faf0-4d06-9a6a-3ae40a62a992": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008053654s +Jan 14 04:05:11.371: INFO: Pod "pod-projected-configmaps-f35f5a43-faf0-4d06-9a6a-3ae40a62a992": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008038763s +STEP: Saw pod success 01/14/23 04:05:11.371 +Jan 14 04:05:11.371: INFO: Pod "pod-projected-configmaps-f35f5a43-faf0-4d06-9a6a-3ae40a62a992" satisfied condition "Succeeded or Failed" +Jan 14 04:05:11.374: INFO: Trying to get logs from node 10.0.1.106 pod pod-projected-configmaps-f35f5a43-faf0-4d06-9a6a-3ae40a62a992 container agnhost-container: +STEP: delete the pod 01/14/23 04:05:11.386 +Jan 14 04:05:11.402: INFO: Waiting for pod pod-projected-configmaps-f35f5a43-faf0-4d06-9a6a-3ae40a62a992 to disappear +Jan 14 04:05:11.405: INFO: Pod pod-projected-configmaps-f35f5a43-faf0-4d06-9a6a-3ae40a62a992 no longer exists +[AfterEach] [sig-storage] Projected configMap + test/e2e/framework/node/init/init.go:32 +Jan 14 04:05:11.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected configMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected configMap + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-381" for this suite. 
01/14/23 04:05:11.41 +------------------------------ +• [4.096 seconds] +[sig-storage] Projected configMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:57 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected configMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:05:07.323 + Jan 14 04:05:07.323: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename projected 01/14/23 04:05:07.324 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:05:07.339 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:05:07.341 + [BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:57 + STEP: Creating configMap with name projected-configmap-test-volume-0620320d-357f-47bc-98ab-2e99f585635f 01/14/23 04:05:07.344 + STEP: Creating a pod to test consume configMaps 01/14/23 04:05:07.348 + Jan 14 04:05:07.363: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f35f5a43-faf0-4d06-9a6a-3ae40a62a992" in namespace "projected-381" to be "Succeeded or Failed" + Jan 14 04:05:07.366: INFO: Pod "pod-projected-configmaps-f35f5a43-faf0-4d06-9a6a-3ae40a62a992": Phase="Pending", Reason="", readiness=false. Elapsed: 3.067818ms + Jan 14 04:05:09.371: INFO: Pod "pod-projected-configmaps-f35f5a43-faf0-4d06-9a6a-3ae40a62a992": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008053654s + Jan 14 04:05:11.371: INFO: Pod "pod-projected-configmaps-f35f5a43-faf0-4d06-9a6a-3ae40a62a992": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008038763s + STEP: Saw pod success 01/14/23 04:05:11.371 + Jan 14 04:05:11.371: INFO: Pod "pod-projected-configmaps-f35f5a43-faf0-4d06-9a6a-3ae40a62a992" satisfied condition "Succeeded or Failed" + Jan 14 04:05:11.374: INFO: Trying to get logs from node 10.0.1.106 pod pod-projected-configmaps-f35f5a43-faf0-4d06-9a6a-3ae40a62a992 container agnhost-container: + STEP: delete the pod 01/14/23 04:05:11.386 + Jan 14 04:05:11.402: INFO: Waiting for pod pod-projected-configmaps-f35f5a43-faf0-4d06-9a6a-3ae40a62a992 to disappear + Jan 14 04:05:11.405: INFO: Pod pod-projected-configmaps-f35f5a43-faf0-4d06-9a6a-3ae40a62a992 no longer exists + [AfterEach] [sig-storage] Projected configMap + test/e2e/framework/node/init/init.go:32 + Jan 14 04:05:11.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected configMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected configMap + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-381" for this suite. 
01/14/23 04:05:11.41 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-node] Security Context When creating a container with runAsUser + should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:347 +[BeforeEach] [sig-node] Security Context + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:05:11.419 +Jan 14 04:05:11.419: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename security-context-test 01/14/23 04:05:11.42 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:05:11.435 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:05:11.437 +[BeforeEach] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Security Context + test/e2e/common/node/security_context.go:50 +[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:347 +Jan 14 04:05:11.448: INFO: Waiting up to 5m0s for pod "busybox-user-65534-cfec0f0c-82eb-40ec-9a96-bf63dd713a4c" in namespace "security-context-test-5867" to be "Succeeded or Failed" +Jan 14 04:05:11.451: INFO: Pod "busybox-user-65534-cfec0f0c-82eb-40ec-9a96-bf63dd713a4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.888435ms +Jan 14 04:05:13.456: INFO: Pod "busybox-user-65534-cfec0f0c-82eb-40ec-9a96-bf63dd713a4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00802764s +Jan 14 04:05:15.456: INFO: Pod "busybox-user-65534-cfec0f0c-82eb-40ec-9a96-bf63dd713a4c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007961614s +Jan 14 04:05:15.456: INFO: Pod "busybox-user-65534-cfec0f0c-82eb-40ec-9a96-bf63dd713a4c" satisfied condition "Succeeded or Failed" +[AfterEach] [sig-node] Security Context + test/e2e/framework/node/init/init.go:32 +Jan 14 04:05:15.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Security Context + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Security Context + tear down framework | framework.go:193 +STEP: Destroying namespace "security-context-test-5867" for this suite. 
01/14/23 04:05:15.462 +------------------------------ +• [4.051 seconds] +[sig-node] Security Context +test/e2e/common/node/framework.go:23 + When creating a container with runAsUser + test/e2e/common/node/security_context.go:309 + should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:347 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Security Context + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:05:11.419 + Jan 14 04:05:11.419: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename security-context-test 01/14/23 04:05:11.42 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:05:11.435 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:05:11.437 + [BeforeEach] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Security Context + test/e2e/common/node/security_context.go:50 + [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:347 + Jan 14 04:05:11.448: INFO: Waiting up to 5m0s for pod "busybox-user-65534-cfec0f0c-82eb-40ec-9a96-bf63dd713a4c" in namespace "security-context-test-5867" to be "Succeeded or Failed" + Jan 14 04:05:11.451: INFO: Pod "busybox-user-65534-cfec0f0c-82eb-40ec-9a96-bf63dd713a4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.888435ms + Jan 14 04:05:13.456: INFO: Pod "busybox-user-65534-cfec0f0c-82eb-40ec-9a96-bf63dd713a4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00802764s + Jan 14 04:05:15.456: INFO: Pod "busybox-user-65534-cfec0f0c-82eb-40ec-9a96-bf63dd713a4c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007961614s + Jan 14 04:05:15.456: INFO: Pod "busybox-user-65534-cfec0f0c-82eb-40ec-9a96-bf63dd713a4c" satisfied condition "Succeeded or Failed" + [AfterEach] [sig-node] Security Context + test/e2e/framework/node/init/init.go:32 + Jan 14 04:05:15.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Security Context + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Security Context + tear down framework | framework.go:193 + STEP: Destroying namespace "security-context-test-5867" for this suite. 
01/14/23 04:05:15.462 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-instrumentation] Events + should manage the lifecycle of an event [Conformance] + test/e2e/instrumentation/core_events.go:57 +[BeforeEach] [sig-instrumentation] Events + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:05:15.47 +Jan 14 04:05:15.470: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename events 01/14/23 04:05:15.471 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:05:15.486 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:05:15.489 +[BeforeEach] [sig-instrumentation] Events + test/e2e/framework/metrics/init/init.go:31 +[It] should manage the lifecycle of an event [Conformance] + test/e2e/instrumentation/core_events.go:57 +STEP: creating a test event 01/14/23 04:05:15.491 +STEP: listing all events in all namespaces 01/14/23 04:05:15.495 +STEP: patching the test event 01/14/23 04:05:15.5 +STEP: fetching the test event 01/14/23 04:05:15.507 +STEP: updating the test event 01/14/23 04:05:15.511 +STEP: getting the test event 01/14/23 04:05:15.519 +STEP: deleting the test event 01/14/23 04:05:15.522 +STEP: listing all events in all namespaces 01/14/23 04:05:15.528 +[AfterEach] [sig-instrumentation] Events + test/e2e/framework/node/init/init.go:32 +Jan 14 04:05:15.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-instrumentation] Events + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-instrumentation] Events + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-instrumentation] Events + tear down framework | framework.go:193 +STEP: Destroying namespace "events-2684" for this suite. 
01/14/23 04:05:15.537 +------------------------------ +• [0.085 seconds] +[sig-instrumentation] Events +test/e2e/instrumentation/common/framework.go:23 + should manage the lifecycle of an event [Conformance] + test/e2e/instrumentation/core_events.go:57 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-instrumentation] Events + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:05:15.47 + Jan 14 04:05:15.470: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename events 01/14/23 04:05:15.471 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:05:15.486 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:05:15.489 + [BeforeEach] [sig-instrumentation] Events + test/e2e/framework/metrics/init/init.go:31 + [It] should manage the lifecycle of an event [Conformance] + test/e2e/instrumentation/core_events.go:57 + STEP: creating a test event 01/14/23 04:05:15.491 + STEP: listing all events in all namespaces 01/14/23 04:05:15.495 + STEP: patching the test event 01/14/23 04:05:15.5 + STEP: fetching the test event 01/14/23 04:05:15.507 + STEP: updating the test event 01/14/23 04:05:15.511 + STEP: getting the test event 01/14/23 04:05:15.519 + STEP: deleting the test event 01/14/23 04:05:15.522 + STEP: listing all events in all namespaces 01/14/23 04:05:15.528 + [AfterEach] [sig-instrumentation] Events + test/e2e/framework/node/init/init.go:32 + Jan 14 04:05:15.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-instrumentation] Events + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-instrumentation] Events + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-instrumentation] Events + tear down framework | framework.go:193 + STEP: Destroying namespace "events-2684" for this suite. 01/14/23 04:05:15.537 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-storage] Projected downwardAPI + should update annotations on modification [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:162 +[BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:05:15.556 +Jan 14 04:05:15.556: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename projected 01/14/23 04:05:15.557 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:05:15.572 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:05:15.574 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 +[It] should update annotations on modification [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:162 +STEP: Creating the pod 01/14/23 04:05:15.576 +Jan 14 04:05:15.585: INFO: Waiting up to 5m0s for pod "annotationupdateadaaa4b3-04f7-46bd-b61c-d271750cbb78" in namespace "projected-6480" to be "running and ready" +Jan 14 04:05:15.588: INFO: Pod "annotationupdateadaaa4b3-04f7-46bd-b61c-d271750cbb78": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.719476ms +Jan 14 04:05:15.588: INFO: The phase of Pod annotationupdateadaaa4b3-04f7-46bd-b61c-d271750cbb78 is Pending, waiting for it to be Running (with Ready = true) +Jan 14 04:05:17.593: INFO: Pod "annotationupdateadaaa4b3-04f7-46bd-b61c-d271750cbb78": Phase="Running", Reason="", readiness=true. Elapsed: 2.007377266s +Jan 14 04:05:17.593: INFO: The phase of Pod annotationupdateadaaa4b3-04f7-46bd-b61c-d271750cbb78 is Running (Ready = true) +Jan 14 04:05:17.593: INFO: Pod "annotationupdateadaaa4b3-04f7-46bd-b61c-d271750cbb78" satisfied condition "running and ready" +Jan 14 04:05:18.127: INFO: Successfully updated pod "annotationupdateadaaa4b3-04f7-46bd-b61c-d271750cbb78" +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/node/init/init.go:32 +Jan 14 04:05:20.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-6480" for this suite. 01/14/23 04:05:20.145 +------------------------------ +• [4.596 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should update annotations on modification [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:162 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:05:15.556 + Jan 14 04:05:15.556: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename projected 01/14/23 04:05:15.557 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:05:15.572 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:05:15.574 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 + [It] should update annotations on modification [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:162 + STEP: Creating the pod 01/14/23 04:05:15.576 + Jan 14 04:05:15.585: INFO: Waiting up to 5m0s for pod "annotationupdateadaaa4b3-04f7-46bd-b61c-d271750cbb78" in namespace "projected-6480" to be "running and ready" + Jan 14 04:05:15.588: INFO: Pod "annotationupdateadaaa4b3-04f7-46bd-b61c-d271750cbb78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.719476ms + Jan 14 04:05:15.588: INFO: The phase of Pod annotationupdateadaaa4b3-04f7-46bd-b61c-d271750cbb78 is Pending, waiting for it to be Running (with Ready = true) + Jan 14 04:05:17.593: INFO: Pod "annotationupdateadaaa4b3-04f7-46bd-b61c-d271750cbb78": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.007377266s + Jan 14 04:05:17.593: INFO: The phase of Pod annotationupdateadaaa4b3-04f7-46bd-b61c-d271750cbb78 is Running (Ready = true) + Jan 14 04:05:17.593: INFO: Pod "annotationupdateadaaa4b3-04f7-46bd-b61c-d271750cbb78" satisfied condition "running and ready" + Jan 14 04:05:18.127: INFO: Successfully updated pod "annotationupdateadaaa4b3-04f7-46bd-b61c-d271750cbb78" + [AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/node/init/init.go:32 + Jan 14 04:05:20.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-6480" for this suite. 01/14/23 04:05:20.145 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSS +------------------------------ +[sig-network] Services + should be able to change the type from ExternalName to NodePort [Conformance] + test/e2e/network/service.go:1477 +[BeforeEach] [sig-network] Services + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:05:20.152 +Jan 14 04:05:20.152: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename services 01/14/23 04:05:20.153 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:05:20.171 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:05:20.173 +[BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should be able to change the type from ExternalName to NodePort [Conformance] + test/e2e/network/service.go:1477 +STEP: creating a service externalname-service with the type=ExternalName in namespace services-9672 01/14/23 04:05:20.176 +STEP: changing the ExternalName service to type=NodePort 01/14/23 04:05:20.181 +STEP: creating replication controller externalname-service in namespace services-9672 01/14/23 04:05:20.199 +I0114 04:05:20.205692 25 runners.go:193] Created replication controller with name: externalname-service, namespace: services-9672, replica count: 2 +I0114 04:05:23.258190 25 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Jan 14 04:05:23.258: INFO: Creating new exec pod +Jan 14 04:05:23.269: INFO: Waiting up to 5m0s for pod "execpodfnb7f" in namespace "services-9672" to be "running" +Jan 14 04:05:23.272: INFO: Pod "execpodfnb7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.880201ms +Jan 14 04:05:25.276: INFO: Pod "execpodfnb7f": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.00719843s +Jan 14 04:05:25.276: INFO: Pod "execpodfnb7f" satisfied condition "running" +Jan 14 04:05:26.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-9672 exec execpodfnb7f -- /bin/sh -x -c nc -v -z -w 2 externalname-service 80' +Jan 14 04:05:26.401: INFO: stderr: "+ nc -v -z -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Jan 14 04:05:26.401: INFO: stdout: "" +Jan 14 04:05:26.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-9672 exec execpodfnb7f -- /bin/sh -x -c nc -v -z -w 2 10.55.253.107 80' +Jan 14 04:05:26.515: INFO: stderr: "+ nc -v -z -w 2 10.55.253.107 80\nConnection to 10.55.253.107 80 port [tcp/http] succeeded!\n" +Jan 14 04:05:26.515: INFO: stdout: "" +Jan 14 04:05:26.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-9672 exec execpodfnb7f -- /bin/sh -x -c nc -v -z -w 2 10.0.1.106 32372' +Jan 14 04:05:26.632: INFO: stderr: "+ nc -v -z -w 2 10.0.1.106 32372\nConnection to 10.0.1.106 32372 port [tcp/*] succeeded!\n" +Jan 14 04:05:26.632: INFO: stdout: "" +Jan 14 04:05:26.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-9672 exec execpodfnb7f -- /bin/sh -x -c nc -v -z -w 2 10.0.1.99 32372' +Jan 14 04:05:26.738: INFO: stderr: "+ nc -v -z -w 2 10.0.1.99 32372\nConnection to 10.0.1.99 32372 port [tcp/*] succeeded!\n" +Jan 14 04:05:26.738: INFO: stdout: "" +Jan 14 04:05:26.738: INFO: Cleaning up the ExternalName to NodePort test service +[AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 +Jan 14 04:05:26.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 +STEP: Destroying namespace "services-9672" for this suite. 
01/14/23 04:05:26.762 +------------------------------ +• [SLOW TEST] [6.617 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should be able to change the type from ExternalName to NodePort [Conformance] + test/e2e/network/service.go:1477 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:05:20.152 + Jan 14 04:05:20.152: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename services 01/14/23 04:05:20.153 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:05:20.171 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:05:20.173 + [BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should be able to change the type from ExternalName to NodePort [Conformance] + test/e2e/network/service.go:1477 + STEP: creating a service externalname-service with the type=ExternalName in namespace services-9672 01/14/23 04:05:20.176 + STEP: changing the ExternalName service to type=NodePort 01/14/23 04:05:20.181 + STEP: creating replication controller externalname-service in namespace services-9672 01/14/23 04:05:20.199 + I0114 04:05:20.205692 25 runners.go:193] Created replication controller with name: externalname-service, namespace: services-9672, replica count: 2 + I0114 04:05:23.258190 25 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + Jan 14 04:05:23.258: INFO: Creating new exec pod + Jan 14 04:05:23.269: INFO: Waiting up to 5m0s for pod "execpodfnb7f" in namespace "services-9672" to be "running" + Jan 14 04:05:23.272: INFO: Pod "execpodfnb7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.880201ms + Jan 14 04:05:25.276: INFO: Pod "execpodfnb7f": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.00719843s + Jan 14 04:05:25.276: INFO: Pod "execpodfnb7f" satisfied condition "running" + Jan 14 04:05:26.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-9672 exec execpodfnb7f -- /bin/sh -x -c nc -v -z -w 2 externalname-service 80' + Jan 14 04:05:26.401: INFO: stderr: "+ nc -v -z -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" + Jan 14 04:05:26.401: INFO: stdout: "" + Jan 14 04:05:26.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-9672 exec execpodfnb7f -- /bin/sh -x -c nc -v -z -w 2 10.55.253.107 80' + Jan 14 04:05:26.515: INFO: stderr: "+ nc -v -z -w 2 10.55.253.107 80\nConnection to 10.55.253.107 80 port [tcp/http] succeeded!\n" + Jan 14 04:05:26.515: INFO: stdout: "" + Jan 14 04:05:26.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-9672 exec execpodfnb7f -- /bin/sh -x -c nc -v -z -w 2 10.0.1.106 32372' + Jan 14 04:05:26.632: INFO: stderr: "+ nc -v -z -w 2 10.0.1.106 32372\nConnection to 10.0.1.106 32372 port [tcp/*] succeeded!\n" + Jan 14 04:05:26.632: INFO: stdout: "" + Jan 14 04:05:26.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-9672 exec execpodfnb7f -- /bin/sh -x -c nc -v -z -w 2 10.0.1.99 32372' + Jan 14 04:05:26.738: INFO: stderr: "+ nc -v -z -w 2 10.0.1.99 32372\nConnection to 10.0.1.99 32372 port [tcp/*] succeeded!\n" + Jan 14 04:05:26.738: INFO: stdout: "" + Jan 14 04:05:26.738: INFO: Cleaning up the ExternalName to NodePort test service + [AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 + Jan 14 04:05:26.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 + STEP: Destroying namespace "services-9672" for this suite. 
01/14/23 04:05:26.762 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide /etc/hosts entries for the cluster [Conformance] + test/e2e/network/dns.go:117 +[BeforeEach] [sig-network] DNS + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:05:26.769 +Jan 14 04:05:26.769: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename dns 01/14/23 04:05:26.77 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:05:26.787 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:05:26.79 +[BeforeEach] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:31 +[It] should provide /etc/hosts entries for the cluster [Conformance] + test/e2e/network/dns.go:117 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1843.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-1843.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;sleep 1; done + 01/14/23 04:05:26.792 +STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1843.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-1843.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;sleep 1; done + 01/14/23 04:05:26.793 +STEP: creating a pod to probe /etc/hosts 01/14/23 04:05:26.793 +STEP: submitting the pod to kubernetes 01/14/23 04:05:26.793 +Jan 14 04:05:26.803: INFO: Waiting up to 15m0s for pod "dns-test-4df34c25-d350-4d8d-8136-8e9b72c5db56" in namespace "dns-1843" to be "running" +Jan 14 04:05:26.812: INFO: Pod "dns-test-4df34c25-d350-4d8d-8136-8e9b72c5db56": Phase="Pending", Reason="", readiness=false. Elapsed: 8.915485ms +Jan 14 04:05:28.816: INFO: Pod "dns-test-4df34c25-d350-4d8d-8136-8e9b72c5db56": Phase="Running", Reason="", readiness=true. Elapsed: 2.013558512s +Jan 14 04:05:28.816: INFO: Pod "dns-test-4df34c25-d350-4d8d-8136-8e9b72c5db56" satisfied condition "running" +STEP: retrieving the pod 01/14/23 04:05:28.816 +STEP: looking for the results for each expected name from probers 01/14/23 04:05:28.819 +Jan 14 04:05:28.832: INFO: DNS probes using dns-1843/dns-test-4df34c25-d350-4d8d-8136-8e9b72c5db56 succeeded + +STEP: deleting the pod 01/14/23 04:05:28.832 +[AfterEach] [sig-network] DNS + test/e2e/framework/node/init/init.go:32 +Jan 14 04:05:28.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] DNS + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] DNS + tear down framework | framework.go:193 +STEP: Destroying namespace "dns-1843" for this suite. 
01/14/23 04:05:28.853 +------------------------------ +• [2.089 seconds] +[sig-network] DNS +test/e2e/network/common/framework.go:23 + should provide /etc/hosts entries for the cluster [Conformance] + test/e2e/network/dns.go:117 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] DNS + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:05:26.769 + Jan 14 04:05:26.769: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename dns 01/14/23 04:05:26.77 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:05:26.787 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:05:26.79 + [BeforeEach] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:31 + [It] should provide /etc/hosts entries for the cluster [Conformance] + test/e2e/network/dns.go:117 + STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1843.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-1843.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;sleep 1; done + 01/14/23 04:05:26.792 + STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1843.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-1843.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;sleep 1; done + 01/14/23 04:05:26.793 + STEP: creating a pod to probe /etc/hosts 01/14/23 04:05:26.793 + STEP: submitting the pod to kubernetes 01/14/23 04:05:26.793 + Jan 14 04:05:26.803: INFO: Waiting up to 15m0s for pod "dns-test-4df34c25-d350-4d8d-8136-8e9b72c5db56" in namespace "dns-1843" to be "running" + Jan 14 04:05:26.812: INFO: Pod "dns-test-4df34c25-d350-4d8d-8136-8e9b72c5db56": Phase="Pending", Reason="", readiness=false. Elapsed: 8.915485ms + Jan 14 04:05:28.816: INFO: Pod "dns-test-4df34c25-d350-4d8d-8136-8e9b72c5db56": Phase="Running", Reason="", readiness=true. Elapsed: 2.013558512s + Jan 14 04:05:28.816: INFO: Pod "dns-test-4df34c25-d350-4d8d-8136-8e9b72c5db56" satisfied condition "running" + STEP: retrieving the pod 01/14/23 04:05:28.816 + STEP: looking for the results for each expected name from probers 01/14/23 04:05:28.819 + Jan 14 04:05:28.832: INFO: DNS probes using dns-1843/dns-test-4df34c25-d350-4d8d-8136-8e9b72c5db56 succeeded + + STEP: deleting the pod 01/14/23 04:05:28.832 + [AfterEach] [sig-network] DNS + test/e2e/framework/node/init/init.go:32 + Jan 14 04:05:28.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] DNS + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] DNS + tear down framework | framework.go:193 + STEP: Destroying namespace "dns-1843" for this suite. 
01/14/23 04:05:28.853 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch + watch on custom resource definition objects [Conformance] + test/e2e/apimachinery/crd_watch.go:51 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:05:28.86 +Jan 14 04:05:28.860: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename crd-watch 01/14/23 04:05:28.861 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:05:28.876 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:05:28.878 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[It] watch on custom resource definition objects [Conformance] + test/e2e/apimachinery/crd_watch.go:51 +Jan 14 04:05:28.880: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Creating first CR 01/14/23 04:05:31.429 +Jan 14 04:05:31.434: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-14T04:05:31Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-14T04:05:31Z]] name:name1 resourceVersion:428291 uid:5286d1e1-f894-49cb-b77d-1b7a20fc3ec0] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Creating second CR 01/14/23 04:05:41.435 +Jan 14 04:05:41.441: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-14T04:05:41Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-14T04:05:41Z]] name:name2 resourceVersion:428390 uid:0ec25108-a77c-4c1c-95bc-c8fd747701ba] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Modifying first CR 01/14/23 04:05:51.443 +Jan 14 04:05:51.449: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-14T04:05:31Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-14T04:05:51Z]] name:name1 resourceVersion:428439 uid:5286d1e1-f894-49cb-b77d-1b7a20fc3ec0] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Modifying second CR 01/14/23 04:06:01.449 +Jan 14 04:06:01.455: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-14T04:05:41Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update 
time:2023-01-14T04:06:01Z]] name:name2 resourceVersion:428487 uid:0ec25108-a77c-4c1c-95bc-c8fd747701ba] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Deleting first CR 01/14/23 04:06:11.456 +Jan 14 04:06:11.464: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-14T04:05:31Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-14T04:05:51Z]] name:name1 resourceVersion:428536 uid:5286d1e1-f894-49cb-b77d-1b7a20fc3ec0] num:map[num1:9223372036854775807 num2:1000000]]} +STEP: Deleting second CR 01/14/23 04:06:21.465 +Jan 14 04:06:21.472: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-14T04:05:41Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-14T04:06:01Z]] name:name2 resourceVersion:428584 uid:0ec25108-a77c-4c1c-95bc-c8fd747701ba] num:map[num1:9223372036854775807 num2:1000000]]} +[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Jan 14 04:06:31.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "crd-watch-6646" for this suite. 
01/14/23 04:06:31.991 +------------------------------ +• [SLOW TEST] [63.149 seconds] +[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + CustomResourceDefinition Watch + test/e2e/apimachinery/crd_watch.go:44 + watch on custom resource definition objects [Conformance] + test/e2e/apimachinery/crd_watch.go:51 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:05:28.86 + Jan 14 04:05:28.860: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename crd-watch 01/14/23 04:05:28.861 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:05:28.876 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:05:28.878 + [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [It] watch on custom resource definition objects [Conformance] + test/e2e/apimachinery/crd_watch.go:51 + Jan 14 04:05:28.880: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Creating first CR 01/14/23 04:05:31.429 + Jan 14 04:05:31.434: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-14T04:05:31Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-14T04:05:31Z]] name:name1 resourceVersion:428291 uid:5286d1e1-f894-49cb-b77d-1b7a20fc3ec0] num:map[num1:9223372036854775807 num2:1000000]]} + STEP: Creating second CR 01/14/23 04:05:41.435 + Jan 14 04:05:41.441: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-14T04:05:41Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-14T04:05:41Z]] name:name2 resourceVersion:428390 uid:0ec25108-a77c-4c1c-95bc-c8fd747701ba] num:map[num1:9223372036854775807 num2:1000000]]} + STEP: Modifying first CR 01/14/23 04:05:51.443 + Jan 14 04:05:51.449: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-14T04:05:31Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-14T04:05:51Z]] name:name1 resourceVersion:428439 uid:5286d1e1-f894-49cb-b77d-1b7a20fc3ec0] num:map[num1:9223372036854775807 num2:1000000]]} + STEP: Modifying second CR 01/14/23 04:06:01.449 + Jan 14 04:06:01.455: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-14T04:05:41Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] 
f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-14T04:06:01Z]] name:name2 resourceVersion:428487 uid:0ec25108-a77c-4c1c-95bc-c8fd747701ba] num:map[num1:9223372036854775807 num2:1000000]]} + STEP: Deleting first CR 01/14/23 04:06:11.456 + Jan 14 04:06:11.464: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-14T04:05:31Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-14T04:05:51Z]] name:name1 resourceVersion:428536 uid:5286d1e1-f894-49cb-b77d-1b7a20fc3ec0] num:map[num1:9223372036854775807 num2:1000000]]} + STEP: Deleting second CR 01/14/23 04:06:21.465 + Jan 14 04:06:21.472: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-14T04:05:41Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-14T04:06:01Z]] name:name2 resourceVersion:428584 uid:0ec25108-a77c-4c1c-95bc-c8fd747701ba] num:map[num1:9223372036854775807 num2:1000000]]} + [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Jan 14 04:06:31.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "crd-watch-6646" for this suite. 
01/14/23 04:06:31.991 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-instrumentation] Events API + should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + test/e2e/instrumentation/events.go:98 +[BeforeEach] [sig-instrumentation] Events API + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:06:32.009 +Jan 14 04:06:32.009: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename events 01/14/23 04:06:32.01 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:06:32.026 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:06:32.029 +[BeforeEach] [sig-instrumentation] Events API + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-instrumentation] Events API + test/e2e/instrumentation/events.go:84 +[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + test/e2e/instrumentation/events.go:98 +STEP: creating a test event 01/14/23 04:06:32.031 +STEP: listing events in all namespaces 01/14/23 04:06:32.038 +STEP: listing events in test namespace 01/14/23 04:06:32.043 +STEP: listing events with field selection filtering on source 01/14/23 04:06:32.045 +STEP: listing events with field selection filtering on reportingController 01/14/23 04:06:32.048 +STEP: getting the test event 01/14/23 04:06:32.05 +STEP: patching the test event 01/14/23 04:06:32.052 +STEP: getting the test event 01/14/23 04:06:32.058 +STEP: updating the test event 01/14/23 04:06:32.06 +STEP: getting the test event 01/14/23 04:06:32.067 +STEP: deleting the test event 01/14/23 04:06:32.069 +STEP: listing events in all namespaces 01/14/23 04:06:32.074 +STEP: listing events in test namespace 01/14/23 04:06:32.079 +[AfterEach] [sig-instrumentation] Events API + test/e2e/framework/node/init/init.go:32 +Jan 14 04:06:32.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-instrumentation] Events API + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-instrumentation] Events API + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-instrumentation] Events API + tear down framework | framework.go:193 +STEP: Destroying namespace "events-2773" for this suite. 
01/14/23 04:06:32.085 +------------------------------ +• [0.082 seconds] +[sig-instrumentation] Events API +test/e2e/instrumentation/common/framework.go:23 + should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + test/e2e/instrumentation/events.go:98 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-instrumentation] Events API + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:06:32.009 + Jan 14 04:06:32.009: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename events 01/14/23 04:06:32.01 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:06:32.026 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:06:32.029 + [BeforeEach] [sig-instrumentation] Events API + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-instrumentation] Events API + test/e2e/instrumentation/events.go:84 + [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] + test/e2e/instrumentation/events.go:98 + STEP: creating a test event 01/14/23 04:06:32.031 + STEP: listing events in all namespaces 01/14/23 04:06:32.038 + STEP: listing events in test namespace 01/14/23 04:06:32.043 + STEP: listing events with field selection filtering on source 01/14/23 04:06:32.045 + STEP: listing events with field selection filtering on reportingController 01/14/23 04:06:32.048 + STEP: getting the test event 01/14/23 04:06:32.05 + STEP: patching the test event 01/14/23 04:06:32.052 + STEP: getting the test event 01/14/23 04:06:32.058 + STEP: updating the test event 01/14/23 04:06:32.06 + STEP: getting the test event 01/14/23 04:06:32.067 + STEP: deleting the test event 01/14/23 04:06:32.069 + STEP: listing events in all namespaces 01/14/23 04:06:32.074 + STEP: listing events in test namespace 01/14/23 04:06:32.079 + [AfterEach] [sig-instrumentation] Events API + test/e2e/framework/node/init/init.go:32 + Jan 14 04:06:32.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-instrumentation] Events API + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-instrumentation] Events API + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-instrumentation] Events API + tear down framework | framework.go:193 + STEP: Destroying namespace "events-2773" for this suite. 
01/14/23 04:06:32.085 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:57 +[BeforeEach] [sig-storage] ConfigMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:06:32.092 +Jan 14 04:06:32.092: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename configmap 01/14/23 04:06:32.093 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:06:32.106 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:06:32.108 +[BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:57 +STEP: Creating configMap with name configmap-test-volume-f50ed12b-8603-42af-b3b4-41c8e261e607 01/14/23 04:06:32.11 +STEP: Creating a pod to test consume configMaps 01/14/23 04:06:32.114 +Jan 14 04:06:32.123: INFO: Waiting up to 5m0s for pod "pod-configmaps-45951651-6ec4-4605-8e8a-853ff542c37b" in namespace "configmap-8399" to be "Succeeded or Failed" +Jan 14 04:06:32.126: INFO: Pod "pod-configmaps-45951651-6ec4-4605-8e8a-853ff542c37b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.836407ms +Jan 14 04:06:34.131: INFO: Pod "pod-configmaps-45951651-6ec4-4605-8e8a-853ff542c37b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007842701s +Jan 14 04:06:36.132: INFO: Pod "pod-configmaps-45951651-6ec4-4605-8e8a-853ff542c37b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008351834s +STEP: Saw pod success 01/14/23 04:06:36.132 +Jan 14 04:06:36.132: INFO: Pod "pod-configmaps-45951651-6ec4-4605-8e8a-853ff542c37b" satisfied condition "Succeeded or Failed" +Jan 14 04:06:36.135: INFO: Trying to get logs from node 10.0.1.106 pod pod-configmaps-45951651-6ec4-4605-8e8a-853ff542c37b container agnhost-container: +STEP: delete the pod 01/14/23 04:06:36.141 +Jan 14 04:06:36.154: INFO: Waiting for pod pod-configmaps-45951651-6ec4-4605-8e8a-853ff542c37b to disappear +Jan 14 04:06:36.156: INFO: Pod pod-configmaps-45951651-6ec4-4605-8e8a-853ff542c37b no longer exists +[AfterEach] [sig-storage] ConfigMap + test/e2e/framework/node/init/init.go:32 +Jan 14 04:06:36.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] ConfigMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] ConfigMap + tear down framework | framework.go:193 +STEP: Destroying namespace "configmap-8399" for this suite. 
01/14/23 04:06:36.161 +------------------------------ +• [4.077 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:57 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] ConfigMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:06:32.092 + Jan 14 04:06:32.092: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename configmap 01/14/23 04:06:32.093 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:06:32.106 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:06:32.108 + [BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:57 + STEP: Creating configMap with name configmap-test-volume-f50ed12b-8603-42af-b3b4-41c8e261e607 01/14/23 04:06:32.11 + STEP: Creating a pod to test consume configMaps 01/14/23 04:06:32.114 + Jan 14 04:06:32.123: INFO: Waiting up to 5m0s for pod "pod-configmaps-45951651-6ec4-4605-8e8a-853ff542c37b" in namespace "configmap-8399" to be "Succeeded or Failed" + Jan 14 04:06:32.126: INFO: Pod "pod-configmaps-45951651-6ec4-4605-8e8a-853ff542c37b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.836407ms + Jan 14 04:06:34.131: INFO: Pod "pod-configmaps-45951651-6ec4-4605-8e8a-853ff542c37b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007842701s + Jan 14 04:06:36.132: INFO: Pod "pod-configmaps-45951651-6ec4-4605-8e8a-853ff542c37b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008351834s + STEP: Saw pod success 01/14/23 04:06:36.132 + Jan 14 04:06:36.132: INFO: Pod "pod-configmaps-45951651-6ec4-4605-8e8a-853ff542c37b" satisfied condition "Succeeded or Failed" + Jan 14 04:06:36.135: INFO: Trying to get logs from node 10.0.1.106 pod pod-configmaps-45951651-6ec4-4605-8e8a-853ff542c37b container agnhost-container: + STEP: delete the pod 01/14/23 04:06:36.141 + Jan 14 04:06:36.154: INFO: Waiting for pod pod-configmaps-45951651-6ec4-4605-8e8a-853ff542c37b to disappear + Jan 14 04:06:36.156: INFO: Pod pod-configmaps-45951651-6ec4-4605-8e8a-853ff542c37b no longer exists + [AfterEach] [sig-storage] ConfigMap + test/e2e/framework/node/init/init.go:32 + Jan 14 04:06:36.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] ConfigMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] ConfigMap + tear down framework | framework.go:193 + STEP: Destroying namespace "configmap-8399" for this suite. 
01/14/23 04:06:36.161 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-network] EndpointSlice + should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] + test/e2e/network/endpointslice.go:205 +[BeforeEach] [sig-network] EndpointSlice + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:06:36.17 +Jan 14 04:06:36.170: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename endpointslice 01/14/23 04:06:36.171 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:06:36.187 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:06:36.189 +[BeforeEach] [sig-network] EndpointSlice + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] EndpointSlice + test/e2e/network/endpointslice.go:52 +[It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] + test/e2e/network/endpointslice.go:205 +STEP: referencing a single matching pod 01/14/23 04:06:41.251 +STEP: referencing matching pods with named port 01/14/23 04:06:46.258 +STEP: creating empty Endpoints and EndpointSlices for no matching Pods 01/14/23 04:06:51.266 +STEP: recreating EndpointSlices after they've been deleted 01/14/23 04:06:56.274 +Jan 14 04:06:56.289: INFO: EndpointSlice for Service endpointslice-9413/example-named-port not found +[AfterEach] [sig-network] EndpointSlice + test/e2e/framework/node/init/init.go:32 +Jan 14 04:07:06.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] EndpointSlice + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] EndpointSlice + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] EndpointSlice + tear down framework | framework.go:193 +STEP: Destroying namespace "endpointslice-9413" for this suite. 
01/14/23 04:07:06.302 +------------------------------ +• [SLOW TEST] [30.141 seconds] +[sig-network] EndpointSlice +test/e2e/network/common/framework.go:23 + should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] + test/e2e/network/endpointslice.go:205 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] EndpointSlice + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:06:36.17 + Jan 14 04:06:36.170: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename endpointslice 01/14/23 04:06:36.171 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:06:36.187 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:06:36.189 + [BeforeEach] [sig-network] EndpointSlice + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] EndpointSlice + test/e2e/network/endpointslice.go:52 + [It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] + test/e2e/network/endpointslice.go:205 + STEP: referencing a single matching pod 01/14/23 04:06:41.251 + STEP: referencing matching pods with named port 01/14/23 04:06:46.258 + STEP: creating empty Endpoints and EndpointSlices for no matching Pods 01/14/23 04:06:51.266 + STEP: recreating EndpointSlices after they've been deleted 01/14/23 04:06:56.274 + Jan 14 04:06:56.289: INFO: EndpointSlice for Service endpointslice-9413/example-named-port not found + [AfterEach] [sig-network] EndpointSlice + test/e2e/framework/node/init/init.go:32 + Jan 14 04:07:06.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] EndpointSlice + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] EndpointSlice + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] EndpointSlice + tear down framework | framework.go:193 + STEP: Destroying namespace "endpointslice-9413" for this suite. 
01/14/23 04:07:06.302 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + test/e2e/apimachinery/garbage_collector.go:735 +[BeforeEach] [sig-api-machinery] Garbage collector + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:07:06.311 +Jan 14 04:07:06.311: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename gc 01/14/23 04:07:06.312 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:07:06.328 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:07:06.33 +[BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:31 +[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + test/e2e/apimachinery/garbage_collector.go:735 +STEP: create the rc1 01/14/23 04:07:06.336 +STEP: create the rc2 01/14/23 04:07:06.344 +STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well 01/14/23 04:07:11.353 +STEP: delete the rc simpletest-rc-to-be-deleted 01/14/23 04:07:11.697 +STEP: wait for the rc to be deleted 01/14/23 04:07:11.703 +STEP: Gathering metrics 01/14/23 04:07:16.715 +Jan 14 04:07:16.742: INFO: Waiting up to 5m0s for pod "kube-controller-manager-10.0.1.231" in namespace "kube-system" to be "running and ready" +Jan 14 04:07:16.746: INFO: Pod "kube-controller-manager-10.0.1.231": Phase="Running", Reason="", readiness=true. 
Elapsed: 3.348003ms +Jan 14 04:07:16.746: INFO: The phase of Pod kube-controller-manager-10.0.1.231 is Running (Ready = true) +Jan 14 04:07:16.746: INFO: Pod "kube-controller-manager-10.0.1.231" satisfied condition "running and ready" +Jan 14 04:07:16.804: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +Jan 14 04:07:16.804: INFO: Deleting pod "simpletest-rc-to-be-deleted-2fttv" in namespace "gc-1765" +Jan 14 04:07:16.813: INFO: Deleting pod "simpletest-rc-to-be-deleted-49g89" in namespace "gc-1765" +Jan 14 04:07:16.823: INFO: Deleting pod "simpletest-rc-to-be-deleted-4l7dr" in namespace "gc-1765" +Jan 14 04:07:16.833: INFO: Deleting pod "simpletest-rc-to-be-deleted-4r656" in namespace "gc-1765" +Jan 14 04:07:16.842: INFO: Deleting pod "simpletest-rc-to-be-deleted-4tzg5" in namespace "gc-1765" +Jan 14 04:07:16.851: INFO: Deleting pod "simpletest-rc-to-be-deleted-4v2jj" in namespace "gc-1765" +Jan 14 04:07:16.862: INFO: Deleting pod "simpletest-rc-to-be-deleted-55rjc" in namespace "gc-1765" +Jan 14 04:07:16.874: INFO: Deleting pod "simpletest-rc-to-be-deleted-5h4dd" in namespace "gc-1765" +Jan 14 04:07:16.885: INFO: Deleting pod "simpletest-rc-to-be-deleted-5lh25" in namespace "gc-1765" +Jan 14 04:07:16.897: INFO: Deleting pod "simpletest-rc-to-be-deleted-5rctm" in namespace "gc-1765" +Jan 14 04:07:16.906: INFO: Deleting pod "simpletest-rc-to-be-deleted-6sx46" in namespace "gc-1765" +Jan 14 04:07:16.918: INFO: Deleting pod "simpletest-rc-to-be-deleted-7ljb9" in namespace "gc-1765" +Jan 14 04:07:16.930: INFO: Deleting pod "simpletest-rc-to-be-deleted-7qthw" in namespace "gc-1765" +Jan 14 04:07:16.939: INFO: Deleting pod "simpletest-rc-to-be-deleted-82nq6" in namespace "gc-1765" +Jan 14 04:07:16.948: INFO: Deleting pod "simpletest-rc-to-be-deleted-8nmhs" in namespace "gc-1765" +Jan 14 04:07:16.959: INFO: Deleting pod "simpletest-rc-to-be-deleted-8pc5n" in namespace "gc-1765" +Jan 14 04:07:16.975: INFO: Deleting pod "simpletest-rc-to-be-deleted-8szxk" in namespace "gc-1765" +Jan 14 04:07:16.988: INFO: Deleting pod "simpletest-rc-to-be-deleted-97cp6" in namespace "gc-1765" +Jan 14 04:07:16.997: INFO: Deleting pod "simpletest-rc-to-be-deleted-9qf9m" in namespace "gc-1765" +Jan 14 04:07:17.012: INFO: Deleting pod "simpletest-rc-to-be-deleted-9wkbf" in namespace "gc-1765" +Jan 14 04:07:17.024: INFO: Deleting pod "simpletest-rc-to-be-deleted-b6w5p" in namespace "gc-1765" +Jan 14 04:07:17.037: INFO: Deleting pod "simpletest-rc-to-be-deleted-b9sst" in namespace "gc-1765" +Jan 14 04:07:17.050: INFO: Deleting pod "simpletest-rc-to-be-deleted-b9zgx" in namespace "gc-1765" +Jan 14 04:07:17.062: INFO: Deleting pod 
"simpletest-rc-to-be-deleted-bgrjh" in namespace "gc-1765" +Jan 14 04:07:17.074: INFO: Deleting pod "simpletest-rc-to-be-deleted-bqlw2" in namespace "gc-1765" +Jan 14 04:07:17.085: INFO: Deleting pod "simpletest-rc-to-be-deleted-btslx" in namespace "gc-1765" +Jan 14 04:07:17.098: INFO: Deleting pod "simpletest-rc-to-be-deleted-c66rt" in namespace "gc-1765" +Jan 14 04:07:17.108: INFO: Deleting pod "simpletest-rc-to-be-deleted-cq7lb" in namespace "gc-1765" +Jan 14 04:07:17.117: INFO: Deleting pod "simpletest-rc-to-be-deleted-cxlmj" in namespace "gc-1765" +Jan 14 04:07:17.131: INFO: Deleting pod "simpletest-rc-to-be-deleted-d7t7h" in namespace "gc-1765" +Jan 14 04:07:17.144: INFO: Deleting pod "simpletest-rc-to-be-deleted-ddr7w" in namespace "gc-1765" +Jan 14 04:07:17.159: INFO: Deleting pod "simpletest-rc-to-be-deleted-dgz86" in namespace "gc-1765" +Jan 14 04:07:17.170: INFO: Deleting pod "simpletest-rc-to-be-deleted-dmp7r" in namespace "gc-1765" +Jan 14 04:07:17.183: INFO: Deleting pod "simpletest-rc-to-be-deleted-f7jgq" in namespace "gc-1765" +Jan 14 04:07:17.197: INFO: Deleting pod "simpletest-rc-to-be-deleted-f9nch" in namespace "gc-1765" +Jan 14 04:07:17.210: INFO: Deleting pod "simpletest-rc-to-be-deleted-g67f2" in namespace "gc-1765" +Jan 14 04:07:17.219: INFO: Deleting pod "simpletest-rc-to-be-deleted-gfqk4" in namespace "gc-1765" +Jan 14 04:07:17.231: INFO: Deleting pod "simpletest-rc-to-be-deleted-glzml" in namespace "gc-1765" +Jan 14 04:07:17.240: INFO: Deleting pod "simpletest-rc-to-be-deleted-gmfm8" in namespace "gc-1765" +Jan 14 04:07:17.249: INFO: Deleting pod "simpletest-rc-to-be-deleted-hd7hs" in namespace "gc-1765" +Jan 14 04:07:17.261: INFO: Deleting pod "simpletest-rc-to-be-deleted-hdrs5" in namespace "gc-1765" +Jan 14 04:07:17.272: INFO: Deleting pod "simpletest-rc-to-be-deleted-hns9g" in namespace "gc-1765" +Jan 14 04:07:17.286: INFO: Deleting pod "simpletest-rc-to-be-deleted-hth4h" in namespace "gc-1765" +Jan 14 04:07:17.300: INFO: Deleting pod "simpletest-rc-to-be-deleted-jn94q" in namespace "gc-1765" +Jan 14 04:07:17.313: INFO: Deleting pod "simpletest-rc-to-be-deleted-kcft5" in namespace "gc-1765" +[AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/node/init/init.go:32 +Jan 14 04:07:17.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector + tear down framework | framework.go:193 +STEP: Destroying namespace "gc-1765" for this suite. 
01/14/23 04:07:17.331 +------------------------------ +• [SLOW TEST] [11.025 seconds] +[sig-api-machinery] Garbage collector +test/e2e/apimachinery/framework.go:23 + should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + test/e2e/apimachinery/garbage_collector.go:735 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Garbage collector + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:07:06.311 + Jan 14 04:07:06.311: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename gc 01/14/23 04:07:06.312 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:07:06.328 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:07:06.33 + [BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:31 + [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] + test/e2e/apimachinery/garbage_collector.go:735 + STEP: create the rc1 01/14/23 04:07:06.336 + STEP: create the rc2 01/14/23 04:07:06.344 + STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well 01/14/23 04:07:11.353 + STEP: delete the rc simpletest-rc-to-be-deleted 01/14/23 04:07:11.697 + STEP: wait for the rc to be deleted 01/14/23 04:07:11.703 + STEP: Gathering metrics 01/14/23 04:07:16.715 + Jan 14 04:07:16.742: INFO: Waiting up to 5m0s for pod "kube-controller-manager-10.0.1.231" in namespace "kube-system" to be "running and ready" + Jan 14 04:07:16.746: INFO: Pod "kube-controller-manager-10.0.1.231": Phase="Running", Reason="", readiness=true. 
Elapsed: 3.348003ms + Jan 14 04:07:16.746: INFO: The phase of Pod kube-controller-manager-10.0.1.231 is Running (Ready = true) + Jan 14 04:07:16.746: INFO: Pod "kube-controller-manager-10.0.1.231" satisfied condition "running and ready" + Jan 14 04:07:16.804: INFO: For apiserver_request_total: + For apiserver_request_latency_seconds: + For apiserver_init_events_total: + For garbage_collector_attempt_to_delete_queue_latency: + For garbage_collector_attempt_to_delete_work_duration: + For garbage_collector_attempt_to_orphan_queue_latency: + For garbage_collector_attempt_to_orphan_work_duration: + For garbage_collector_dirty_processing_latency_microseconds: + For garbage_collector_event_processing_latency_microseconds: + For garbage_collector_graph_changes_queue_latency: + For garbage_collector_graph_changes_work_duration: + For garbage_collector_orphan_processing_latency_microseconds: + For namespace_queue_latency: + For namespace_queue_latency_sum: + For namespace_queue_latency_count: + For namespace_retries: + For namespace_work_duration: + For namespace_work_duration_sum: + For namespace_work_duration_count: + For function_duration_seconds: + For errors_total: + For evicted_pods_total: + + Jan 14 04:07:16.804: INFO: Deleting pod "simpletest-rc-to-be-deleted-2fttv" in namespace "gc-1765" + Jan 14 04:07:16.813: INFO: Deleting pod "simpletest-rc-to-be-deleted-49g89" in namespace "gc-1765" + Jan 14 04:07:16.823: INFO: Deleting pod "simpletest-rc-to-be-deleted-4l7dr" in namespace "gc-1765" + Jan 14 04:07:16.833: INFO: Deleting pod "simpletest-rc-to-be-deleted-4r656" in namespace "gc-1765" + Jan 14 04:07:16.842: INFO: Deleting pod "simpletest-rc-to-be-deleted-4tzg5" in namespace "gc-1765" + Jan 14 04:07:16.851: INFO: Deleting pod "simpletest-rc-to-be-deleted-4v2jj" in namespace "gc-1765" + Jan 14 04:07:16.862: INFO: Deleting pod "simpletest-rc-to-be-deleted-55rjc" in namespace "gc-1765" + Jan 14 04:07:16.874: INFO: Deleting pod "simpletest-rc-to-be-deleted-5h4dd" in namespace "gc-1765" + Jan 14 04:07:16.885: INFO: Deleting pod "simpletest-rc-to-be-deleted-5lh25" in namespace "gc-1765" + Jan 14 04:07:16.897: INFO: Deleting pod "simpletest-rc-to-be-deleted-5rctm" in namespace "gc-1765" + Jan 14 04:07:16.906: INFO: Deleting pod "simpletest-rc-to-be-deleted-6sx46" in namespace "gc-1765" + Jan 14 04:07:16.918: INFO: Deleting pod "simpletest-rc-to-be-deleted-7ljb9" in namespace "gc-1765" + Jan 14 04:07:16.930: INFO: Deleting pod "simpletest-rc-to-be-deleted-7qthw" in namespace "gc-1765" + Jan 14 04:07:16.939: INFO: Deleting pod "simpletest-rc-to-be-deleted-82nq6" in namespace "gc-1765" + Jan 14 04:07:16.948: INFO: Deleting pod "simpletest-rc-to-be-deleted-8nmhs" in namespace "gc-1765" + Jan 14 04:07:16.959: INFO: Deleting pod "simpletest-rc-to-be-deleted-8pc5n" in namespace "gc-1765" + Jan 14 04:07:16.975: INFO: Deleting pod "simpletest-rc-to-be-deleted-8szxk" in namespace "gc-1765" + Jan 14 04:07:16.988: INFO: Deleting pod "simpletest-rc-to-be-deleted-97cp6" in namespace "gc-1765" + Jan 14 04:07:16.997: INFO: Deleting pod "simpletest-rc-to-be-deleted-9qf9m" in namespace "gc-1765" + Jan 14 04:07:17.012: INFO: Deleting pod "simpletest-rc-to-be-deleted-9wkbf" in namespace "gc-1765" + Jan 14 04:07:17.024: INFO: Deleting pod "simpletest-rc-to-be-deleted-b6w5p" in namespace "gc-1765" + Jan 14 04:07:17.037: INFO: Deleting pod "simpletest-rc-to-be-deleted-b9sst" in namespace "gc-1765" + Jan 14 04:07:17.050: INFO: Deleting pod "simpletest-rc-to-be-deleted-b9zgx" in namespace "gc-1765" + Jan 14 04:07:17.062: 
INFO: Deleting pod "simpletest-rc-to-be-deleted-bgrjh" in namespace "gc-1765" + Jan 14 04:07:17.074: INFO: Deleting pod "simpletest-rc-to-be-deleted-bqlw2" in namespace "gc-1765" + Jan 14 04:07:17.085: INFO: Deleting pod "simpletest-rc-to-be-deleted-btslx" in namespace "gc-1765" + Jan 14 04:07:17.098: INFO: Deleting pod "simpletest-rc-to-be-deleted-c66rt" in namespace "gc-1765" + Jan 14 04:07:17.108: INFO: Deleting pod "simpletest-rc-to-be-deleted-cq7lb" in namespace "gc-1765" + Jan 14 04:07:17.117: INFO: Deleting pod "simpletest-rc-to-be-deleted-cxlmj" in namespace "gc-1765" + Jan 14 04:07:17.131: INFO: Deleting pod "simpletest-rc-to-be-deleted-d7t7h" in namespace "gc-1765" + Jan 14 04:07:17.144: INFO: Deleting pod "simpletest-rc-to-be-deleted-ddr7w" in namespace "gc-1765" + Jan 14 04:07:17.159: INFO: Deleting pod "simpletest-rc-to-be-deleted-dgz86" in namespace "gc-1765" + Jan 14 04:07:17.170: INFO: Deleting pod "simpletest-rc-to-be-deleted-dmp7r" in namespace "gc-1765" + Jan 14 04:07:17.183: INFO: Deleting pod "simpletest-rc-to-be-deleted-f7jgq" in namespace "gc-1765" + Jan 14 04:07:17.197: INFO: Deleting pod "simpletest-rc-to-be-deleted-f9nch" in namespace "gc-1765" + Jan 14 04:07:17.210: INFO: Deleting pod "simpletest-rc-to-be-deleted-g67f2" in namespace "gc-1765" + Jan 14 04:07:17.219: INFO: Deleting pod "simpletest-rc-to-be-deleted-gfqk4" in namespace "gc-1765" + Jan 14 04:07:17.231: INFO: Deleting pod "simpletest-rc-to-be-deleted-glzml" in namespace "gc-1765" + Jan 14 04:07:17.240: INFO: Deleting pod "simpletest-rc-to-be-deleted-gmfm8" in namespace "gc-1765" + Jan 14 04:07:17.249: INFO: Deleting pod "simpletest-rc-to-be-deleted-hd7hs" in namespace "gc-1765" + Jan 14 04:07:17.261: INFO: Deleting pod "simpletest-rc-to-be-deleted-hdrs5" in namespace "gc-1765" + Jan 14 04:07:17.272: INFO: Deleting pod "simpletest-rc-to-be-deleted-hns9g" in namespace "gc-1765" + Jan 14 04:07:17.286: INFO: Deleting pod "simpletest-rc-to-be-deleted-hth4h" in namespace "gc-1765" + Jan 14 04:07:17.300: INFO: Deleting pod "simpletest-rc-to-be-deleted-jn94q" in namespace "gc-1765" + Jan 14 04:07:17.313: INFO: Deleting pod "simpletest-rc-to-be-deleted-kcft5" in namespace "gc-1765" + [AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/node/init/init.go:32 + Jan 14 04:07:17.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + tear down framework | framework.go:193 + STEP: Destroying namespace "gc-1765" for this suite. 
01/14/23 04:07:17.331 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should receive events on concurrent watches in same order [Conformance] + test/e2e/apimachinery/watch.go:334 +[BeforeEach] [sig-api-machinery] Watchers + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:07:17.338 +Jan 14 04:07:17.338: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename watch 01/14/23 04:07:17.339 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:07:17.353 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:07:17.356 +[BeforeEach] [sig-api-machinery] Watchers + test/e2e/framework/metrics/init/init.go:31 +[It] should receive events on concurrent watches in same order [Conformance] + test/e2e/apimachinery/watch.go:334 +STEP: getting a starting resourceVersion 01/14/23 04:07:17.358 +STEP: starting a background goroutine to produce watch events 01/14/23 04:07:17.361 +STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order 01/14/23 04:07:17.361 +[AfterEach] [sig-api-machinery] Watchers + test/e2e/framework/node/init/init.go:32 +Jan 14 04:07:20.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Watchers + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] Watchers + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] Watchers + tear down framework | framework.go:193 +STEP: Destroying namespace "watch-739" for this suite. 
01/14/23 04:07:20.195 +------------------------------ +• [2.908 seconds] +[sig-api-machinery] Watchers +test/e2e/apimachinery/framework.go:23 + should receive events on concurrent watches in same order [Conformance] + test/e2e/apimachinery/watch.go:334 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Watchers + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:07:17.338 + Jan 14 04:07:17.338: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename watch 01/14/23 04:07:17.339 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:07:17.353 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:07:17.356 + [BeforeEach] [sig-api-machinery] Watchers + test/e2e/framework/metrics/init/init.go:31 + [It] should receive events on concurrent watches in same order [Conformance] + test/e2e/apimachinery/watch.go:334 + STEP: getting a starting resourceVersion 01/14/23 04:07:17.358 + STEP: starting a background goroutine to produce watch events 01/14/23 04:07:17.361 + STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order 01/14/23 04:07:17.361 + [AfterEach] [sig-api-machinery] Watchers + test/e2e/framework/node/init/init.go:32 + Jan 14 04:07:20.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Watchers + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] Watchers + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] Watchers + tear down framework | framework.go:193 + STEP: Destroying namespace "watch-739" for this suite. 01/14/23 04:07:20.195 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-architecture] Conformance Tests + should have at least two untainted nodes [Conformance] + test/e2e/architecture/conformance.go:38 +[BeforeEach] [sig-architecture] Conformance Tests + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:07:20.247 +Jan 14 04:07:20.247: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename conformance-tests 01/14/23 04:07:20.248 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:07:20.263 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:07:20.265 +[BeforeEach] [sig-architecture] Conformance Tests + test/e2e/framework/metrics/init/init.go:31 +[It] should have at least two untainted nodes [Conformance] + test/e2e/architecture/conformance.go:38 +STEP: Getting node addresses 01/14/23 04:07:20.268 +Jan 14 04:07:20.268: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +[AfterEach] [sig-architecture] Conformance Tests + test/e2e/framework/node/init/init.go:32 +Jan 14 04:07:20.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-architecture] Conformance Tests + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-architecture] Conformance Tests + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-architecture] Conformance Tests + tear down framework | framework.go:193 +STEP: Destroying namespace "conformance-tests-784" for this suite. 
01/14/23 04:07:20.278 +------------------------------ +• [0.036 seconds] +[sig-architecture] Conformance Tests +test/e2e/architecture/framework.go:23 + should have at least two untainted nodes [Conformance] + test/e2e/architecture/conformance.go:38 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-architecture] Conformance Tests + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:07:20.247 + Jan 14 04:07:20.247: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename conformance-tests 01/14/23 04:07:20.248 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:07:20.263 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:07:20.265 + [BeforeEach] [sig-architecture] Conformance Tests + test/e2e/framework/metrics/init/init.go:31 + [It] should have at least two untainted nodes [Conformance] + test/e2e/architecture/conformance.go:38 + STEP: Getting node addresses 01/14/23 04:07:20.268 + Jan 14 04:07:20.268: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable + [AfterEach] [sig-architecture] Conformance Tests + test/e2e/framework/node/init/init.go:32 + Jan 14 04:07:20.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-architecture] Conformance Tests + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-architecture] Conformance Tests + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-architecture] Conformance Tests + tear down framework | framework.go:193 + STEP: Destroying namespace "conformance-tests-784" for this suite. 01/14/23 04:07:20.278 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + Replicaset should have a working scale subresource [Conformance] + test/e2e/apps/replica_set.go:143 +[BeforeEach] [sig-apps] ReplicaSet + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:07:20.284 +Jan 14 04:07:20.284: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename replicaset 01/14/23 04:07:20.284 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:07:20.299 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:07:20.302 +[BeforeEach] [sig-apps] ReplicaSet + test/e2e/framework/metrics/init/init.go:31 +[It] Replicaset should have a working scale subresource [Conformance] + test/e2e/apps/replica_set.go:143 +STEP: Creating replica set "test-rs" that asks for more than the allowed pod quota 01/14/23 04:07:20.304 +Jan 14 04:07:20.311: INFO: Pod name sample-pod: Found 0 pods out of 1 +Jan 14 04:07:25.315: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running 01/14/23 04:07:25.315 +STEP: getting scale subresource 01/14/23 04:07:25.315 +STEP: updating a scale subresource 01/14/23 04:07:25.32 +STEP: verifying the replicaset Spec.Replicas was modified 01/14/23 04:07:25.326 +STEP: Patch a scale subresource 01/14/23 04:07:25.329 +[AfterEach] [sig-apps] ReplicaSet + test/e2e/framework/node/init/init.go:32 +Jan 14 04:07:25.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] ReplicaSet + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] ReplicaSet + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] ReplicaSet + tear down 
framework | framework.go:193 +STEP: Destroying namespace "replicaset-2777" for this suite. 01/14/23 04:07:25.345 +------------------------------ +• [SLOW TEST] [5.070 seconds] +[sig-apps] ReplicaSet +test/e2e/apps/framework.go:23 + Replicaset should have a working scale subresource [Conformance] + test/e2e/apps/replica_set.go:143 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] ReplicaSet + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:07:20.284 + Jan 14 04:07:20.284: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename replicaset 01/14/23 04:07:20.284 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:07:20.299 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:07:20.302 + [BeforeEach] [sig-apps] ReplicaSet + test/e2e/framework/metrics/init/init.go:31 + [It] Replicaset should have a working scale subresource [Conformance] + test/e2e/apps/replica_set.go:143 + STEP: Creating replica set "test-rs" that asks for more than the allowed pod quota 01/14/23 04:07:20.304 + Jan 14 04:07:20.311: INFO: Pod name sample-pod: Found 0 pods out of 1 + Jan 14 04:07:25.315: INFO: Pod name sample-pod: Found 1 pods out of 1 + STEP: ensuring each pod is running 01/14/23 04:07:25.315 + STEP: getting scale subresource 01/14/23 04:07:25.315 + STEP: updating a scale subresource 01/14/23 04:07:25.32 + STEP: verifying the replicaset Spec.Replicas was modified 01/14/23 04:07:25.326 + STEP: Patch a scale subresource 01/14/23 04:07:25.329 + [AfterEach] [sig-apps] ReplicaSet + test/e2e/framework/node/init/init.go:32 + Jan 14 04:07:25.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] ReplicaSet + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] ReplicaSet + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] ReplicaSet + tear down framework | framework.go:193 + STEP: Destroying namespace "replicaset-2777" for this suite. 
01/14/23 04:07:25.345 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for ExternalName services [Conformance] + test/e2e/network/dns.go:333 +[BeforeEach] [sig-network] DNS + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:07:25.355 +Jan 14 04:07:25.355: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename dns 01/14/23 04:07:25.355 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:07:25.371 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:07:25.374 +[BeforeEach] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:31 +[It] should provide DNS for ExternalName services [Conformance] + test/e2e/network/dns.go:333 +STEP: Creating a test externalName service 01/14/23 04:07:25.376 +STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8246.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8246.svc.cluster.local; sleep 1; done + 01/14/23 04:07:25.382 +STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8246.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8246.svc.cluster.local; sleep 1; done + 01/14/23 04:07:25.382 +STEP: creating a pod to probe DNS 01/14/23 04:07:25.382 +STEP: submitting the pod to kubernetes 01/14/23 04:07:25.382 +Jan 14 04:07:25.391: INFO: Waiting up to 15m0s for pod "dns-test-ca2b78f4-b39d-4780-8415-e0eb7aeb757a" in namespace "dns-8246" to be "running" +Jan 14 04:07:25.394: INFO: Pod "dns-test-ca2b78f4-b39d-4780-8415-e0eb7aeb757a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.727511ms +Jan 14 04:07:27.399: INFO: Pod "dns-test-ca2b78f4-b39d-4780-8415-e0eb7aeb757a": Phase="Running", Reason="", readiness=true. Elapsed: 2.008151807s +Jan 14 04:07:27.399: INFO: Pod "dns-test-ca2b78f4-b39d-4780-8415-e0eb7aeb757a" satisfied condition "running" +STEP: retrieving the pod 01/14/23 04:07:27.399 +STEP: looking for the results for each expected name from probers 01/14/23 04:07:27.402 +Jan 14 04:07:27.411: INFO: DNS probes using dns-test-ca2b78f4-b39d-4780-8415-e0eb7aeb757a succeeded + +STEP: deleting the pod 01/14/23 04:07:27.411 +STEP: changing the externalName to bar.example.com 01/14/23 04:07:27.432 +STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8246.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8246.svc.cluster.local; sleep 1; done + 01/14/23 04:07:27.44 +STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8246.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8246.svc.cluster.local; sleep 1; done + 01/14/23 04:07:27.44 +STEP: creating a second pod to probe DNS 01/14/23 04:07:27.44 +STEP: submitting the pod to kubernetes 01/14/23 04:07:27.44 +Jan 14 04:07:27.447: INFO: Waiting up to 15m0s for pod "dns-test-4a8ff4e4-4124-40e5-aef9-9a03cd9241da" in namespace "dns-8246" to be "running" +Jan 14 04:07:27.450: INFO: Pod "dns-test-4a8ff4e4-4124-40e5-aef9-9a03cd9241da": Phase="Pending", Reason="", readiness=false. Elapsed: 3.061589ms +Jan 14 04:07:29.455: INFO: Pod "dns-test-4a8ff4e4-4124-40e5-aef9-9a03cd9241da": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.007884834s
+Jan 14 04:07:29.455: INFO: Pod "dns-test-4a8ff4e4-4124-40e5-aef9-9a03cd9241da" satisfied condition "running"
+STEP: retrieving the pod 01/14/23 04:07:29.455
+STEP: looking for the results for each expected name from probers 01/14/23 04:07:29.458
+Jan 14 04:07:29.461: INFO: File wheezy_udp@dns-test-service-3.dns-8246.svc.cluster.local from pod dns-8246/dns-test-4a8ff4e4-4124-40e5-aef9-9a03cd9241da contains 'foo.example.com.
+' instead of 'bar.example.com.'
+Jan 14 04:07:29.465: INFO: File jessie_udp@dns-test-service-3.dns-8246.svc.cluster.local from pod dns-8246/dns-test-4a8ff4e4-4124-40e5-aef9-9a03cd9241da contains 'foo.example.com.
+' instead of 'bar.example.com.'
+Jan 14 04:07:29.465: INFO: Lookups using dns-8246/dns-test-4a8ff4e4-4124-40e5-aef9-9a03cd9241da failed for: [wheezy_udp@dns-test-service-3.dns-8246.svc.cluster.local jessie_udp@dns-test-service-3.dns-8246.svc.cluster.local]
+
+Jan 14 04:07:34.475: INFO: DNS probes using dns-test-4a8ff4e4-4124-40e5-aef9-9a03cd9241da succeeded
+
+STEP: deleting the pod 01/14/23 04:07:34.475
+STEP: changing the service to type=ClusterIP 01/14/23 04:07:34.487
+STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8246.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8246.svc.cluster.local; sleep 1; done
+ 01/14/23 04:07:34.499
+STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8246.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8246.svc.cluster.local; sleep 1; done
+ 01/14/23 04:07:34.499
+STEP: creating a third pod to probe DNS 01/14/23 04:07:34.499
+STEP: submitting the pod to kubernetes 01/14/23 04:07:34.502
+Jan 14 04:07:34.509: INFO: Waiting up to 15m0s for pod "dns-test-fc59197b-899f-47ff-9247-52be1760fb26" in namespace "dns-8246" to be "running"
+Jan 14 04:07:34.512: INFO: Pod "dns-test-fc59197b-899f-47ff-9247-52be1760fb26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.866073ms
+Jan 14 04:07:36.516: INFO: Pod "dns-test-fc59197b-899f-47ff-9247-52be1760fb26": Phase="Running", Reason="", readiness=true. Elapsed: 2.007391567s
+Jan 14 04:07:36.516: INFO: Pod "dns-test-fc59197b-899f-47ff-9247-52be1760fb26" satisfied condition "running"
+STEP: retrieving the pod 01/14/23 04:07:36.516
+STEP: looking for the results for each expected name from probers 01/14/23 04:07:36.52
+Jan 14 04:07:36.527: INFO: DNS probes using dns-test-fc59197b-899f-47ff-9247-52be1760fb26 succeeded
+
+STEP: deleting the pod 01/14/23 04:07:36.527
+STEP: deleting the test externalName service 01/14/23 04:07:36.541
+[AfterEach] [sig-network] DNS
+ test/e2e/framework/node/init/init.go:32
+Jan 14 04:07:36.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-network] DNS
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-network] DNS
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-network] DNS
+ tear down framework | framework.go:193
+STEP: Destroying namespace "dns-8246" for this suite. 01/14/23 04:07:36.561
+------------------------------
+• [SLOW TEST] [11.214 seconds]
+[sig-network] DNS
+test/e2e/network/common/framework.go:23
+ should provide DNS for ExternalName services [Conformance]
+ test/e2e/network/dns.go:333
+
+ Begin Captured GinkgoWriter Output >>
+ [BeforeEach] [sig-network] DNS
+ set up framework | framework.go:178
+ STEP: Creating a kubernetes client 01/14/23 04:07:25.355
+ Jan 14 04:07:25.355: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+ STEP: Building a namespace api object, basename dns 01/14/23 04:07:25.355
+ STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:07:25.371
+ STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:07:25.374
+ [BeforeEach] [sig-network] DNS
+ test/e2e/framework/metrics/init/init.go:31
+ [It] should provide DNS for ExternalName services [Conformance]
+ test/e2e/network/dns.go:333
+ STEP: Creating a test externalName service 01/14/23 04:07:25.376
+ STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8246.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8246.svc.cluster.local; sleep 1; done
+ 01/14/23 04:07:25.382
+ STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8246.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8246.svc.cluster.local; sleep 1; done
+ 01/14/23 04:07:25.382
+ STEP: creating a pod to probe DNS 01/14/23 04:07:25.382
+ STEP: submitting the pod to kubernetes 01/14/23 04:07:25.382
+ Jan 14 04:07:25.391: INFO: Waiting up to 15m0s for pod "dns-test-ca2b78f4-b39d-4780-8415-e0eb7aeb757a" in namespace "dns-8246" to be "running"
+ Jan 14 04:07:25.394: INFO: Pod "dns-test-ca2b78f4-b39d-4780-8415-e0eb7aeb757a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.727511ms
+ Jan 14 04:07:27.399: INFO: Pod "dns-test-ca2b78f4-b39d-4780-8415-e0eb7aeb757a": Phase="Running", Reason="", readiness=true. Elapsed: 2.008151807s
+ Jan 14 04:07:27.399: INFO: Pod "dns-test-ca2b78f4-b39d-4780-8415-e0eb7aeb757a" satisfied condition "running"
+ STEP: retrieving the pod 01/14/23 04:07:27.399
+ STEP: looking for the results for each expected name from probers 01/14/23 04:07:27.402
+ Jan 14 04:07:27.411: INFO: DNS probes using dns-test-ca2b78f4-b39d-4780-8415-e0eb7aeb757a succeeded
+
+ STEP: deleting the pod 01/14/23 04:07:27.411
+ STEP: changing the externalName to bar.example.com 01/14/23 04:07:27.432
+ STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8246.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8246.svc.cluster.local; sleep 1; done
+ 01/14/23 04:07:27.44
+ STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8246.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8246.svc.cluster.local; sleep 1; done
+ 01/14/23 04:07:27.44
+ STEP: creating a second pod to probe DNS 01/14/23 04:07:27.44
+ STEP: submitting the pod to kubernetes 01/14/23 04:07:27.44
+ Jan 14 04:07:27.447: INFO: Waiting up to 15m0s for pod "dns-test-4a8ff4e4-4124-40e5-aef9-9a03cd9241da" in namespace "dns-8246" to be "running"
+ Jan 14 04:07:27.450: INFO: Pod "dns-test-4a8ff4e4-4124-40e5-aef9-9a03cd9241da": Phase="Pending", Reason="", readiness=false. Elapsed: 3.061589ms
+ Jan 14 04:07:29.455: INFO: Pod "dns-test-4a8ff4e4-4124-40e5-aef9-9a03cd9241da": Phase="Running", Reason="", readiness=true. Elapsed: 2.007884834s
+ Jan 14 04:07:29.455: INFO: Pod "dns-test-4a8ff4e4-4124-40e5-aef9-9a03cd9241da" satisfied condition "running"
+ STEP: retrieving the pod 01/14/23 04:07:29.455
+ STEP: looking for the results for each expected name from probers 01/14/23 04:07:29.458
+ Jan 14 04:07:29.461: INFO: File wheezy_udp@dns-test-service-3.dns-8246.svc.cluster.local from pod dns-8246/dns-test-4a8ff4e4-4124-40e5-aef9-9a03cd9241da contains 'foo.example.com.
+ ' instead of 'bar.example.com.'
+ Jan 14 04:07:29.465: INFO: File jessie_udp@dns-test-service-3.dns-8246.svc.cluster.local from pod dns-8246/dns-test-4a8ff4e4-4124-40e5-aef9-9a03cd9241da contains 'foo.example.com.
+ ' instead of 'bar.example.com.'
+ Jan 14 04:07:29.465: INFO: Lookups using dns-8246/dns-test-4a8ff4e4-4124-40e5-aef9-9a03cd9241da failed for: [wheezy_udp@dns-test-service-3.dns-8246.svc.cluster.local jessie_udp@dns-test-service-3.dns-8246.svc.cluster.local]
+
+ Jan 14 04:07:34.475: INFO: DNS probes using dns-test-4a8ff4e4-4124-40e5-aef9-9a03cd9241da succeeded
+
+ STEP: deleting the pod 01/14/23 04:07:34.475
+ STEP: changing the service to type=ClusterIP 01/14/23 04:07:34.487
+ STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8246.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8246.svc.cluster.local; sleep 1; done
+ 01/14/23 04:07:34.499
+ STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8246.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8246.svc.cluster.local; sleep 1; done
+ 01/14/23 04:07:34.499
+ STEP: creating a third pod to probe DNS 01/14/23 04:07:34.499
+ STEP: submitting the pod to kubernetes 01/14/23 04:07:34.502
+ Jan 14 04:07:34.509: INFO: Waiting up to 15m0s for pod "dns-test-fc59197b-899f-47ff-9247-52be1760fb26" in namespace "dns-8246" to be "running"
+ Jan 14 04:07:34.512: INFO: Pod "dns-test-fc59197b-899f-47ff-9247-52be1760fb26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.866073ms
+ Jan 14 04:07:36.516: INFO: Pod "dns-test-fc59197b-899f-47ff-9247-52be1760fb26": Phase="Running", Reason="", readiness=true. Elapsed: 2.007391567s
+ Jan 14 04:07:36.516: INFO: Pod "dns-test-fc59197b-899f-47ff-9247-52be1760fb26" satisfied condition "running"
+ STEP: retrieving the pod 01/14/23 04:07:36.516
+ STEP: looking for the results for each expected name from probers 01/14/23 04:07:36.52
+ Jan 14 04:07:36.527: INFO: DNS probes using dns-test-fc59197b-899f-47ff-9247-52be1760fb26 succeeded
+
+ STEP: deleting the pod 01/14/23 04:07:36.527
+ STEP: deleting the test externalName service 01/14/23 04:07:36.541
+ [AfterEach] [sig-network] DNS
+ test/e2e/framework/node/init/init.go:32
+ Jan 14 04:07:36.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+ [DeferCleanup (Each)] [sig-network] DNS
+ test/e2e/framework/metrics/init/init.go:33
+ [DeferCleanup (Each)] [sig-network] DNS
+ dump namespaces | framework.go:196
+ [DeferCleanup (Each)] [sig-network] DNS
+ tear down framework | framework.go:193
+ STEP: Destroying namespace "dns-8246" for this suite. 01/14/23 04:07:36.561
+ << End Captured GinkgoWriter Output
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
+ custom resource defaulting for requests and from storage works [Conformance]
+ test/e2e/apimachinery/custom_resource_definition.go:269
+[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 04:07:36.57
+Jan 14 04:07:36.570: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename custom-resource-definition 01/14/23 04:07:36.571
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:07:36.585
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:07:36.587
+[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
+ test/e2e/framework/metrics/init/init.go:31
+[It] custom resource defaulting for requests and from storage works [Conformance]
+ test/e2e/apimachinery/custom_resource_definition.go:269
+Jan 14 04:07:36.590: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
+ test/e2e/framework/node/init/init.go:32
+Jan 14 04:07:39.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
+ tear down framework | framework.go:193
+STEP: Destroying namespace "custom-resource-definition-2784" for this suite. 01/14/23 04:07:39.717
+------------------------------
+• [3.153 seconds]
+[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
+test/e2e/apimachinery/framework.go:23
+ custom resource defaulting for requests and from storage works [Conformance]
+ test/e2e/apimachinery/custom_resource_definition.go:269
+
+ Begin Captured GinkgoWriter Output >>
+ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
+ set up framework | framework.go:178
+ STEP: Creating a kubernetes client 01/14/23 04:07:36.57
+ Jan 14 04:07:36.570: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+ STEP: Building a namespace api object, basename custom-resource-definition 01/14/23 04:07:36.571
+ STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:07:36.585
+ STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:07:36.587
+ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
+ test/e2e/framework/metrics/init/init.go:31
+ [It] custom resource defaulting for requests and from storage works [Conformance]
+ test/e2e/apimachinery/custom_resource_definition.go:269
+ Jan 14 04:07:36.590: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+ [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
+ test/e2e/framework/node/init/init.go:32
+ Jan 14 04:07:39.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+ [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
+ test/e2e/framework/metrics/init/init.go:33
+ [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
+ dump namespaces | framework.go:196
+ [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
+ tear down framework | framework.go:193
+ STEP: Destroying namespace "custom-resource-definition-2784" for this suite. 01/14/23 04:07:39.717
+ << End Captured GinkgoWriter Output
+------------------------------
+[sig-network] EndpointSliceMirroring
+ should mirror a custom Endpoints resource through create update and delete [Conformance]
+ test/e2e/network/endpointslicemirroring.go:53
+[BeforeEach] [sig-network] EndpointSliceMirroring
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 04:07:39.724
+Jan 14 04:07:39.724: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename endpointslicemirroring 01/14/23 04:07:39.724
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:07:39.739
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:07:39.741
+[BeforeEach] [sig-network] EndpointSliceMirroring
+ test/e2e/framework/metrics/init/init.go:31
+[BeforeEach] [sig-network] EndpointSliceMirroring
+ test/e2e/network/endpointslicemirroring.go:41
+[It] should mirror a custom Endpoints resource through create update and delete [Conformance]
+ test/e2e/network/endpointslicemirroring.go:53
+STEP: mirroring a new custom Endpoint 01/14/23 04:07:39.752
+Jan 14 04:07:39.760: INFO: Waiting for at least 1 EndpointSlice to exist, got 0
+STEP: mirroring an update to a custom Endpoint 01/14/23 04:07:41.764
+Jan 14 04:07:41.772: INFO: Expected EndpointSlice to have 10.2.3.4 as address, got 10.1.2.3
+STEP: mirroring deletion of a custom Endpoint 01/14/23 04:07:43.777
+Jan 14 04:07:43.786: INFO: Waiting for 0 EndpointSlices to exist, got 1
+[AfterEach] [sig-network] EndpointSliceMirroring
+ test/e2e/framework/node/init/init.go:32
+Jan 14 04:07:45.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-network] EndpointSliceMirroring
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-network] EndpointSliceMirroring
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-network] EndpointSliceMirroring
+ tear down framework | framework.go:193
+STEP: Destroying namespace "endpointslicemirroring-9187" for this suite. 01/14/23 04:07:45.794
+------------------------------
+• [SLOW TEST] [6.076 seconds]
+[sig-network] EndpointSliceMirroring
+test/e2e/network/common/framework.go:23
+ should mirror a custom Endpoints resource through create update and delete [Conformance]
+ test/e2e/network/endpointslicemirroring.go:53
+
+ Begin Captured GinkgoWriter Output >>
+ [BeforeEach] [sig-network] EndpointSliceMirroring
+ set up framework | framework.go:178
+ STEP: Creating a kubernetes client 01/14/23 04:07:39.724
+ Jan 14 04:07:39.724: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+ STEP: Building a namespace api object, basename endpointslicemirroring 01/14/23 04:07:39.724
+ STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:07:39.739
+ STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:07:39.741
+ [BeforeEach] [sig-network] EndpointSliceMirroring
+ test/e2e/framework/metrics/init/init.go:31
+ [BeforeEach] [sig-network] EndpointSliceMirroring
+ test/e2e/network/endpointslicemirroring.go:41
+ [It] should mirror a custom Endpoints resource through create update and delete [Conformance]
+ test/e2e/network/endpointslicemirroring.go:53
+ STEP: mirroring a new custom Endpoint 01/14/23 04:07:39.752
+ Jan 14 04:07:39.760: INFO: Waiting for at least 1 EndpointSlice to exist, got 0
+ STEP: mirroring an update to a custom Endpoint 01/14/23 04:07:41.764
+ Jan 14 04:07:41.772: INFO: Expected EndpointSlice to have 10.2.3.4 as address, got 10.1.2.3
+ STEP: mirroring deletion of a custom Endpoint 01/14/23 04:07:43.777
+ Jan 14 04:07:43.786: INFO: Waiting for 0 EndpointSlices to exist, got 1
+ [AfterEach] [sig-network] EndpointSliceMirroring
+ test/e2e/framework/node/init/init.go:32
+ Jan 14 04:07:45.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+ [DeferCleanup (Each)] [sig-network] EndpointSliceMirroring
+ test/e2e/framework/metrics/init/init.go:33
+ [DeferCleanup (Each)] [sig-network] EndpointSliceMirroring
+ dump namespaces | framework.go:196
+ [DeferCleanup (Each)] [sig-network] EndpointSliceMirroring
+ tear down framework | framework.go:193
+ STEP: Destroying namespace "endpointslicemirroring-9187" for this suite. 01/14/23 04:07:45.794
+ << End Captured GinkgoWriter Output
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected combined
+ should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
+ test/e2e/common/storage/projected_combined.go:44
+[BeforeEach] [sig-storage] Projected combined
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 04:07:45.801
+Jan 14 04:07:45.801: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename projected 01/14/23 04:07:45.802
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:07:45.817
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:07:45.819
+[BeforeEach] [sig-storage] Projected combined
+ test/e2e/framework/metrics/init/init.go:31
+[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
+ test/e2e/common/storage/projected_combined.go:44
+STEP: Creating configMap with name configmap-projected-all-test-volume-ae520798-a2ee-40af-9419-d42acdc75101 01/14/23 04:07:45.827
+STEP: Creating secret with name secret-projected-all-test-volume-4f149596-16ef-446d-b198-43f8b23b38eb 01/14/23 04:07:45.839
+STEP: Creating a pod to test Check all projections for projected volume plugin 01/14/23 04:07:45.847
+Jan 14 04:07:45.861: INFO: Waiting up to 5m0s for pod "projected-volume-38c3ee8c-b0ca-4af2-9972-646693f952f3" in namespace "projected-4827" to be "Succeeded or Failed"
+Jan 14 04:07:45.864: INFO: Pod "projected-volume-38c3ee8c-b0ca-4af2-9972-646693f952f3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.027158ms
+Jan 14 04:07:47.869: INFO: Pod "projected-volume-38c3ee8c-b0ca-4af2-9972-646693f952f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007848071s
+Jan 14 04:07:49.868: INFO: Pod "projected-volume-38c3ee8c-b0ca-4af2-9972-646693f952f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007512552s
+STEP: Saw pod success 01/14/23 04:07:49.868
+Jan 14 04:07:49.869: INFO: Pod "projected-volume-38c3ee8c-b0ca-4af2-9972-646693f952f3" satisfied condition "Succeeded or Failed"
+Jan 14 04:07:49.872: INFO: Trying to get logs from node 10.0.1.106 pod projected-volume-38c3ee8c-b0ca-4af2-9972-646693f952f3 container projected-all-volume-test:
+STEP: delete the pod 01/14/23 04:07:49.878
+Jan 14 04:07:49.891: INFO: Waiting for pod projected-volume-38c3ee8c-b0ca-4af2-9972-646693f952f3 to disappear
+Jan 14 04:07:49.894: INFO: Pod projected-volume-38c3ee8c-b0ca-4af2-9972-646693f952f3 no longer exists
+[AfterEach] [sig-storage] Projected combined
+ test/e2e/framework/node/init/init.go:32
+Jan 14 04:07:49.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-storage] Projected combined
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-storage] Projected combined
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-storage] Projected combined
+ tear down framework | framework.go:193
+STEP: Destroying namespace "projected-4827" for this suite. 01/14/23 04:07:49.898
+------------------------------
+• [4.102 seconds]
+[sig-storage] Projected combined
+test/e2e/common/storage/framework.go:23
+ should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
+ test/e2e/common/storage/projected_combined.go:44
+
+ Begin Captured GinkgoWriter Output >>
+ [BeforeEach] [sig-storage] Projected combined
+ set up framework | framework.go:178
+ STEP: Creating a kubernetes client 01/14/23 04:07:45.801
+ Jan 14 04:07:45.801: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+ STEP: Building a namespace api object, basename projected 01/14/23 04:07:45.802
+ STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:07:45.817
+ STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:07:45.819
+ [BeforeEach] [sig-storage] Projected combined
+ test/e2e/framework/metrics/init/init.go:31
+ [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
+ test/e2e/common/storage/projected_combined.go:44
+ STEP: Creating configMap with name configmap-projected-all-test-volume-ae520798-a2ee-40af-9419-d42acdc75101 01/14/23 04:07:45.827
+ STEP: Creating secret with name secret-projected-all-test-volume-4f149596-16ef-446d-b198-43f8b23b38eb 01/14/23 04:07:45.839
+ STEP: Creating a pod to test Check all projections for projected volume plugin 01/14/23 04:07:45.847
+ Jan 14 04:07:45.861: INFO: Waiting up to 5m0s for pod "projected-volume-38c3ee8c-b0ca-4af2-9972-646693f952f3" in namespace "projected-4827" to be "Succeeded or Failed"
+ Jan 14 04:07:45.864: INFO: Pod "projected-volume-38c3ee8c-b0ca-4af2-9972-646693f952f3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.027158ms
+ Jan 14 04:07:47.869: INFO: Pod "projected-volume-38c3ee8c-b0ca-4af2-9972-646693f952f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007848071s
+ Jan 14 04:07:49.868: INFO: Pod "projected-volume-38c3ee8c-b0ca-4af2-9972-646693f952f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007512552s
+ STEP: Saw pod success 01/14/23 04:07:49.868
+ Jan 14 04:07:49.869: INFO: Pod "projected-volume-38c3ee8c-b0ca-4af2-9972-646693f952f3" satisfied condition "Succeeded or Failed"
+ Jan 14 04:07:49.872: INFO: Trying to get logs from node 10.0.1.106 pod projected-volume-38c3ee8c-b0ca-4af2-9972-646693f952f3 container projected-all-volume-test:
+ STEP: delete the pod 01/14/23 04:07:49.878
+ Jan 14 04:07:49.891: INFO: Waiting for pod projected-volume-38c3ee8c-b0ca-4af2-9972-646693f952f3 to disappear
+ Jan 14 04:07:49.894: INFO: Pod projected-volume-38c3ee8c-b0ca-4af2-9972-646693f952f3 no longer exists
+ [AfterEach] [sig-storage] Projected combined
+ test/e2e/framework/node/init/init.go:32
+ Jan 14 04:07:49.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+ [DeferCleanup (Each)] [sig-storage] Projected combined
+ test/e2e/framework/metrics/init/init.go:33
+ [DeferCleanup (Each)] [sig-storage] Projected combined
+ dump namespaces | framework.go:196
+ [DeferCleanup (Each)] [sig-storage] Projected combined
+ tear down framework | framework.go:193
+ STEP: Destroying namespace "projected-4827" for this suite. 01/14/23 04:07:49.898
+ << End Captured GinkgoWriter Output
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-apps] CronJob
+ should support CronJob API operations [Conformance]
+ test/e2e/apps/cronjob.go:319
+[BeforeEach] [sig-apps] CronJob
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 04:07:49.908
+Jan 14 04:07:49.908: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename cronjob 01/14/23 04:07:49.909
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:07:49.923
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:07:49.925
+[BeforeEach] [sig-apps] CronJob
+ test/e2e/framework/metrics/init/init.go:31
+[It] should support CronJob API operations [Conformance]
+ test/e2e/apps/cronjob.go:319
+STEP: Creating a cronjob 01/14/23 04:07:49.927
+STEP: creating 01/14/23 04:07:49.927
+STEP: getting 01/14/23 04:07:49.932
+STEP: listing 01/14/23 04:07:49.934
+STEP: watching 01/14/23 04:07:49.937
+Jan 14 04:07:49.937: INFO: starting watch
+STEP: cluster-wide listing 01/14/23 04:07:49.938
+STEP: cluster-wide watching 01/14/23 04:07:49.94
+Jan 14 04:07:49.940: INFO: starting watch
+STEP: patching 01/14/23 04:07:49.941
+STEP: updating 01/14/23 04:07:49.947
+Jan 14 04:07:49.954: INFO: waiting for watch events with expected annotations
+Jan 14 04:07:49.954: INFO: saw patched and updated annotations
+STEP: patching /status 01/14/23 04:07:49.954
+STEP: updating /status 01/14/23 04:07:49.96
+STEP: get /status 01/14/23 04:07:49.965
+STEP: deleting 01/14/23 04:07:49.968
+STEP: deleting a collection 01/14/23 04:07:49.977
+[AfterEach] [sig-apps] CronJob
+ test/e2e/framework/node/init/init.go:32
+Jan 14 04:07:49.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-apps] CronJob
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-apps] CronJob
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-apps] CronJob
+ tear down framework | framework.go:193
+STEP: Destroying namespace "cronjob-2362" for this suite. 01/14/23 04:07:49.989
+------------------------------
+• [0.086 seconds]
+[sig-apps] CronJob
+test/e2e/apps/framework.go:23
+ should support CronJob API operations [Conformance]
+ test/e2e/apps/cronjob.go:319
+
+ Begin Captured GinkgoWriter Output >>
+ [BeforeEach] [sig-apps] CronJob
+ set up framework | framework.go:178
+ STEP: Creating a kubernetes client 01/14/23 04:07:49.908
+ Jan 14 04:07:49.908: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+ STEP: Building a namespace api object, basename cronjob 01/14/23 04:07:49.909
+ STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:07:49.923
+ STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:07:49.925
+ [BeforeEach] [sig-apps] CronJob
+ test/e2e/framework/metrics/init/init.go:31
+ [It] should support CronJob API operations [Conformance]
+ test/e2e/apps/cronjob.go:319
+ STEP: Creating a cronjob 01/14/23 04:07:49.927
+ STEP: creating 01/14/23 04:07:49.927
+ STEP: getting 01/14/23 04:07:49.932
+ STEP: listing 01/14/23 04:07:49.934
+ STEP: watching 01/14/23 04:07:49.937
+ Jan 14 04:07:49.937: INFO: starting watch
+ STEP: cluster-wide listing 01/14/23 04:07:49.938
+ STEP: cluster-wide watching 01/14/23 04:07:49.94
+ Jan 14 04:07:49.940: INFO: starting watch
+ STEP: patching 01/14/23 04:07:49.941
+ STEP: updating 01/14/23 04:07:49.947
+ Jan 14 04:07:49.954: INFO: waiting for watch events with expected annotations
+ Jan 14 04:07:49.954: INFO: saw patched and updated annotations
+ STEP: patching /status 01/14/23 04:07:49.954
+ STEP: updating /status 01/14/23 04:07:49.96
+ STEP: get /status 01/14/23 04:07:49.965
+ STEP: deleting 01/14/23 04:07:49.968
+ STEP: deleting a collection 01/14/23 04:07:49.977
+ [AfterEach] [sig-apps] CronJob
+ test/e2e/framework/node/init/init.go:32
+ Jan 14 04:07:49.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+ [DeferCleanup (Each)] [sig-apps] CronJob
+ test/e2e/framework/metrics/init/init.go:33
+ [DeferCleanup (Each)] [sig-apps] CronJob
+ dump namespaces | framework.go:196
+ [DeferCleanup (Each)] [sig-apps] CronJob
+ tear down framework | framework.go:193
+ STEP: Destroying namespace "cronjob-2362" for this suite. 01/14/23 04:07:49.989
+ << End Captured GinkgoWriter Output
+------------------------------
+SSSS
+------------------------------
+[sig-storage] EmptyDir volumes
+ should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+ test/e2e/common/storage/empty_dir.go:107
+[BeforeEach] [sig-storage] EmptyDir volumes
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 04:07:49.994
+Jan 14 04:07:49.994: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename emptydir 01/14/23 04:07:49.995
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:07:50.009
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:07:50.011
+[BeforeEach] [sig-storage] EmptyDir volumes
+ test/e2e/framework/metrics/init/init.go:31
+[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+ test/e2e/common/storage/empty_dir.go:107
+STEP: Creating a pod to test emptydir 0666 on tmpfs 01/14/23 04:07:50.013
+Jan 14 04:07:50.022: INFO: Waiting up to 5m0s for pod "pod-7652f4cb-1b52-4783-b452-9313bdb7bf1b" in namespace "emptydir-4325" to be "Succeeded or Failed"
+Jan 14 04:07:50.025: INFO: Pod "pod-7652f4cb-1b52-4783-b452-9313bdb7bf1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.681575ms
+Jan 14 04:07:52.029: INFO: Pod "pod-7652f4cb-1b52-4783-b452-9313bdb7bf1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007003855s
+Jan 14 04:07:54.030: INFO: Pod "pod-7652f4cb-1b52-4783-b452-9313bdb7bf1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008235787s
+STEP: Saw pod success 01/14/23 04:07:54.03
+Jan 14 04:07:54.030: INFO: Pod "pod-7652f4cb-1b52-4783-b452-9313bdb7bf1b" satisfied condition "Succeeded or Failed"
+Jan 14 04:07:54.033: INFO: Trying to get logs from node 10.0.1.106 pod pod-7652f4cb-1b52-4783-b452-9313bdb7bf1b container test-container:
+STEP: delete the pod 01/14/23 04:07:54.039
+Jan 14 04:07:54.055: INFO: Waiting for pod pod-7652f4cb-1b52-4783-b452-9313bdb7bf1b to disappear
+Jan 14 04:07:54.058: INFO: Pod pod-7652f4cb-1b52-4783-b452-9313bdb7bf1b no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+ test/e2e/framework/node/init/init.go:32
+Jan 14 04:07:54.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-storage] EmptyDir volumes
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-storage] EmptyDir volumes
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-storage] EmptyDir volumes
+ tear down framework | framework.go:193
+STEP: Destroying namespace "emptydir-4325" for this suite. 01/14/23 04:07:54.064
+------------------------------
+• [4.077 seconds]
+[sig-storage] EmptyDir volumes
+test/e2e/common/storage/framework.go:23
+ should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+ test/e2e/common/storage/empty_dir.go:107
+
+ Begin Captured GinkgoWriter Output >>
+ [BeforeEach] [sig-storage] EmptyDir volumes
+ set up framework | framework.go:178
+ STEP: Creating a kubernetes client 01/14/23 04:07:49.994
+ Jan 14 04:07:49.994: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+ STEP: Building a namespace api object, basename emptydir 01/14/23 04:07:49.995
+ STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:07:50.009
+ STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:07:50.011
+ [BeforeEach] [sig-storage] EmptyDir volumes
+ test/e2e/framework/metrics/init/init.go:31
+ [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+ test/e2e/common/storage/empty_dir.go:107
+ STEP: Creating a pod to test emptydir 0666 on tmpfs 01/14/23 04:07:50.013
+ Jan 14 04:07:50.022: INFO: Waiting up to 5m0s for pod "pod-7652f4cb-1b52-4783-b452-9313bdb7bf1b" in namespace "emptydir-4325" to be "Succeeded or Failed"
+ Jan 14 04:07:50.025: INFO: Pod "pod-7652f4cb-1b52-4783-b452-9313bdb7bf1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.681575ms
+ Jan 14 04:07:52.029: INFO: Pod "pod-7652f4cb-1b52-4783-b452-9313bdb7bf1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007003855s
+ Jan 14 04:07:54.030: INFO: Pod "pod-7652f4cb-1b52-4783-b452-9313bdb7bf1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008235787s
+ STEP: Saw pod success 01/14/23 04:07:54.03
+ Jan 14 04:07:54.030: INFO: Pod "pod-7652f4cb-1b52-4783-b452-9313bdb7bf1b" satisfied condition "Succeeded or Failed"
+ Jan 14 04:07:54.033: INFO: Trying to get logs from node 10.0.1.106 pod pod-7652f4cb-1b52-4783-b452-9313bdb7bf1b container test-container:
+ STEP: delete the pod 01/14/23 04:07:54.039
+ Jan 14 04:07:54.055: INFO: Waiting for pod pod-7652f4cb-1b52-4783-b452-9313bdb7bf1b to disappear
+ Jan 14 04:07:54.058: INFO: Pod pod-7652f4cb-1b52-4783-b452-9313bdb7bf1b no longer exists
+ [AfterEach] [sig-storage] EmptyDir volumes
+ test/e2e/framework/node/init/init.go:32
+ Jan 14 04:07:54.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+ [DeferCleanup (Each)] [sig-storage] EmptyDir volumes
+ test/e2e/framework/metrics/init/init.go:33
+ [DeferCleanup (Each)] [sig-storage] EmptyDir volumes
+ dump namespaces | framework.go:196
+ [DeferCleanup (Each)] [sig-storage] EmptyDir volumes
+ tear down framework | framework.go:193
+ STEP: Destroying namespace "emptydir-4325" for this suite. 01/14/23 04:07:54.064
+ << End Captured GinkgoWriter Output
+------------------------------
+SSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+ should mutate custom resource with different stored version [Conformance]
+ test/e2e/apimachinery/webhook.go:323
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 04:07:54.072
+Jan 14 04:07:54.072: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename webhook 01/14/23 04:07:54.073
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:07:54.088
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:07:54.09
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+ test/e2e/framework/metrics/init/init.go:31
+[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+ test/e2e/apimachinery/webhook.go:90
+STEP: Setting up server cert 01/14/23 04:07:54.103
+STEP: Create role binding to let webhook read extension-apiserver-authentication 01/14/23 04:07:54.501
+STEP: Deploying the webhook pod 01/14/23 04:07:54.509
+STEP: Wait for the deployment to be ready 01/14/23 04:07:54.519
+Jan 14 04:07:54.526: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
+STEP: Deploying the webhook service 01/14/23 04:07:56.537
+STEP: Verifying the service has paired with the endpoint 01/14/23 04:07:56.547
+Jan 14 04:07:57.547: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
+[It] should mutate custom resource with different stored version [Conformance]
+ test/e2e/apimachinery/webhook.go:323
+Jan 14 04:07:57.551: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8711-crds.webhook.example.com via the AdmissionRegistration API 01/14/23 04:07:58.063
+STEP: Creating a custom resource while v1 is storage version 01/14/23 04:07:58.077
+STEP: Patching Custom Resource Definition to set v2 as storage 01/14/23 04:08:00.127
+STEP: Patching the custom resource while v2 is storage version 01/14/23 04:08:00.143
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+ test/e2e/framework/node/init/init.go:32
+Jan 14 04:08:00.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+ test/e2e/apimachinery/webhook.go:105
+[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+ tear down framework | framework.go:193
+STEP: Destroying namespace "webhook-7199" for this suite. 01/14/23 04:08:00.747
+STEP: Destroying namespace "webhook-7199-markers" for this suite. 01/14/23 04:08:00.754
+------------------------------
+• [SLOW TEST] [6.687 seconds]
+[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+test/e2e/apimachinery/framework.go:23
+ should mutate custom resource with different stored version [Conformance]
+ test/e2e/apimachinery/webhook.go:323
+
+ Begin Captured GinkgoWriter Output >>
+ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+ set up framework | framework.go:178
+ STEP: Creating a kubernetes client 01/14/23 04:07:54.072
+ Jan 14 04:07:54.072: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+ STEP: Building a namespace api object, basename webhook 01/14/23 04:07:54.073
+ STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:07:54.088
+ STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:07:54.09
+ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+ test/e2e/framework/metrics/init/init.go:31
+ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+ test/e2e/apimachinery/webhook.go:90
+ STEP: Setting up server cert 01/14/23 04:07:54.103
+ STEP: Create role binding to let webhook read extension-apiserver-authentication 01/14/23 04:07:54.501
+ STEP: Deploying the webhook pod 01/14/23 04:07:54.509
+ STEP: Wait for the deployment to be ready 01/14/23 04:07:54.519
+ Jan 14 04:07:54.526: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
+ STEP: Deploying the webhook service 01/14/23 04:07:56.537
+ STEP: Verifying the service has paired with the endpoint 01/14/23 04:07:56.547
+ Jan 14 04:07:57.547: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
+ [It] should mutate custom resource with different stored version [Conformance]
+ test/e2e/apimachinery/webhook.go:323
+ Jan 14 04:07:57.551: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+ STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8711-crds.webhook.example.com via the AdmissionRegistration API 01/14/23 04:07:58.063
+ STEP: Creating a custom resource while v1 is storage version 01/14/23 04:07:58.077
+ STEP: Patching Custom Resource Definition to set v2 as storage 01/14/23 04:08:00.127
+ STEP: Patching the custom resource while v2 is storage version 01/14/23 04:08:00.143
+ [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+ test/e2e/framework/node/init/init.go:32
+ Jan 14 04:08:00.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+ [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+ test/e2e/apimachinery/webhook.go:105
+ [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+ test/e2e/framework/metrics/init/init.go:33
+ [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+ dump namespaces | framework.go:196
+ [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
+ tear down framework | framework.go:193
+ STEP: Destroying namespace "webhook-7199" for this suite. 01/14/23 04:08:00.747
+ STEP: Destroying namespace "webhook-7199-markers" for this suite. 01/14/23 04:08:00.754
+ << End Captured GinkgoWriter Output
+------------------------------
+SSSSSSSSSSS
+------------------------------
+[sig-network] Service endpoints latency
+ should not be very high [Conformance]
+ test/e2e/network/service_latency.go:59
+[BeforeEach] [sig-network] Service endpoints latency
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 04:08:00.76
+Jan 14 04:08:00.760: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename svc-latency 01/14/23 04:08:00.761
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:08:00.804
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:08:00.807
+[BeforeEach] [sig-network] Service endpoints latency
+ test/e2e/framework/metrics/init/init.go:31
+[It] should not be very high [Conformance]
+ test/e2e/network/service_latency.go:59
+Jan 14 04:08:00.809: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: creating replication controller svc-latency-rc in namespace svc-latency-2940 01/14/23 04:08:00.81
+I0114 04:08:00.815694 25 runners.go:193] Created replication controller with name: svc-latency-rc, namespace: svc-latency-2940, replica count: 1
+I0114 04:08:01.867587 25 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
+Jan 14 04:08:01.978: INFO: Created: latency-svc-6dmc5
+Jan 14 04:08:01.985: INFO: Got endpoints: latency-svc-6dmc5 [17.317701ms]
+Jan 14 04:08:01.997: INFO: Created: latency-svc-t8lp4
+Jan 14 04:08:02.004: INFO: Got endpoints: latency-svc-t8lp4 [18.820812ms]
+Jan 14 04:08:02.006: INFO: Created: latency-svc-7p2ml
+Jan 14 04:08:02.014: INFO: Got endpoints: latency-svc-7p2ml [28.862896ms]
+Jan 14 04:08:02.015: INFO: Created: latency-svc-vc4p6
+Jan 14 04:08:02.020: INFO: Created: latency-svc-lxq7q
+Jan 14 04:08:02.023: INFO: Got endpoints: latency-svc-vc4p6 [37.654689ms]
+Jan 14 04:08:02.025: INFO: Got endpoints: latency-svc-lxq7q [38.800466ms]
+Jan 14 04:08:02.028: INFO: Created: latency-svc-vtdnc
+Jan 14 04:08:02.033: INFO: Got endpoints: latency-svc-vtdnc [46.866233ms]
+Jan 14 04:08:02.035: INFO: Created: latency-svc-d9gtf
+Jan 14 04:08:02.041: INFO: Got endpoints: latency-svc-d9gtf [54.603142ms]
+Jan 14 04:08:02.043: INFO: Created: latency-svc-hswmz
+Jan 14 04:08:02.049: INFO: Created: latency-svc-d4622
+Jan 14 04:08:02.050: INFO: Got endpoints: latency-svc-hswmz [63.107695ms]
+Jan 14 04:08:02.054: INFO: Got endpoints: latency-svc-d4622 [67.803886ms]
+Jan 14 04:08:02.058: INFO: Created: latency-svc-9rwn7
+Jan 14 04:08:02.063: INFO: Got endpoints: latency-svc-9rwn7 [75.999222ms]
+Jan 14 04:08:02.064: INFO: Created: latency-svc-cqwtd
+Jan 14 04:08:02.069: INFO: Got endpoints: latency-svc-cqwtd [82.027766ms]
+Jan 14 04:08:02.071: INFO: Created: latency-svc-qgxdr
+Jan 14 04:08:02.077: INFO: Got endpoints: latency-svc-qgxdr [90.4683ms]
+Jan 14 04:08:02.079: INFO: Created: latency-svc-b4sgs
+Jan 14 04:08:02.085: INFO: Got endpoints: latency-svc-b4sgs [98.060037ms]
+Jan 14 04:08:02.086: INFO: Created: latency-svc-8j4fd
+Jan 14 04:08:02.092: INFO: Got endpoints: latency-svc-8j4fd [104.909156ms]
+Jan 14 04:08:02.093: INFO: Created: latency-svc-zxvfz
+Jan 14 04:08:02.097: INFO: Got endpoints: latency-svc-zxvfz [110.046502ms]
+Jan 14 04:08:02.099: INFO: Created: latency-svc-f5zjt
+Jan 14 04:08:02.103: INFO: Got endpoints: latency-svc-f5zjt [115.826962ms]
+Jan 14 04:08:02.106: INFO: Created: latency-svc-tf2sn
+Jan 14 04:08:02.111: INFO: Got endpoints: latency-svc-tf2sn [106.518078ms]
+Jan 14 04:08:02.114: INFO: Created: latency-svc-qzccl
+Jan 14 04:08:02.120: INFO: Got endpoints: latency-svc-qzccl [105.74236ms]
+Jan 14 04:08:02.121: INFO: Created: latency-svc-gqhpw
+Jan 14 04:08:02.128: INFO: Got endpoints: latency-svc-gqhpw [104.905121ms]
+Jan 14 04:08:02.130: INFO: Created: latency-svc-dj2r7
+Jan 14 04:08:02.141: INFO: Created: latency-svc-5qsww
+Jan 14 04:08:02.141: INFO: Got endpoints: latency-svc-dj2r7 [115.765632ms]
+Jan 14 04:08:02.145: INFO: Got endpoints: latency-svc-5qsww [112.240917ms]
+Jan 14 04:08:02.147: INFO: Created: latency-svc-vtbpq
+Jan 14 04:08:02.153: INFO: Got endpoints: latency-svc-vtbpq [112.00814ms]
+Jan 14 04:08:02.154: INFO: Created: latency-svc-5rp2h
+Jan 14 04:08:02.162: INFO: Got endpoints: latency-svc-5rp2h [112.486556ms]
+Jan 14 04:08:02.162: INFO: Created: latency-svc-2fvg4
+Jan 14 04:08:02.170: INFO: Got endpoints: latency-svc-2fvg4 [115.716582ms]
+Jan 14 04:08:02.174: INFO: Created: latency-svc-zp8p5
+Jan 14 04:08:02.182: INFO: Got endpoints: latency-svc-zp8p5 [119.622995ms]
+Jan 14 04:08:02.183: INFO: Created: latency-svc-6krm9
+Jan 14 04:08:02.188: INFO: Got endpoints: latency-svc-6krm9 [118.665757ms]
+Jan 14 04:08:02.192: INFO: Created: latency-svc-nk6cf
+Jan 14 04:08:02.198: INFO: Got endpoints: latency-svc-nk6cf [120.545205ms]
+Jan 14 04:08:02.202: INFO: Created: latency-svc-n5z9b
+Jan 14 04:08:02.207: INFO: Got endpoints: latency-svc-n5z9b [121.938058ms]
+Jan 14 04:08:02.210: INFO: Created: latency-svc-cr57k
+Jan 14 04:08:02.217: INFO: Got endpoints: latency-svc-cr57k [125.13093ms]
+Jan 14 04:08:02.220: INFO: Created: latency-svc-p9cqh
+Jan 14 04:08:02.226: INFO: Got endpoints: latency-svc-p9cqh [128.867601ms]
+Jan 14 04:08:02.228: INFO: Created: latency-svc-kq2hr
+Jan 14 04:08:02.234: INFO: Got endpoints: latency-svc-kq2hr [130.838371ms]
+Jan 14 04:08:02.235: INFO: Created: latency-svc-ds49q
+Jan 14 04:08:02.243: INFO: Got endpoints: latency-svc-ds49q [131.609156ms]
+Jan 14 04:08:02.243: INFO: Created: latency-svc-zpgqv
+Jan 14 04:08:02.248: INFO: Got endpoints: latency-svc-zpgqv [128.461085ms]
+Jan 14 04:08:02.251: INFO: Created: latency-svc-hfg4z
+Jan 14 04:08:02.258: INFO: Got endpoints: latency-svc-hfg4z [129.240596ms]
+Jan 14 04:08:02.261: INFO: Created: latency-svc-q9ddz
+Jan 14 04:08:02.268: INFO: Got endpoints: latency-svc-q9ddz [126.658102ms]
+Jan 14 04:08:02.269: INFO: Created: latency-svc-h5wpp
+Jan 14 04:08:02.275: INFO: Created: latency-svc-px7f8
+Jan 14 04:08:02.275: INFO: Got endpoints: latency-svc-h5wpp [129.624162ms]
+Jan 14 04:08:02.280: INFO: Got endpoints: latency-svc-px7f8 [127.064691ms]
+Jan 14 04:08:02.283: INFO: Created: latency-svc-m78xw
+Jan 14 04:08:02.289: INFO: Got endpoints: latency-svc-m78xw [126.523672ms]
+Jan 14 04:08:02.290: INFO: Created: latency-svc-72q5b
+Jan 14 04:08:02.294: INFO: Got endpoints: latency-svc-72q5b [124.29819ms]
+Jan 14 04:08:02.296: INFO: Created: latency-svc-2258h
+Jan 14 04:08:02.300: INFO: Got endpoints: latency-svc-2258h [117.538082ms]
+Jan 14 04:08:02.303: INFO: Created: latency-svc-6xjkf
+Jan 14 04:08:02.315: INFO: Created: latency-svc-c6p68
+Jan 14 04:08:02.323: INFO: Got endpoints: latency-svc-6xjkf [135.296173ms]
+Jan 14 04:08:02.325: INFO: Got endpoints: latency-svc-c6p68 [127.526163ms]
+Jan 14 04:08:02.333: INFO: Created: latency-svc-hq547
+Jan 14 04:08:02.340: INFO: Got endpoints: latency-svc-hq547 [132.562501ms]
+Jan 14 04:08:02.342: INFO: Created: latency-svc-s8plj
+Jan 14 04:08:02.347: INFO: Got endpoints: latency-svc-s8plj [129.758022ms]
+Jan 14 04:08:02.348: INFO: Created: latency-svc-fz9bs
+Jan 14 04:08:02.355: INFO: Got endpoints: latency-svc-fz9bs [128.362636ms]
+Jan 14 04:08:02.357: INFO: Created: latency-svc-llnbr
+Jan 14 04:08:02.363: INFO: Got endpoints: latency-svc-llnbr [128.853775ms]
+Jan 14 04:08:02.366: INFO: Created: latency-svc-8l2fq
+Jan 14 04:08:02.372: INFO: Got endpoints: latency-svc-8l2fq [129.48057ms]
+Jan 14 04:08:02.372: INFO: Created: latency-svc-klgj8
+Jan 14 04:08:02.376: INFO: Created: latency-svc-fnjzd
+Jan 14 04:08:02.377: INFO: Got endpoints: latency-svc-klgj8 [128.899522ms]
+Jan 14 04:08:02.381: INFO: Got endpoints: latency-svc-fnjzd [122.893151ms]
+Jan 14 04:08:02.386: INFO: Created: latency-svc-pc5x8
+Jan 14 04:08:02.391: INFO: Got endpoints: latency-svc-pc5x8 [123.002508ms]
+Jan 14 04:08:02.392: INFO: Created: latency-svc-989bl
+Jan 14 04:08:02.399: INFO: Got endpoints: latency-svc-989bl [124.144439ms]
+Jan 14 04:08:02.401: INFO: Created: latency-svc-r8n2d
+Jan 14 04:08:02.407: INFO: Got endpoints: latency-svc-r8n2d [127.270577ms]
+Jan 14 04:08:02.408: INFO: Created: latency-svc-ngc84
+Jan 14 04:08:02.412: INFO: Got endpoints: latency-svc-ngc84 [123.287495ms]
+Jan 14 04:08:02.414: INFO: Created: latency-svc-rmz54
+Jan 14 04:08:02.419: INFO: Got endpoints: latency-svc-rmz54 [124.059845ms]
+Jan 14 04:08:02.421: INFO: Created: latency-svc-p6hjd
+Jan 14 04:08:02.428: INFO: Got endpoints: latency-svc-p6hjd [127.803603ms]
+Jan 14 04:08:02.428: INFO: Created: latency-svc-pqf48
+Jan 14 04:08:02.434: INFO: Got endpoints: latency-svc-pqf48 [111.157608ms]
+Jan 14 04:08:02.435: INFO: Created: latency-svc-frdj8
+Jan 14 04:08:02.440: INFO: Got endpoints: latency-svc-frdj8 [114.843137ms]
+Jan 14 04:08:02.441: INFO: Created: latency-svc-mzmgk
+Jan 14 04:08:02.445: INFO: Got endpoints: latency-svc-mzmgk [104.912767ms]
+Jan 14 04:08:02.447: INFO: Created: latency-svc-n88dj
+Jan 14 04:08:02.452: INFO: Got endpoints: latency-svc-n88dj [104.340912ms]
+Jan 14 04:08:02.454: INFO: Created: latency-svc-fnq6r
+Jan 14 04:08:02.459: INFO: Got endpoints: latency-svc-fnq6r [104.183152ms]
+Jan 14 04:08:02.461: INFO: Created: latency-svc-8f5zj
+Jan 14 04:08:02.465: INFO: Got endpoints: latency-svc-8f5zj [102.035347ms]
+Jan 14 04:08:02.469: INFO: Created: latency-svc-pw9pv
+Jan 14 04:08:02.476: INFO: Got endpoints: latency-svc-pw9pv [103.392043ms]
+Jan 14 04:08:02.477: INFO: Created: latency-svc-7vpnm
+Jan 14 04:08:02.481: INFO: Got endpoints: latency-svc-7vpnm [104.060792ms]
+Jan 14 04:08:02.482: INFO: Created: latency-svc-ws4jj
+Jan 14 04:08:02.487: INFO: Got endpoints: latency-svc-ws4jj [106.147976ms]
+Jan 14 04:08:02.491: INFO: Created: latency-svc-vdjwm
+Jan 14 04:08:02.497: INFO: Got endpoints: latency-svc-vdjwm [106.084135ms]
+Jan 14 04:08:02.498: INFO: Created: latency-svc-dhzmh
+Jan 14 04:08:02.505: INFO: Got endpoints: latency-svc-dhzmh [105.531681ms]
+Jan 14 04:08:02.505: INFO: Created: latency-svc-x6k7z
+Jan 14 04:08:02.512: INFO: Created: latency-svc-kc5n9
+Jan 14 04:08:02.512: INFO: Got endpoints: latency-svc-x6k7z [104.426084ms]
+Jan 14 04:08:02.518: INFO: Got endpoints: latency-svc-kc5n9 [105.928125ms]
+Jan 14 04:08:02.518: INFO: Created: latency-svc-v6r6w
+Jan 14 04:08:02.522: INFO: Got endpoints: latency-svc-v6r6w [103.75986ms]
+Jan 14 04:08:02.525: INFO: Created: latency-svc-zhr29
+Jan 14 04:08:02.529: INFO: Got endpoints: latency-svc-zhr29 [101.72371ms]
+Jan 14 04:08:02.531: INFO: Created: latency-svc-482ww
+Jan 14 04:08:02.538: INFO: Got endpoints: latency-svc-482ww [103.484464ms]
+Jan 14 04:08:02.539: INFO: Created: latency-svc-bjk45
+Jan 14 04:08:02.545: INFO: Got endpoints: latency-svc-bjk45 [104.407329ms]
+Jan 14 04:08:02.545: INFO: Created: latency-svc-2r9m2
+Jan 14 04:08:02.551: INFO: Got endpoints: latency-svc-2r9m2 [106.490278ms]
+Jan 14 04:08:02.551: INFO: Created: latency-svc-24zr4
+Jan 14 04:08:02.555: INFO: Got endpoints: latency-svc-24zr4 [103.285292ms]
+Jan 14 04:08:02.559: INFO: Created: latency-svc-jtpms
+Jan 14 04:08:02.567: INFO: Got endpoints: latency-svc-jtpms [108.051678ms]
+Jan 14 04:08:02.571: INFO: Created: latency-svc-8bj2q
+Jan 14 04:08:02.578: INFO: Got endpoints: latency-svc-8bj2q [112.322133ms]
+Jan 14 04:08:02.578: INFO: Created: latency-svc-8r8zr
+Jan 14 04:08:02.585: INFO: Got endpoints: latency-svc-8r8zr [109.172068ms]
+Jan 14 04:08:02.586: INFO: Created: latency-svc-vzcrh
+Jan 14 04:08:02.590: INFO: Got endpoints: latency-svc-vzcrh [108.184781ms]
+Jan 14 04:08:02.593: INFO: Created: latency-svc-wlr5q
+Jan 14 04:08:02.598: INFO: Got endpoints: latency-svc-wlr5q [110.940296ms]
+Jan 14 04:08:02.600: INFO: Created: latency-svc-mnrpx
+Jan 14 04:08:02.605: INFO: Got endpoints: latency-svc-mnrpx [107.829096ms]
+Jan 14 04:08:02.607: INFO: Created: latency-svc-p2fp4
+Jan 14 04:08:02.611: INFO: Got endpoints: latency-svc-p2fp4 [106.518276ms]
+Jan 14 04:08:02.613: INFO: Created: latency-svc-6dlr2
+Jan 14 04:08:02.619: INFO: Got endpoints: latency-svc-6dlr2 [107.670328ms]
+Jan 14 04:08:02.619: INFO: Created: latency-svc-w6lvx
+Jan 14 04:08:02.626: INFO: Got endpoints: latency-svc-w6lvx [107.697818ms]
+Jan 14 04:08:02.629: INFO: Created: latency-svc-s9fx8
+Jan 14 04:08:02.639: INFO: Got endpoints: latency-svc-s9fx8 [116.792886ms]
+Jan 14 04:08:02.642: INFO: Created: latency-svc-5rwgr
+Jan 14 04:08:02.647: INFO: Got endpoints: latency-svc-5rwgr [117.33319ms]
+Jan 14 04:08:02.649: INFO: Created: latency-svc-rk5tb
+Jan 14 04:08:02.658: INFO: Got endpoints: latency-svc-rk5tb [120.552509ms]
+Jan 14 04:08:02.660: INFO: Created: latency-svc-mh8rp
+Jan 14 04:08:02.666: INFO: Got endpoints: latency-svc-mh8rp [121.645756ms]
+Jan 14 04:08:02.668: INFO: Created: latency-svc-jmtlf
+Jan 14 04:08:02.673: INFO: Got endpoints: latency-svc-jmtlf [122.095625ms]
+Jan 14 04:08:02.677: INFO: Created: latency-svc-htbjv
+Jan 14 04:08:02.686: INFO: Created: latency-svc-bgqtl
+Jan 14 04:08:02.688: INFO: Got endpoints: latency-svc-htbjv [132.750513ms]
+Jan 14 04:08:02.692: INFO: Got endpoints: latency-svc-bgqtl [124.99535ms]
+Jan 14 04:08:02.695: INFO: Created: latency-svc-pd5qv
+Jan 14 04:08:02.700: INFO: Got endpoints: latency-svc-pd5qv [122.861124ms]
+Jan 14 04:08:02.702: INFO: Created: latency-svc-98rbq
+Jan 14 04:08:02.707: INFO: Got endpoints: latency-svc-98rbq [122.603942ms]
+Jan 14 04:08:02.708: INFO: Created: latency-svc-zp68t
+Jan 14 04:08:02.712: INFO: Got endpoints: latency-svc-zp68t [122.435533ms]
+Jan 14 04:08:02.714: INFO: Created: latency-svc-5q2f7
+Jan 14 04:08:02.720: INFO: Got endpoints: latency-svc-5q2f7 [122.258429ms]
+Jan 14 04:08:02.720: INFO: Created: latency-svc-2mszh
+Jan 14 04:08:02.727: INFO: Created: latency-svc-q99fv
+Jan 14 04:08:02.728: INFO: Got endpoints: latency-svc-2mszh [123.88441ms]
+Jan 14 04:08:02.733: INFO: Got endpoints: latency-svc-q99fv [121.832724ms]
+Jan 14 04:08:02.737: INFO: Created: latency-svc-qdzqp
+Jan 14 04:08:02.743: INFO: Got endpoints: latency-svc-qdzqp [124.082908ms]
+Jan 14 04:08:02.745: INFO: Created: latency-svc-nbkr8
+Jan 14 04:08:02.751: INFO: Got endpoints: latency-svc-nbkr8 [125.471546ms]
+Jan 14 04:08:02.755: INFO: Created: latency-svc-xnkmz
+Jan 14 04:08:02.760: INFO: Got endpoints: latency-svc-xnkmz [120.355691ms]
+Jan 14 04:08:02.762: INFO: Created: latency-svc-mbk8f
+Jan 14 04:08:02.767: INFO: Got endpoints: latency-svc-mbk8f [119.837468ms]
+Jan 14 04:08:02.769: INFO: Created: latency-svc-lkdnr
+Jan 14 04:08:02.774: INFO: Got endpoints: latency-svc-lkdnr [116.148492ms]
+Jan 14 04:08:02.777: INFO: Created: latency-svc-9pf6x
+Jan 14 04:08:02.785: INFO: Got endpoints: latency-svc-9pf6x [118.546186ms]
+Jan 14 04:08:02.786: INFO: Created: latency-svc-5l66m
+Jan 14 04:08:02.791: INFO: Got endpoints: latency-svc-5l66m [117.193891ms]
+Jan 14 04:08:02.792: INFO: Created: latency-svc-xl82m
+Jan 14 04:08:02.799: INFO: Got endpoints: latency-svc-xl82m [111.405005ms]
+Jan 14 04:08:02.800: INFO: Created: latency-svc-jt656
+Jan 14 04:08:02.811: INFO: Created: latency-svc-qf9bg
+Jan 14 04:08:02.812: INFO: Got endpoints: latency-svc-jt656 [119.604497ms]
+Jan 14 04:08:02.818: INFO: Got endpoints: latency-svc-qf9bg [117.204404ms]
+Jan 14 04:08:02.819: INFO: Created: latency-svc-sm52q
+Jan 14 04:08:02.823: INFO: Got endpoints: latency-svc-sm52q [115.624489ms]
+Jan 14 04:08:02.824: INFO: Created: latency-svc-skxr8
+Jan 14 04:08:02.828: INFO: Got endpoints: latency-svc-skxr8 [115.404217ms]
+Jan 14 04:08:02.830: INFO: Created: latency-svc-l5qfm
+Jan 14 04:08:02.837: INFO: Got endpoints: latency-svc-l5qfm [116.701515ms]
+Jan 14 04:08:02.840: INFO: Created: latency-svc-8n62j
+Jan 14 04:08:02.846: INFO: Got endpoints: latency-svc-8n62j [117.491505ms]
+Jan 14 04:08:02.849: INFO: Created: latency-svc-zccwb
+Jan 14 04:08:02.854: INFO: Got endpoints: latency-svc-zccwb [120.713689ms]
+Jan 14 04:08:02.856: INFO: Created: latency-svc-5p6x6
+Jan 14 04:08:02.861: INFO: Got endpoints: latency-svc-5p6x6 [117.704513ms]
+Jan 14 04:08:02.863: INFO: Created: latency-svc-96w2f
+Jan 14 04:08:02.868: INFO: Got endpoints: latency-svc-96w2f [116.441589ms]
+Jan 14 04:08:02.869: INFO: Created: latency-svc-56cnm
+Jan 14 04:08:02.872: INFO: Got endpoints: latency-svc-56cnm [112.706196ms]
+Jan 14 04:08:02.875: INFO: Created: latency-svc-gl4sc
+Jan 14 04:08:02.880: INFO: Got endpoints: latency-svc-gl4sc [113.132833ms]
+Jan 14 04:08:02.880: INFO: Created: latency-svc-2f55x
+Jan 14 04:08:02.887: INFO: Got endpoints: latency-svc-2f55x [112.428701ms]
+Jan 14 04:08:02.887: INFO: Created: latency-svc-rmrkk
+Jan 14 04:08:02.892: INFO: Got endpoints: latency-svc-rmrkk [107.086757ms]
+Jan 14 04:08:02.894: INFO: Created: latency-svc-h7s6h
+Jan 14 04:08:02.900: INFO: Created: latency-svc-j62hq
+Jan 14 04:08:02.900: INFO: Got endpoints: latency-svc-h7s6h [109.095351ms]
+Jan 14 04:08:02.906: INFO: Got endpoints: latency-svc-j62hq [106.838129ms]
+Jan 14 04:08:02.906: INFO: Created: latency-svc-5qjkn
+Jan 14 04:08:02.912: INFO: Created: latency-svc-djqlh
+Jan 14 04:08:02.913: INFO: Got endpoints: latency-svc-5qjkn [101.075225ms]
+Jan 14 04:08:02.920: INFO: Got endpoints: latency-svc-djqlh [102.006604ms]
+Jan 14 04:08:02.923: INFO: Created: latency-svc-zzx56
+Jan 14 04:08:02.926: INFO: Got endpoints: latency-svc-zzx56 [102.810024ms]
+Jan 14 04:08:02.928: INFO: Created: latency-svc-ss5gj
+Jan 14 04:08:02.933: INFO: Got endpoints: latency-svc-ss5gj [105.418363ms]
+Jan 14 04:08:02.933: INFO: Created: latency-svc-rmlbg
+Jan 14 04:08:02.942: INFO: Created: latency-svc-9vjgn
+Jan 14 04:08:02.943: INFO: Got endpoints: latency-svc-rmlbg [106.134345ms]
+Jan 14 04:08:02.950: INFO: Created: latency-svc-lfzfk
+Jan 14 04:08:02.950: INFO: Got endpoints: latency-svc-9vjgn [104.42926ms]
+Jan 14 04:08:02.963: INFO: Got endpoints: latency-svc-lfzfk [109.085913ms]
+Jan 14 04:08:02.963: INFO: Created: latency-svc-lrwnk
+Jan 14 04:08:02.970: INFO: Got endpoints: latency-svc-lrwnk [109.307595ms]
+Jan 14 04:08:02.972: INFO: Created: latency-svc-ncn4z
+Jan 14 04:08:02.979: INFO: Got endpoints: latency-svc-ncn4z [111.308897ms]
+Jan 14 04:08:02.980: INFO: Created: latency-svc-tg22g
+Jan 14 04:08:02.994: INFO: Got endpoints: latency-svc-tg22g [121.507186ms]
+Jan 14 04:08:02.994: INFO: Created: latency-svc-vq727
+Jan 14 04:08:02.994: INFO: Created: latency-svc-pcnkp
+Jan 14 04:08:02.997: INFO: Created: latency-svc-kwqc7
+Jan 14 04:08:03.000: INFO: Got endpoints: latency-svc-pcnkp [112.89559ms]
+Jan 14 04:08:03.002: INFO: Got endpoints: latency-svc-vq727 [122.109327ms]
+Jan 14 04:08:03.004: INFO: Got endpoints: latency-svc-kwqc7 [112.298876ms]
+Jan 14 04:08:03.004: INFO: Created: latency-svc-d5s99
+Jan 14 04:08:03.008: INFO: Got endpoints: latency-svc-d5s99 [108.043392ms]
+Jan 14 04:08:03.011: INFO: Created: latency-svc-qhdvc
+Jan 14 04:08:03.014: INFO: Got endpoints: latency-svc-qhdvc [107.522089ms]
+Jan 14 04:08:03.015: INFO: Created: latency-svc-tk69s
+Jan 14 04:08:03.019: INFO: Got endpoints: latency-svc-tk69s [106.117273ms]
+Jan 14 04:08:03.025: INFO: Created: latency-svc-9p447
+Jan 14 04:08:03.028: INFO: Got endpoints: latency-svc-9p447 [107.887169ms]
+Jan 14 04:08:03.033: INFO: Created: latency-svc-t99f9
+Jan 14 04:08:03.038: INFO: Got endpoints: latency-svc-t99f9 [111.903023ms]
+Jan 14 04:08:03.047: INFO: Created: latency-svc-x2pgd
+Jan 14 04:08:03.047: INFO: Got endpoints: latency-svc-x2pgd [114.136125ms]
+Jan 14 04:08:03.047: INFO: Created: latency-svc-mttfg
+Jan 14 04:08:03.051: INFO: Got endpoints: latency-svc-mttfg [108.34021ms]
+Jan 14 04:08:03.053: INFO: Created: latency-svc-wxmr5
+Jan 14 04:08:03.057: INFO: Got endpoints: latency-svc-wxmr5 [106.484612ms]
+Jan 14 04:08:03.061: INFO: Created: latency-svc-vxjsm
+Jan 14 04:08:03.064: INFO: Created: latency-svc-477dn
+Jan 14 04:08:03.065: INFO: Got endpoints: latency-svc-vxjsm [101.379834ms]
+Jan 14 04:08:03.070: INFO: Got endpoints: latency-svc-477dn [99.760917ms]
+Jan 14 04:08:03.073: INFO: Created: latency-svc-pvvtm
+Jan 14 04:08:03.081: INFO: Got endpoints: latency-svc-pvvtm [102.263505ms]
+Jan 14 04:08:03.083: INFO: Created: latency-svc-krhss
+Jan 14 04:08:03.087: INFO: Got endpoints: latency-svc-krhss [92.705593ms]
+Jan 14 04:08:03.090: INFO: Created: latency-svc-pd78w
+Jan 14 04:08:03.095: INFO: Got endpoints: latency-svc-pd78w [94.912685ms]
+Jan 14 04:08:03.095: INFO: Created: latency-svc-wxdh7
+Jan 14 04:08:03.102: INFO: Got endpoints: latency-svc-wxdh7 [100.394752ms]
+Jan 14 04:08:03.103: INFO: Created: latency-svc-vfvfn
+Jan 14 04:08:03.108: INFO: Got endpoints: latency-svc-vfvfn [103.69192ms]
+Jan 14 04:08:03.110: INFO: Created: latency-svc-n2txv
+Jan 14 04:08:03.116: INFO: Got endpoints: latency-svc-n2txv [107.715342ms]
+Jan 14 04:08:03.116: INFO: Created: latency-svc-k925m
+Jan 14 04:08:03.120: INFO: Got endpoints: latency-svc-k925m [106.265559ms]
+Jan 14 04:08:03.124: INFO: Created: latency-svc-x24qn
+Jan 14 04:08:03.128: INFO: Got endpoints: latency-svc-x24qn [109.357198ms]
+Jan 14 04:08:03.133: INFO: Created: latency-svc-4vjzh
+Jan 14 04:08:03.137: INFO: Got endpoints: latency-svc-4vjzh [109.142349ms]
+Jan 14 04:08:03.137: INFO: Created: latency-svc-rj8c4
+Jan 14 04:08:03.145: INFO: Got endpoints: latency-svc-rj8c4 [107.080154ms]
+Jan 14 04:08:03.145: INFO: Created: latency-svc-bpzv7
+Jan 14 04:08:03.150: INFO: Got endpoints: latency-svc-bpzv7 [102.402238ms]
+Jan 14 04:08:03.151: INFO: Created: latency-svc-7x479
+Jan 14 04:08:03.157: INFO: Got endpoints: latency-svc-7x479 [105.288932ms]
+Jan 14 04:08:03.159: INFO: Created: latency-svc-m9jr4
+Jan 14 04:08:03.163: INFO: Got endpoints: latency-svc-m9jr4 [105.814233ms]
+Jan 14 04:08:03.166: INFO: Created: latency-svc-rsskx
+Jan 14 04:08:03.170: INFO: Got endpoints: latency-svc-rsskx [105.487045ms]
+Jan 14 04:08:03.173: INFO: Created: latency-svc-hf46p
+Jan 14 04:08:03.177: INFO: Got endpoints: latency-svc-hf46p [106.605772ms]
+Jan 14 04:08:03.180: INFO: Created: latency-svc-bzdg2
+Jan 14 04:08:03.185: INFO: Got endpoints: latency-svc-bzdg2 [103.884037ms]
+Jan 14 04:08:03.187: INFO: Created: latency-svc-bw4bj
+Jan 14 04:08:03.191: INFO: Got endpoints: latency-svc-bw4bj [104.695842ms]
+Jan 14 04:08:03.193: INFO: Created: latency-svc-k829n
+Jan 14 04:08:03.199: INFO: Got endpoints: latency-svc-k829n [104.366107ms]
+Jan 14 04:08:03.199: INFO: Created: latency-svc-xz4sj
+Jan 14 04:08:03.203: INFO: Got endpoints: latency-svc-xz4sj [100.530377ms]
+Jan 14 04:08:03.205: INFO: Created: latency-svc-4fbg4
+Jan 14 04:08:03.210: INFO: Got endpoints: latency-svc-4fbg4 [101.795543ms]
+Jan 14 04:08:03.210: INFO: Created: latency-svc-l8lrq
+Jan 14 04:08:03.216: INFO: Got endpoints: latency-svc-l8lrq [100.290237ms]
+Jan 14 04:08:03.220: INFO: Created: latency-svc-7x7q2
+Jan 14 04:08:03.224: INFO: Got endpoints: latency-svc-7x7q2 [104.299567ms]
+Jan 14 04:08:03.228: INFO: Created: latency-svc-8jzch
+Jan 14 04:08:03.234: INFO: Got endpoints: latency-svc-8jzch [105.409746ms]
+Jan 14 04:08:03.235: INFO: Created: latency-svc-g2rpj
+Jan 14 04:08:03.240: INFO: Got endpoints: latency-svc-g2rpj [103.445164ms]
+Jan 14 04:08:03.242: INFO: Created: latency-svc-g9sqc
+Jan 14 04:08:03.248: INFO: Created: latency-svc-kh6w8
+Jan 14 04:08:03.248: INFO: Got endpoints: latency-svc-g9sqc [102.881181ms]
+Jan 14 04:08:03.252: INFO: Got endpoints: latency-svc-kh6w8 [102.251706ms]
+Jan 14 04:08:03.254: INFO: Created: latency-svc-9xrkq
+Jan 14 04:08:03.259: INFO: Got endpoints: latency-svc-9xrkq [102.004436ms]
+Jan 14 04:08:03.261: INFO: Created: latency-svc-74wnh
+Jan 14 04:08:03.266: INFO: Got endpoints: latency-svc-74wnh [102.826838ms]
+Jan 14 04:08:03.268: INFO: Created: latency-svc-4h466
+Jan 14 04:08:03.273: INFO: Got endpoints: latency-svc-4h466 [103.037996ms]
+Jan 14 04:08:03.274: INFO: Created: latency-svc-56k2h
+Jan 14 04:08:03.279: INFO: Got endpoints: latency-svc-56k2h [102.601093ms]
+Jan 14 04:08:03.282: INFO: Created: latency-svc-7jcwh
+Jan 14 04:08:03.287: INFO: Got endpoints: latency-svc-7jcwh [101.9689ms]
+Jan 14 04:08:03.291: INFO: Created: latency-svc-rlrm8
+Jan 14 04:08:03.295: INFO: Created: latency-svc-nfvxm
+Jan 14 04:08:03.299: INFO: Got endpoints: latency-svc-rlrm8 [108.140387ms]
+Jan 14 04:08:03.300: INFO: Got endpoints: latency-svc-nfvxm [100.904619ms]
+Jan 14 04:08:03.306: INFO: Created: latency-svc-j9scw
+Jan 14 04:08:03.311: INFO: Got endpoints: latency-svc-j9scw [108.151781ms]
+Jan 14 04:08:03.314: INFO: Created: latency-svc-j6cpt
+Jan 14 04:08:03.319: INFO: Got endpoints: latency-svc-j6cpt [108.925082ms]
+Jan 14 04:08:03.322: INFO: Created: latency-svc-jdc5p
+Jan 14 04:08:03.327: INFO: Got endpoints: latency-svc-jdc5p [110.991353ms]
+Jan 14 04:08:03.327: INFO: Created: latency-svc-slr7b
+Jan 14 04:08:03.332: INFO: Got endpoints: latency-svc-slr7b [107.640565ms]
+Jan 14 04:08:03.333: INFO: Created: latency-svc-5hcwd
+Jan 14 04:08:03.340: INFO: Got endpoints: latency-svc-5hcwd [106.726349ms]
+Jan 14 04:08:03.341: INFO: Created: latency-svc-8fnfx
+Jan 14 04:08:03.347: INFO: Got endpoints: latency-svc-8fnfx [106.319085ms]
+Jan 14 04:08:03.347: INFO: Created: latency-svc-ls2cm
+Jan 14 04:08:03.353: INFO: Got endpoints: latency-svc-ls2cm [105.4689ms]
+Jan 14 04:08:03.354: INFO: Created: latency-svc-4nww8
+Jan 14 04:08:03.358: INFO: Got endpoints: latency-svc-4nww8 [106.282387ms]
+Jan 14 04:08:03.359: INFO: Created: latency-svc-86tpp
+Jan 14 04:08:03.364: INFO: Got endpoints: latency-svc-86tpp [105.416772ms]
+Jan 14 04:08:03.369: INFO: Created: latency-svc-9ntv2
+Jan 14 04:08:03.373: INFO: Got endpoints: latency-svc-9ntv2 [107.20565ms]
+Jan 14 04:08:03.377: INFO: Created: latency-svc-8pp2w
+Jan 14 04:08:03.380: INFO: Got endpoints: latency-svc-8pp2w [107.001539ms]
+Jan 14 04:08:03.385: INFO: Created: latency-svc-hjtd2
+Jan 14 04:08:03.390: INFO: Got endpoints: latency-svc-hjtd2 [110.713872ms]
+Jan 14 04:08:03.391: INFO: Created: latency-svc-n57hd
+Jan 14 04:08:03.395: INFO: Created: latency-svc-6fxq7
+Jan 14 04:08:03.396: INFO: Got endpoints: latency-svc-n57hd [108.89138ms]
+Jan 14 04:08:03.402: INFO: Got endpoints: latency-svc-6fxq7 [102.567358ms]
+Jan 14 04:08:03.403: INFO: Created: latency-svc-cwcwp
+Jan 14 04:08:03.407: INFO: Got endpoints: latency-svc-cwcwp [107.149202ms]
+Jan 14 04:08:03.410: INFO: Created: latency-svc-5t2wt
+Jan 14 04:08:03.414: INFO: Got endpoints: latency-svc-5t2wt [103.264716ms]
+Jan 14 04:08:03.418: INFO: Created: latency-svc-bbbl4
+Jan 14 04:08:03.423: INFO: Got endpoints: latency-svc-bbbl4 [103.701284ms]
+Jan 14 04:08:03.425: INFO: Created: latency-svc-452jc
+Jan 14 04:08:03.429: INFO: Got endpoints: latency-svc-452jc [102.444816ms]
+Jan 14 04:08:03.431: INFO: Created: latency-svc-7wb66
+Jan 14 04:08:03.437: INFO: Created: latency-svc-b2jrp
+Jan 14 04:08:03.438: INFO: Got endpoints: latency-svc-7wb66 [106.337645ms]
+Jan 14 04:08:03.443: INFO: Got endpoints: latency-svc-b2jrp [102.896679ms]
+Jan 14 04:08:03.445: INFO: Created: latency-svc-9mtqp
+Jan 14 04:08:03.450: INFO: Got endpoints: latency-svc-9mtqp [103.336761ms]
+Jan 14 04:08:03.454: INFO: Created: latency-svc-qq6n4
+Jan 14 04:08:03.460: INFO: Got endpoints: latency-svc-qq6n4 [106.501415ms]
+Jan 14 04:08:03.460: INFO: Created: latency-svc-cq7fg
+Jan 14 04:08:03.465: INFO: Got endpoints: latency-svc-cq7fg [106.576559ms]
+Jan 14 04:08:03.469: INFO: Created: latency-svc-pph5d
+Jan 14 04:08:03.474: INFO: Got endpoints: latency-svc-pph5d [110.161629ms]
+Jan 14 04:08:03.476: INFO: Created: latency-svc-f442r
+Jan 14 04:08:03.481: INFO: Got endpoints: latency-svc-f442r [108.482642ms]
+Jan 14 04:08:03.484: INFO: Created: latency-svc-v8ssf
+Jan 14 04:08:03.490: INFO: Got endpoints: latency-svc-v8ssf [109.745768ms]
+Jan 14 04:08:03.490: INFO: Latencies: [18.820812ms 28.862896ms 37.654689ms 38.800466ms 46.866233ms 54.603142ms 63.107695ms 67.803886ms 75.999222ms 82.027766ms 90.4683ms 92.705593ms 94.912685ms 98.060037ms 99.760917ms 100.290237ms 100.394752ms 100.530377ms 100.904619ms 101.075225ms 101.379834ms 101.72371ms 101.795543ms 101.9689ms 102.004436ms 102.006604ms 102.035347ms 102.251706ms 102.263505ms 102.402238ms 102.444816ms 102.567358ms 102.601093ms 102.810024ms 102.826838ms 102.881181ms 102.896679ms 103.037996ms 103.264716ms 103.285292ms 103.336761ms 103.392043ms 103.445164ms 103.484464ms 103.69192ms 103.701284ms 103.75986ms 103.884037ms 104.060792ms 104.183152ms 104.299567ms 104.340912ms 104.366107ms 104.407329ms 104.426084ms 104.42926ms 104.695842ms 104.905121ms 104.909156ms 104.912767ms 105.288932ms 105.409746ms 105.416772ms 105.418363ms 105.4689ms 105.487045ms 105.531681ms 105.74236ms 105.814233ms 105.928125ms 106.084135ms 106.117273ms 106.134345ms 106.147976ms 106.265559ms 106.282387ms 106.319085ms 106.337645ms 106.484612ms 106.490278ms 106.501415ms 106.518078ms 106.518276ms 106.576559ms 106.605772ms 106.726349ms 106.838129ms 107.001539ms 107.080154ms 107.086757ms 107.149202ms 107.20565ms 107.522089ms 107.640565ms 107.670328ms 107.697818ms 107.715342ms 107.829096ms 107.887169ms 108.043392ms 108.051678ms 108.140387ms 108.151781ms 108.184781ms 108.34021ms 108.482642ms 108.89138ms 108.925082ms 109.085913ms 109.095351ms 109.142349ms 109.172068ms 109.307595ms 109.357198ms 109.745768ms 110.046502ms 110.161629ms 110.713872ms 110.940296ms 110.991353ms 111.157608ms 111.308897ms 111.405005ms 111.903023ms 112.00814ms 112.240917ms 112.298876ms 112.322133ms 112.428701ms 112.486556ms 112.706196ms 112.89559ms 113.132833ms 114.136125ms 114.843137ms 115.404217ms 115.624489ms 115.716582ms 115.765632ms 115.826962ms 116.148492ms 116.441589ms 116.701515ms 116.792886ms 117.193891ms 117.204404ms 117.33319ms 117.491505ms 117.538082ms 117.704513ms 118.546186ms 118.665757ms 119.604497ms 119.622995ms 119.837468ms 120.355691ms 120.545205ms 120.552509ms 120.713689ms 121.507186ms 121.645756ms 121.832724ms 121.938058ms 122.095625ms 122.109327ms 122.258429ms 122.435533ms 122.603942ms 122.861124ms 122.893151ms 123.002508ms 123.287495ms 123.88441ms 124.059845ms 124.082908ms 124.144439ms 124.29819ms 124.99535ms 125.13093ms 125.471546ms 126.523672ms 126.658102ms 127.064691ms 127.270577ms 127.526163ms 127.803603ms 128.362636ms 128.461085ms 128.853775ms 128.867601ms 128.899522ms 129.240596ms 129.48057ms 129.624162ms 129.758022ms 130.838371ms 131.609156ms 132.562501ms 132.750513ms 135.296173ms]
+Jan 14 04:08:03.490: INFO: 50 %ile: 108.051678ms
+Jan 14 04:08:03.490: INFO: 90 %ile: 126.523672ms
+Jan 14 04:08:03.490: INFO: 99 %ile: 132.750513ms
+Jan 14 04:08:03.490: INFO: Total sample count: 200
+[AfterEach] [sig-network] Service endpoints latency
+ test/e2e/framework/node/init/init.go:32
+Jan 14 04:08:03.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-network] Service endpoints latency
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-network] Service endpoints latency
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-network] Service endpoints latency
+ tear down framework | framework.go:193
+STEP: Destroying namespace "svc-latency-2940" for this suite. 
+------------------------------
+• [2.743 seconds]
+[sig-network] Service endpoints latency
+test/e2e/network/common/framework.go:23
+ should not be very high [Conformance]
+ test/e2e/network/service_latency.go:59
+------------------------------
+SSSSSSSSSSS
+------------------------------
+[sig-api-machinery] Garbage collector
+ should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
+ test/e2e/apimachinery/garbage_collector.go:650
+[BeforeEach] [sig-api-machinery] Garbage collector
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 04:08:03.503
+Jan 14 04:08:03.503: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename gc 01/14/23 04:08:03.504
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:08:03.52
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:08:03.523
+[BeforeEach] [sig-api-machinery] Garbage collector
+ test/e2e/framework/metrics/init/init.go:31
+[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
+ test/e2e/apimachinery/garbage_collector.go:650
+STEP: create the rc 01/14/23 04:08:03.53
+STEP: delete the rc 01/14/23 04:08:08.539
+STEP: wait for the rc to be deleted 01/14/23 04:08:08.547
+STEP: Gathering metrics 01/14/23 04:08:09.553
+Jan 14 04:08:09.574: INFO: Waiting up to 5m0s for pod "kube-controller-manager-10.0.1.231" in namespace "kube-system" to be "running and ready"
+Jan 14 04:08:09.578: INFO: Pod "kube-controller-manager-10.0.1.231": Phase="Running", Reason="", readiness=true. Elapsed: 3.639987ms
+Jan 14 04:08:09.578: INFO: The phase of Pod kube-controller-manager-10.0.1.231 is Running (Ready = true)
+Jan 14 04:08:09.578: INFO: Pod "kube-controller-manager-10.0.1.231" satisfied condition "running and ready"
+Jan 14 04:08:09.628: INFO: For apiserver_request_total:
+For apiserver_request_latency_seconds:
+For apiserver_init_events_total:
+For garbage_collector_attempt_to_delete_queue_latency:
+For garbage_collector_attempt_to_delete_work_duration:
+For garbage_collector_attempt_to_orphan_queue_latency:
+For garbage_collector_attempt_to_orphan_work_duration:
+For garbage_collector_dirty_processing_latency_microseconds:
+For garbage_collector_event_processing_latency_microseconds:
+For garbage_collector_graph_changes_queue_latency:
+For garbage_collector_graph_changes_work_duration:
+For garbage_collector_orphan_processing_latency_microseconds:
+For namespace_queue_latency:
+For namespace_queue_latency_sum:
+For namespace_queue_latency_count:
+For namespace_retries:
+For namespace_work_duration:
+For namespace_work_duration_sum:
+For namespace_work_duration_count:
+For function_duration_seconds:
+For errors_total:
+For evicted_pods_total:
+
+[AfterEach] [sig-api-machinery] Garbage collector
+ test/e2e/framework/node/init/init.go:32
+Jan 14 04:08:09.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-api-machinery] Garbage collector
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-api-machinery] Garbage collector
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-api-machinery] Garbage collector
+ tear down framework | framework.go:193
+STEP: Destroying namespace "gc-2450" for this suite. 01/14/23 04:08:09.632
+------------------------------
+• [SLOW TEST] [6.135 seconds]
+[sig-api-machinery] Garbage collector
+test/e2e/apimachinery/framework.go:23
+ should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
+ test/e2e/apimachinery/garbage_collector.go:650
+------------------------------
+SSSSS
+------------------------------
+[sig-api-machinery] ResourceQuota
+ should verify ResourceQuota with terminating scopes. [Conformance]
+ test/e2e/apimachinery/resource_quota.go:690
+[BeforeEach] [sig-api-machinery] ResourceQuota
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 04:08:09.639
+Jan 14 04:08:09.639: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename resourcequota 01/14/23 04:08:09.64
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:08:09.656
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:08:09.658
+[BeforeEach] [sig-api-machinery] ResourceQuota
+ test/e2e/framework/metrics/init/init.go:31
+[It] should verify ResourceQuota with terminating scopes. [Conformance]
+ test/e2e/apimachinery/resource_quota.go:690
+STEP: Creating a ResourceQuota with terminating scope 01/14/23 04:08:09.66
+STEP: Ensuring ResourceQuota status is calculated 01/14/23 04:08:09.666
+STEP: Creating a ResourceQuota with not terminating scope 01/14/23 04:08:11.671
+STEP: Ensuring ResourceQuota status is calculated 01/14/23 04:08:11.675
+STEP: Creating a long running pod 01/14/23 04:08:13.679
+STEP: Ensuring resource quota with not terminating scope captures the pod usage 01/14/23 04:08:13.694
+STEP: Ensuring resource quota with terminating scope ignored the pod usage 01/14/23 04:08:15.698
+STEP: Deleting the pod 01/14/23 04:08:17.701
+STEP: Ensuring resource quota status released the pod usage 01/14/23 04:08:17.718
+STEP: Creating a terminating pod 01/14/23 04:08:19.722
+STEP: Ensuring resource quota with terminating scope captures the pod usage 01/14/23 04:08:19.734
+STEP: Ensuring resource quota with not terminating scope ignored the pod usage 01/14/23 04:08:21.739
+STEP: Deleting the pod 01/14/23 04:08:23.743
+STEP: Ensuring resource quota status released the pod usage 01/14/23 04:08:23.756
+[AfterEach] [sig-api-machinery] ResourceQuota
+ test/e2e/framework/node/init/init.go:32
+Jan 14 04:08:25.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota
+ tear down framework | framework.go:193
+STEP: Destroying namespace "resourcequota-4949" for this suite. 01/14/23 04:08:25.765
+------------------------------
+• [SLOW TEST] [16.132 seconds]
+[sig-api-machinery] ResourceQuota
+test/e2e/apimachinery/framework.go:23
+ should verify ResourceQuota with terminating scopes. [Conformance]
+ test/e2e/apimachinery/resource_quota.go:690
+------------------------------
+SSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-node] PodTemplates
+ should replace a pod template [Conformance]
+ test/e2e/common/node/podtemplates.go:176
+[BeforeEach] [sig-node] PodTemplates
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 04:08:25.771
+Jan 14 04:08:25.772: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename podtemplate 01/14/23 04:08:25.772
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:08:25.787
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:08:25.79
+[BeforeEach] [sig-node] PodTemplates
+ test/e2e/framework/metrics/init/init.go:31
+[It] should replace a pod template [Conformance]
+ test/e2e/common/node/podtemplates.go:176
+STEP: Create a pod template 01/14/23 04:08:25.792
+STEP: Replace a pod template 01/14/23 04:08:25.796
+Jan 14 04:08:25.804: INFO: Found updated podtemplate annotation: "true"
+
+[AfterEach] [sig-node] PodTemplates
+ test/e2e/framework/node/init/init.go:32
+Jan 14 04:08:25.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-node] PodTemplates
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-node] PodTemplates
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-node] PodTemplates
+ tear down framework | framework.go:193
+STEP: Destroying namespace "podtemplate-4656" for this suite. 01/14/23 04:08:25.809
+------------------------------
+• [0.044 seconds]
+[sig-node] PodTemplates
+test/e2e/common/node/framework.go:23
+ should replace a pod template [Conformance]
+ test/e2e/common/node/podtemplates.go:176
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-node] Pods
+ should support remote command execution over websockets [NodeConformance] [Conformance]
+ test/e2e/common/node/pods.go:536
+[BeforeEach] [sig-node] Pods
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 04:08:25.816
+Jan 14 04:08:25.816: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename pods 01/14/23 04:08:25.817
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:08:25.833
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:08:25.835
+[BeforeEach] [sig-node] Pods
+ test/e2e/framework/metrics/init/init.go:31
+[BeforeEach] [sig-node] Pods
+ test/e2e/common/node/pods.go:194
+[It] should support remote command execution over websockets [NodeConformance] [Conformance]
+ test/e2e/common/node/pods.go:536
+Jan 14 04:08:25.838: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: creating the pod 01/14/23 04:08:25.838
+STEP: submitting the pod to kubernetes 01/14/23 04:08:25.838
+Jan 14 04:08:25.847: INFO: Waiting up to 5m0s for pod "pod-exec-websocket-1e74bb82-e4fb-4718-8f75-722f509560d8" in namespace "pods-5777" to be "running and ready"
+Jan 14 04:08:25.850: INFO: Pod "pod-exec-websocket-1e74bb82-e4fb-4718-8f75-722f509560d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.566986ms
+Jan 14 04:08:25.850: INFO: The phase of Pod pod-exec-websocket-1e74bb82-e4fb-4718-8f75-722f509560d8 is Pending, waiting for it to be Running (with Ready = true)
+Jan 14 04:08:27.854: INFO: Pod "pod-exec-websocket-1e74bb82-e4fb-4718-8f75-722f509560d8": Phase="Running", Reason="", readiness=true. Elapsed: 2.006881342s
+Jan 14 04:08:27.854: INFO: The phase of Pod pod-exec-websocket-1e74bb82-e4fb-4718-8f75-722f509560d8 is Running (Ready = true)
+Jan 14 04:08:27.854: INFO: Pod "pod-exec-websocket-1e74bb82-e4fb-4718-8f75-722f509560d8" satisfied condition "running and ready"
+[AfterEach] [sig-node] Pods
+ test/e2e/framework/node/init/init.go:32
+Jan 14 04:08:27.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-node] Pods
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-node] Pods
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-node] Pods
+ tear down framework | framework.go:193
+STEP: Destroying namespace "pods-5777" for this suite. 01/14/23 04:08:27.927
+------------------------------
+• [2.118 seconds]
+[sig-node] Pods
+test/e2e/common/node/framework.go:23
+ should support remote command execution over websockets [NodeConformance] [Conformance]
+ test/e2e/common/node/pods.go:536
01/14/23 04:08:27.927 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should update labels on modification [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:130 +[BeforeEach] [sig-storage] Downward API volume + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:08:27.935 +Jan 14 04:08:27.935: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename downward-api 01/14/23 04:08:27.936 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:08:27.951 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:08:27.953 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 +[It] should update labels on modification [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:130 +STEP: Creating the pod 01/14/23 04:08:27.955 +Jan 14 04:08:27.964: INFO: Waiting up to 5m0s for pod "labelsupdatec0398b54-a31e-485e-b463-2cc287d7a513" in namespace "downward-api-2407" to be "running and ready" +Jan 14 04:08:27.967: INFO: Pod "labelsupdatec0398b54-a31e-485e-b463-2cc287d7a513": Phase="Pending", Reason="", readiness=false. Elapsed: 2.630095ms +Jan 14 04:08:27.967: INFO: The phase of Pod labelsupdatec0398b54-a31e-485e-b463-2cc287d7a513 is Pending, waiting for it to be Running (with Ready = true) +Jan 14 04:08:29.972: INFO: Pod "labelsupdatec0398b54-a31e-485e-b463-2cc287d7a513": Phase="Running", Reason="", readiness=true. Elapsed: 2.007840981s +Jan 14 04:08:29.972: INFO: The phase of Pod labelsupdatec0398b54-a31e-485e-b463-2cc287d7a513 is Running (Ready = true) +Jan 14 04:08:29.972: INFO: Pod "labelsupdatec0398b54-a31e-485e-b463-2cc287d7a513" satisfied condition "running and ready" +Jan 14 04:08:30.502: INFO: Successfully updated pod "labelsupdatec0398b54-a31e-485e-b463-2cc287d7a513" +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 +Jan 14 04:08:34.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 +STEP: Destroying namespace "downward-api-2407" for this suite. 
01/14/23 04:08:34.532 +------------------------------ +• [SLOW TEST] [6.603 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should update labels on modification [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:130 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Downward API volume + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:08:27.935 + Jan 14 04:08:27.935: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename downward-api 01/14/23 04:08:27.936 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:08:27.951 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:08:27.953 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 + [It] should update labels on modification [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:130 + STEP: Creating the pod 01/14/23 04:08:27.955 + Jan 14 04:08:27.964: INFO: Waiting up to 5m0s for pod "labelsupdatec0398b54-a31e-485e-b463-2cc287d7a513" in namespace "downward-api-2407" to be "running and ready" + Jan 14 04:08:27.967: INFO: Pod "labelsupdatec0398b54-a31e-485e-b463-2cc287d7a513": Phase="Pending", Reason="", readiness=false. Elapsed: 2.630095ms + Jan 14 04:08:27.967: INFO: The phase of Pod labelsupdatec0398b54-a31e-485e-b463-2cc287d7a513 is Pending, waiting for it to be Running (with Ready = true) + Jan 14 04:08:29.972: INFO: Pod "labelsupdatec0398b54-a31e-485e-b463-2cc287d7a513": Phase="Running", Reason="", readiness=true. Elapsed: 2.007840981s + Jan 14 04:08:29.972: INFO: The phase of Pod labelsupdatec0398b54-a31e-485e-b463-2cc287d7a513 is Running (Ready = true) + Jan 14 04:08:29.972: INFO: Pod "labelsupdatec0398b54-a31e-485e-b463-2cc287d7a513" satisfied condition "running and ready" + Jan 14 04:08:30.502: INFO: Successfully updated pod "labelsupdatec0398b54-a31e-485e-b463-2cc287d7a513" + [AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 + Jan 14 04:08:34.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 + STEP: Destroying namespace "downward-api-2407" for this suite. 
01/14/23 04:08:34.532 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should update labels on modification [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:130 +[BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:08:34.538 +Jan 14 04:08:34.538: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename projected 01/14/23 04:08:34.539 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:08:34.553 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:08:34.555 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 +[It] should update labels on modification [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:130 +STEP: Creating the pod 01/14/23 04:08:34.557 +Jan 14 04:08:34.567: INFO: Waiting up to 5m0s for pod "labelsupdate8306589d-b991-476c-a5ba-ff79a8851b58" in namespace "projected-8468" to be "running and ready" +Jan 14 04:08:34.570: INFO: Pod "labelsupdate8306589d-b991-476c-a5ba-ff79a8851b58": Phase="Pending", Reason="", readiness=false. Elapsed: 3.109469ms +Jan 14 04:08:34.570: INFO: The phase of Pod labelsupdate8306589d-b991-476c-a5ba-ff79a8851b58 is Pending, waiting for it to be Running (with Ready = true) +Jan 14 04:08:36.575: INFO: Pod "labelsupdate8306589d-b991-476c-a5ba-ff79a8851b58": Phase="Running", Reason="", readiness=true. Elapsed: 2.008135793s +Jan 14 04:08:36.576: INFO: The phase of Pod labelsupdate8306589d-b991-476c-a5ba-ff79a8851b58 is Running (Ready = true) +Jan 14 04:08:36.576: INFO: Pod "labelsupdate8306589d-b991-476c-a5ba-ff79a8851b58" satisfied condition "running and ready" +Jan 14 04:08:37.097: INFO: Successfully updated pod "labelsupdate8306589d-b991-476c-a5ba-ff79a8851b58" +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/node/init/init.go:32 +Jan 14 04:08:41.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-8468" for this suite. 
01/14/23 04:08:41.122 +------------------------------ +• [SLOW TEST] [6.590 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should update labels on modification [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:130 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:08:34.538 + Jan 14 04:08:34.538: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename projected 01/14/23 04:08:34.539 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:08:34.553 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:08:34.555 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 + [It] should update labels on modification [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:130 + STEP: Creating the pod 01/14/23 04:08:34.557 + Jan 14 04:08:34.567: INFO: Waiting up to 5m0s for pod "labelsupdate8306589d-b991-476c-a5ba-ff79a8851b58" in namespace "projected-8468" to be "running and ready" + Jan 14 04:08:34.570: INFO: Pod "labelsupdate8306589d-b991-476c-a5ba-ff79a8851b58": Phase="Pending", Reason="", readiness=false. Elapsed: 3.109469ms + Jan 14 04:08:34.570: INFO: The phase of Pod labelsupdate8306589d-b991-476c-a5ba-ff79a8851b58 is Pending, waiting for it to be Running (with Ready = true) + Jan 14 04:08:36.575: INFO: Pod "labelsupdate8306589d-b991-476c-a5ba-ff79a8851b58": Phase="Running", Reason="", readiness=true. Elapsed: 2.008135793s + Jan 14 04:08:36.576: INFO: The phase of Pod labelsupdate8306589d-b991-476c-a5ba-ff79a8851b58 is Running (Ready = true) + Jan 14 04:08:36.576: INFO: Pod "labelsupdate8306589d-b991-476c-a5ba-ff79a8851b58" satisfied condition "running and ready" + Jan 14 04:08:37.097: INFO: Successfully updated pod "labelsupdate8306589d-b991-476c-a5ba-ff79a8851b58" + [AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/node/init/init.go:32 + Jan 14 04:08:41.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-8468" for this suite. 
01/14/23 04:08:41.122 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir wrapper volumes + should not cause race condition when used for configmaps [Serial] [Conformance] + test/e2e/storage/empty_dir_wrapper.go:189 +[BeforeEach] [sig-storage] EmptyDir wrapper volumes + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:08:41.129 +Jan 14 04:08:41.129: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename emptydir-wrapper 01/14/23 04:08:41.13 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:08:41.145 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:08:41.147 +[BeforeEach] [sig-storage] EmptyDir wrapper volumes + test/e2e/framework/metrics/init/init.go:31 +[It] should not cause race condition when used for configmaps [Serial] [Conformance] + test/e2e/storage/empty_dir_wrapper.go:189 +STEP: Creating 50 configmaps 01/14/23 04:08:41.149 +STEP: Creating RC which spawns configmap-volume pods 01/14/23 04:08:41.385 +Jan 14 04:08:41.490: INFO: Pod name wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a: Found 3 pods out of 5 +Jan 14 04:08:46.498: INFO: Pod name wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a: Found 5 pods out of 5 +STEP: Ensuring each pod is running 01/14/23 04:08:46.498 +Jan 14 04:08:46.499: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a-687vd" in namespace "emptydir-wrapper-9826" to be "running" +Jan 14 04:08:46.502: INFO: Pod "wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a-687vd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.45543ms +Jan 14 04:08:48.508: INFO: Pod "wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a-687vd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009431889s +Jan 14 04:08:50.507: INFO: Pod "wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a-687vd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008321461s +Jan 14 04:08:52.507: INFO: Pod "wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a-687vd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.008744487s +Jan 14 04:08:54.508: INFO: Pod "wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a-687vd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.009377648s +Jan 14 04:08:56.507: INFO: Pod "wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a-687vd": Phase="Running", Reason="", readiness=true. Elapsed: 10.008960287s +Jan 14 04:08:56.508: INFO: Pod "wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a-687vd" satisfied condition "running" +Jan 14 04:08:56.508: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a-7lh4h" in namespace "emptydir-wrapper-9826" to be "running" +Jan 14 04:08:56.511: INFO: Pod "wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a-7lh4h": Phase="Running", Reason="", readiness=true. Elapsed: 3.420709ms +Jan 14 04:08:56.511: INFO: Pod "wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a-7lh4h" satisfied condition "running" +Jan 14 04:08:56.511: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a-8htk9" in namespace "emptydir-wrapper-9826" to be "running" +Jan 14 04:08:56.514: INFO: Pod "wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a-8htk9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.203053ms +Jan 14 04:08:58.522: INFO: Pod "wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a-8htk9": Phase="Running", Reason="", readiness=true. Elapsed: 2.011062385s +Jan 14 04:08:58.522: INFO: Pod "wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a-8htk9" satisfied condition "running" +Jan 14 04:08:58.522: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a-9tsf6" in namespace "emptydir-wrapper-9826" to be "running" +Jan 14 04:08:58.526: INFO: Pod "wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a-9tsf6": Phase="Running", Reason="", readiness=true. Elapsed: 4.097661ms +Jan 14 04:08:58.526: INFO: Pod "wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a-9tsf6" satisfied condition "running" +Jan 14 04:08:58.526: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a-qrgvm" in namespace "emptydir-wrapper-9826" to be "running" +Jan 14 04:08:58.533: INFO: Pod "wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a-qrgvm": Phase="Running", Reason="", readiness=true. Elapsed: 6.312878ms +Jan 14 04:08:58.533: INFO: Pod "wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a-qrgvm" satisfied condition "running" +STEP: deleting ReplicationController wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a in namespace emptydir-wrapper-9826, will wait for the garbage collector to delete the pods 01/14/23 04:08:58.533 +Jan 14 04:08:58.594: INFO: Deleting ReplicationController wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a took: 7.434522ms +Jan 14 04:08:58.695: INFO: Terminating ReplicationController wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a pods took: 100.164248ms +STEP: Creating RC which spawns configmap-volume pods 01/14/23 04:09:01.302 +Jan 14 04:09:01.318: INFO: Pod name wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945: Found 0 pods out of 5 +Jan 14 04:09:06.327: INFO: Pod name wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945: Found 5 pods out of 5 +STEP: Ensuring each pod is running 01/14/23 04:09:06.327 +Jan 14 04:09:06.327: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945-2ftpj" in namespace "emptydir-wrapper-9826" to be "running" +Jan 14 04:09:06.330: INFO: Pod "wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945-2ftpj": Phase="Pending", Reason="", readiness=false. Elapsed: 3.509678ms +Jan 14 04:09:08.336: INFO: Pod "wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945-2ftpj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00967948s +Jan 14 04:09:10.335: INFO: Pod "wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945-2ftpj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008717428s +Jan 14 04:09:12.337: INFO: Pod "wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945-2ftpj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010343935s +Jan 14 04:09:14.336: INFO: Pod "wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945-2ftpj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.009782202s +Jan 14 04:09:16.336: INFO: Pod "wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945-2ftpj": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.009152282s +Jan 14 04:09:16.336: INFO: Pod "wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945-2ftpj" satisfied condition "running" +Jan 14 04:09:16.336: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945-bqff2" in namespace "emptydir-wrapper-9826" to be "running" +Jan 14 04:09:16.339: INFO: Pod "wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945-bqff2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.327942ms +Jan 14 04:09:18.345: INFO: Pod "wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945-bqff2": Phase="Running", Reason="", readiness=true. Elapsed: 2.009231424s +Jan 14 04:09:18.345: INFO: Pod "wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945-bqff2" satisfied condition "running" +Jan 14 04:09:18.345: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945-dq8sg" in namespace "emptydir-wrapper-9826" to be "running" +Jan 14 04:09:18.348: INFO: Pod "wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945-dq8sg": Phase="Running", Reason="", readiness=true. Elapsed: 3.328248ms +Jan 14 04:09:18.348: INFO: Pod "wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945-dq8sg" satisfied condition "running" +Jan 14 04:09:18.349: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945-nzjl2" in namespace "emptydir-wrapper-9826" to be "running" +Jan 14 04:09:18.352: INFO: Pod "wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945-nzjl2": Phase="Running", Reason="", readiness=true. Elapsed: 3.273488ms +Jan 14 04:09:18.352: INFO: Pod "wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945-nzjl2" satisfied condition "running" +Jan 14 04:09:18.352: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945-qwglz" in namespace "emptydir-wrapper-9826" to be "running" +Jan 14 04:09:18.355: INFO: Pod "wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945-qwglz": Phase="Running", Reason="", readiness=true. Elapsed: 2.999615ms +Jan 14 04:09:18.355: INFO: Pod "wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945-qwglz" satisfied condition "running" +STEP: deleting ReplicationController wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945 in namespace emptydir-wrapper-9826, will wait for the garbage collector to delete the pods 01/14/23 04:09:18.355 +Jan 14 04:09:18.416: INFO: Deleting ReplicationController wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945 took: 7.374237ms +Jan 14 04:09:18.517: INFO: Terminating ReplicationController wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945 pods took: 100.919152ms +STEP: Creating RC which spawns configmap-volume pods 01/14/23 04:09:21.823 +Jan 14 04:09:21.838: INFO: Pod name wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9: Found 0 pods out of 5 +Jan 14 04:09:26.845: INFO: Pod name wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9: Found 5 pods out of 5 +STEP: Ensuring each pod is running 01/14/23 04:09:26.845 +Jan 14 04:09:26.846: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9-9zcg5" in namespace "emptydir-wrapper-9826" to be "running" +Jan 14 04:09:26.849: INFO: Pod "wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9-9zcg5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.596638ms +Jan 14 04:09:28.855: INFO: Pod "wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9-9zcg5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.009806329s +Jan 14 04:09:30.855: INFO: Pod "wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9-9zcg5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009874475s +Jan 14 04:09:32.857: INFO: Pod "wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9-9zcg5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011054266s +Jan 14 04:09:34.855: INFO: Pod "wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9-9zcg5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.008986175s +Jan 14 04:09:36.855: INFO: Pod "wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9-9zcg5": Phase="Running", Reason="", readiness=true. Elapsed: 10.00967798s +Jan 14 04:09:36.855: INFO: Pod "wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9-9zcg5" satisfied condition "running" +Jan 14 04:09:36.855: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9-c5zsp" in namespace "emptydir-wrapper-9826" to be "running" +Jan 14 04:09:36.859: INFO: Pod "wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9-c5zsp": Phase="Running", Reason="", readiness=true. Elapsed: 3.493811ms +Jan 14 04:09:36.859: INFO: Pod "wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9-c5zsp" satisfied condition "running" +Jan 14 04:09:36.859: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9-d54zv" in namespace "emptydir-wrapper-9826" to be "running" +Jan 14 04:09:36.862: INFO: Pod "wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9-d54zv": Phase="Running", Reason="", readiness=true. Elapsed: 3.23537ms +Jan 14 04:09:36.862: INFO: Pod "wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9-d54zv" satisfied condition "running" +Jan 14 04:09:36.862: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9-g5k9h" in namespace "emptydir-wrapper-9826" to be "running" +Jan 14 04:09:36.865: INFO: Pod "wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9-g5k9h": Phase="Running", Reason="", readiness=true. Elapsed: 3.096129ms +Jan 14 04:09:36.865: INFO: Pod "wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9-g5k9h" satisfied condition "running" +Jan 14 04:09:36.865: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9-rtlxr" in namespace "emptydir-wrapper-9826" to be "running" +Jan 14 04:09:36.868: INFO: Pod "wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9-rtlxr": Phase="Running", Reason="", readiness=true. 
Elapsed: 3.201101ms +Jan 14 04:09:36.868: INFO: Pod "wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9-rtlxr" satisfied condition "running" +STEP: deleting ReplicationController wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9 in namespace emptydir-wrapper-9826, will wait for the garbage collector to delete the pods 01/14/23 04:09:36.868 +Jan 14 04:09:36.930: INFO: Deleting ReplicationController wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9 took: 7.13264ms +Jan 14 04:09:37.030: INFO: Terminating ReplicationController wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9 pods took: 100.513182ms +STEP: Cleaning up the configMaps 01/14/23 04:09:40.331 +[AfterEach] [sig-storage] EmptyDir wrapper volumes + test/e2e/framework/node/init/init.go:32 +Jan 14 04:09:40.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes + tear down framework | framework.go:193 +STEP: Destroying namespace "emptydir-wrapper-9826" for this suite. 01/14/23 04:09:40.608 +------------------------------ +• [SLOW TEST] [59.486 seconds] +[sig-storage] EmptyDir wrapper volumes +test/e2e/storage/utils/framework.go:23 + should not cause race condition when used for configmaps [Serial] [Conformance] + test/e2e/storage/empty_dir_wrapper.go:189 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir wrapper volumes + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:08:41.129 + Jan 14 04:08:41.129: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename emptydir-wrapper 01/14/23 04:08:41.13 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:08:41.145 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:08:41.147 + [BeforeEach] [sig-storage] EmptyDir wrapper volumes + test/e2e/framework/metrics/init/init.go:31 + [It] should not cause race condition when used for configmaps [Serial] [Conformance] + test/e2e/storage/empty_dir_wrapper.go:189 + STEP: Creating 50 configmaps 01/14/23 04:08:41.149 + STEP: Creating RC which spawns configmap-volume pods 01/14/23 04:08:41.385 + Jan 14 04:08:41.490: INFO: Pod name wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a: Found 3 pods out of 5 + Jan 14 04:08:46.498: INFO: Pod name wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a: Found 5 pods out of 5 + STEP: Ensuring each pod is running 01/14/23 04:08:46.498 + Jan 14 04:08:46.499: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a-687vd" in namespace "emptydir-wrapper-9826" to be "running" + Jan 14 04:08:46.502: INFO: Pod "wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a-687vd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.45543ms + Jan 14 04:08:48.508: INFO: Pod "wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a-687vd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009431889s + Jan 14 04:08:50.507: INFO: Pod "wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a-687vd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008321461s + Jan 14 04:08:52.507: INFO: Pod "wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a-687vd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.008744487s + Jan 14 04:08:54.508: INFO: Pod "wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a-687vd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.009377648s + Jan 14 04:08:56.507: INFO: Pod "wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a-687vd": Phase="Running", Reason="", readiness=true. Elapsed: 10.008960287s + Jan 14 04:08:56.508: INFO: Pod "wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a-687vd" satisfied condition "running" + Jan 14 04:08:56.508: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a-7lh4h" in namespace "emptydir-wrapper-9826" to be "running" + Jan 14 04:08:56.511: INFO: Pod "wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a-7lh4h": Phase="Running", Reason="", readiness=true. Elapsed: 3.420709ms + Jan 14 04:08:56.511: INFO: Pod "wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a-7lh4h" satisfied condition "running" + Jan 14 04:08:56.511: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a-8htk9" in namespace "emptydir-wrapper-9826" to be "running" + Jan 14 04:08:56.514: INFO: Pod "wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a-8htk9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.203053ms + Jan 14 04:08:58.522: INFO: Pod "wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a-8htk9": Phase="Running", Reason="", readiness=true. Elapsed: 2.011062385s + Jan 14 04:08:58.522: INFO: Pod "wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a-8htk9" satisfied condition "running" + Jan 14 04:08:58.522: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a-9tsf6" in namespace "emptydir-wrapper-9826" to be "running" + Jan 14 04:08:58.526: INFO: Pod "wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a-9tsf6": Phase="Running", Reason="", readiness=true. Elapsed: 4.097661ms + Jan 14 04:08:58.526: INFO: Pod "wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a-9tsf6" satisfied condition "running" + Jan 14 04:08:58.526: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a-qrgvm" in namespace "emptydir-wrapper-9826" to be "running" + Jan 14 04:08:58.533: INFO: Pod "wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a-qrgvm": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.312878ms + Jan 14 04:08:58.533: INFO: Pod "wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a-qrgvm" satisfied condition "running" + STEP: deleting ReplicationController wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a in namespace emptydir-wrapper-9826, will wait for the garbage collector to delete the pods 01/14/23 04:08:58.533 + Jan 14 04:08:58.594: INFO: Deleting ReplicationController wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a took: 7.434522ms + Jan 14 04:08:58.695: INFO: Terminating ReplicationController wrapped-volume-race-e8b7d34b-0b32-446e-8964-3164ea86442a pods took: 100.164248ms + STEP: Creating RC which spawns configmap-volume pods 01/14/23 04:09:01.302 + Jan 14 04:09:01.318: INFO: Pod name wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945: Found 0 pods out of 5 + Jan 14 04:09:06.327: INFO: Pod name wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945: Found 5 pods out of 5 + STEP: Ensuring each pod is running 01/14/23 04:09:06.327 + Jan 14 04:09:06.327: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945-2ftpj" in namespace "emptydir-wrapper-9826" to be "running" + Jan 14 04:09:06.330: INFO: Pod "wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945-2ftpj": Phase="Pending", Reason="", readiness=false. Elapsed: 3.509678ms + Jan 14 04:09:08.336: INFO: Pod "wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945-2ftpj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00967948s + Jan 14 04:09:10.335: INFO: Pod "wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945-2ftpj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008717428s + Jan 14 04:09:12.337: INFO: Pod "wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945-2ftpj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.010343935s + Jan 14 04:09:14.336: INFO: Pod "wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945-2ftpj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.009782202s + Jan 14 04:09:16.336: INFO: Pod "wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945-2ftpj": Phase="Running", Reason="", readiness=true. Elapsed: 10.009152282s + Jan 14 04:09:16.336: INFO: Pod "wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945-2ftpj" satisfied condition "running" + Jan 14 04:09:16.336: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945-bqff2" in namespace "emptydir-wrapper-9826" to be "running" + Jan 14 04:09:16.339: INFO: Pod "wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945-bqff2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.327942ms + Jan 14 04:09:18.345: INFO: Pod "wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945-bqff2": Phase="Running", Reason="", readiness=true. Elapsed: 2.009231424s + Jan 14 04:09:18.345: INFO: Pod "wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945-bqff2" satisfied condition "running" + Jan 14 04:09:18.345: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945-dq8sg" in namespace "emptydir-wrapper-9826" to be "running" + Jan 14 04:09:18.348: INFO: Pod "wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945-dq8sg": Phase="Running", Reason="", readiness=true. 
Elapsed: 3.328248ms + Jan 14 04:09:18.348: INFO: Pod "wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945-dq8sg" satisfied condition "running" + Jan 14 04:09:18.349: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945-nzjl2" in namespace "emptydir-wrapper-9826" to be "running" + Jan 14 04:09:18.352: INFO: Pod "wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945-nzjl2": Phase="Running", Reason="", readiness=true. Elapsed: 3.273488ms + Jan 14 04:09:18.352: INFO: Pod "wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945-nzjl2" satisfied condition "running" + Jan 14 04:09:18.352: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945-qwglz" in namespace "emptydir-wrapper-9826" to be "running" + Jan 14 04:09:18.355: INFO: Pod "wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945-qwglz": Phase="Running", Reason="", readiness=true. Elapsed: 2.999615ms + Jan 14 04:09:18.355: INFO: Pod "wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945-qwglz" satisfied condition "running" + STEP: deleting ReplicationController wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945 in namespace emptydir-wrapper-9826, will wait for the garbage collector to delete the pods 01/14/23 04:09:18.355 + Jan 14 04:09:18.416: INFO: Deleting ReplicationController wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945 took: 7.374237ms + Jan 14 04:09:18.517: INFO: Terminating ReplicationController wrapped-volume-race-89f98bbf-c3f0-417a-bbfd-144ffc878945 pods took: 100.919152ms + STEP: Creating RC which spawns configmap-volume pods 01/14/23 04:09:21.823 + Jan 14 04:09:21.838: INFO: Pod name wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9: Found 0 pods out of 5 + Jan 14 04:09:26.845: INFO: Pod name wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9: Found 5 pods out of 5 + STEP: Ensuring each pod is running 01/14/23 04:09:26.845 + Jan 14 04:09:26.846: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9-9zcg5" in namespace "emptydir-wrapper-9826" to be "running" + Jan 14 04:09:26.849: INFO: Pod "wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9-9zcg5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.596638ms + Jan 14 04:09:28.855: INFO: Pod "wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9-9zcg5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009806329s + Jan 14 04:09:30.855: INFO: Pod "wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9-9zcg5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009874475s + Jan 14 04:09:32.857: INFO: Pod "wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9-9zcg5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011054266s + Jan 14 04:09:34.855: INFO: Pod "wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9-9zcg5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.008986175s + Jan 14 04:09:36.855: INFO: Pod "wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9-9zcg5": Phase="Running", Reason="", readiness=true. Elapsed: 10.00967798s + Jan 14 04:09:36.855: INFO: Pod "wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9-9zcg5" satisfied condition "running" + Jan 14 04:09:36.855: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9-c5zsp" in namespace "emptydir-wrapper-9826" to be "running" + Jan 14 04:09:36.859: INFO: Pod "wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9-c5zsp": Phase="Running", Reason="", readiness=true. 
Elapsed: 3.493811ms + Jan 14 04:09:36.859: INFO: Pod "wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9-c5zsp" satisfied condition "running" + Jan 14 04:09:36.859: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9-d54zv" in namespace "emptydir-wrapper-9826" to be "running" + Jan 14 04:09:36.862: INFO: Pod "wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9-d54zv": Phase="Running", Reason="", readiness=true. Elapsed: 3.23537ms + Jan 14 04:09:36.862: INFO: Pod "wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9-d54zv" satisfied condition "running" + Jan 14 04:09:36.862: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9-g5k9h" in namespace "emptydir-wrapper-9826" to be "running" + Jan 14 04:09:36.865: INFO: Pod "wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9-g5k9h": Phase="Running", Reason="", readiness=true. Elapsed: 3.096129ms + Jan 14 04:09:36.865: INFO: Pod "wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9-g5k9h" satisfied condition "running" + Jan 14 04:09:36.865: INFO: Waiting up to 5m0s for pod "wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9-rtlxr" in namespace "emptydir-wrapper-9826" to be "running" + Jan 14 04:09:36.868: INFO: Pod "wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9-rtlxr": Phase="Running", Reason="", readiness=true. Elapsed: 3.201101ms + Jan 14 04:09:36.868: INFO: Pod "wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9-rtlxr" satisfied condition "running" + STEP: deleting ReplicationController wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9 in namespace emptydir-wrapper-9826, will wait for the garbage collector to delete the pods 01/14/23 04:09:36.868 + Jan 14 04:09:36.930: INFO: Deleting ReplicationController wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9 took: 7.13264ms + Jan 14 04:09:37.030: INFO: Terminating ReplicationController wrapped-volume-race-423111d0-43a9-49de-8491-01b7f5957af9 pods took: 100.513182ms + STEP: Cleaning up the configMaps 01/14/23 04:09:40.331 + [AfterEach] [sig-storage] EmptyDir wrapper volumes + test/e2e/framework/node/init/init.go:32 + Jan 14 04:09:40.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes + tear down framework | framework.go:193 + STEP: Destroying namespace "emptydir-wrapper-9826" for this suite. 
01/14/23 04:09:40.608 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should mount projected service account token [Conformance] + test/e2e/auth/service_accounts.go:275 +[BeforeEach] [sig-auth] ServiceAccounts + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:09:40.616 +Jan 14 04:09:40.616: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename svcaccounts 01/14/23 04:09:40.616 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:09:40.632 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:09:40.634 +[BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:31 +[It] should mount projected service account token [Conformance] + test/e2e/auth/service_accounts.go:275 +STEP: Creating a pod to test service account token: 01/14/23 04:09:40.637 +Jan 14 04:09:40.646: INFO: Waiting up to 5m0s for pod "test-pod-ef9a1397-ad44-4870-b996-fbf402c7533b" in namespace "svcaccounts-3892" to be "Succeeded or Failed" +Jan 14 04:09:40.649: INFO: Pod "test-pod-ef9a1397-ad44-4870-b996-fbf402c7533b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.715871ms +Jan 14 04:09:42.653: INFO: Pod "test-pod-ef9a1397-ad44-4870-b996-fbf402c7533b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006993269s +Jan 14 04:09:44.654: INFO: Pod "test-pod-ef9a1397-ad44-4870-b996-fbf402c7533b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007845207s +STEP: Saw pod success 01/14/23 04:09:44.654 +Jan 14 04:09:44.654: INFO: Pod "test-pod-ef9a1397-ad44-4870-b996-fbf402c7533b" satisfied condition "Succeeded or Failed" +Jan 14 04:09:44.657: INFO: Trying to get logs from node 10.0.1.106 pod test-pod-ef9a1397-ad44-4870-b996-fbf402c7533b container agnhost-container: +STEP: delete the pod 01/14/23 04:09:44.663 +Jan 14 04:09:44.674: INFO: Waiting for pod test-pod-ef9a1397-ad44-4870-b996-fbf402c7533b to disappear +Jan 14 04:09:44.677: INFO: Pod test-pod-ef9a1397-ad44-4870-b996-fbf402c7533b no longer exists +[AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/node/init/init.go:32 +Jan 14 04:09:44.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-auth] ServiceAccounts + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-auth] ServiceAccounts + tear down framework | framework.go:193 +STEP: Destroying namespace "svcaccounts-3892" for this suite. 
01/14/23 04:09:44.681 +------------------------------ +• [4.071 seconds] +[sig-auth] ServiceAccounts +test/e2e/auth/framework.go:23 + should mount projected service account token [Conformance] + test/e2e/auth/service_accounts.go:275 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-auth] ServiceAccounts + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:09:40.616 + Jan 14 04:09:40.616: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename svcaccounts 01/14/23 04:09:40.616 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:09:40.632 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:09:40.634 + [BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:31 + [It] should mount projected service account token [Conformance] + test/e2e/auth/service_accounts.go:275 + STEP: Creating a pod to test service account token: 01/14/23 04:09:40.637 + Jan 14 04:09:40.646: INFO: Waiting up to 5m0s for pod "test-pod-ef9a1397-ad44-4870-b996-fbf402c7533b" in namespace "svcaccounts-3892" to be "Succeeded or Failed" + Jan 14 04:09:40.649: INFO: Pod "test-pod-ef9a1397-ad44-4870-b996-fbf402c7533b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.715871ms + Jan 14 04:09:42.653: INFO: Pod "test-pod-ef9a1397-ad44-4870-b996-fbf402c7533b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006993269s + Jan 14 04:09:44.654: INFO: Pod "test-pod-ef9a1397-ad44-4870-b996-fbf402c7533b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007845207s + STEP: Saw pod success 01/14/23 04:09:44.654 + Jan 14 04:09:44.654: INFO: Pod "test-pod-ef9a1397-ad44-4870-b996-fbf402c7533b" satisfied condition "Succeeded or Failed" + Jan 14 04:09:44.657: INFO: Trying to get logs from node 10.0.1.106 pod test-pod-ef9a1397-ad44-4870-b996-fbf402c7533b container agnhost-container: + STEP: delete the pod 01/14/23 04:09:44.663 + Jan 14 04:09:44.674: INFO: Waiting for pod test-pod-ef9a1397-ad44-4870-b996-fbf402c7533b to disappear + Jan 14 04:09:44.677: INFO: Pod test-pod-ef9a1397-ad44-4870-b996-fbf402c7533b no longer exists + [AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/node/init/init.go:32 + Jan 14 04:09:44.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-auth] ServiceAccounts + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-auth] ServiceAccounts + tear down framework | framework.go:193 + STEP: Destroying namespace "svcaccounts-3892" for this suite. 
01/14/23 04:09:44.681 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Kubelet when scheduling an agnhost Pod with hostAliases + should write entries to /etc/hosts [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:148 +[BeforeEach] [sig-node] Kubelet + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:09:44.687 +Jan 14 04:09:44.687: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename kubelet-test 01/14/23 04:09:44.688 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:09:44.702 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:09:44.704 +[BeforeEach] [sig-node] Kubelet + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:41 +[It] should write entries to /etc/hosts [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:148 +STEP: Waiting for pod completion 01/14/23 04:09:44.714 +Jan 14 04:09:44.714: INFO: Waiting up to 3m0s for pod "agnhost-host-aliases1b2b2d6d-9073-4c78-880e-38878bd173e2" in namespace "kubelet-test-1454" to be "completed" +Jan 14 04:09:44.716: INFO: Pod "agnhost-host-aliases1b2b2d6d-9073-4c78-880e-38878bd173e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.730512ms +Jan 14 04:09:46.720: INFO: Pod "agnhost-host-aliases1b2b2d6d-9073-4c78-880e-38878bd173e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006342517s +Jan 14 04:09:48.722: INFO: Pod "agnhost-host-aliases1b2b2d6d-9073-4c78-880e-38878bd173e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007837269s +Jan 14 04:09:48.722: INFO: Pod "agnhost-host-aliases1b2b2d6d-9073-4c78-880e-38878bd173e2" satisfied condition "completed" +[AfterEach] [sig-node] Kubelet + test/e2e/framework/node/init/init.go:32 +Jan 14 04:09:48.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Kubelet + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Kubelet + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Kubelet + tear down framework | framework.go:193 +STEP: Destroying namespace "kubelet-test-1454" for this suite. 
01/14/23 04:09:48.732 +------------------------------ +• [4.050 seconds] +[sig-node] Kubelet +test/e2e/common/node/framework.go:23 + when scheduling an agnhost Pod with hostAliases + test/e2e/common/node/kubelet.go:140 + should write entries to /etc/hosts [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:148 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Kubelet + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:09:44.687 + Jan 14 04:09:44.687: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename kubelet-test 01/14/23 04:09:44.688 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:09:44.702 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:09:44.704 + [BeforeEach] [sig-node] Kubelet + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:41 + [It] should write entries to /etc/hosts [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:148 + STEP: Waiting for pod completion 01/14/23 04:09:44.714 + Jan 14 04:09:44.714: INFO: Waiting up to 3m0s for pod "agnhost-host-aliases1b2b2d6d-9073-4c78-880e-38878bd173e2" in namespace "kubelet-test-1454" to be "completed" + Jan 14 04:09:44.716: INFO: Pod "agnhost-host-aliases1b2b2d6d-9073-4c78-880e-38878bd173e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.730512ms + Jan 14 04:09:46.720: INFO: Pod "agnhost-host-aliases1b2b2d6d-9073-4c78-880e-38878bd173e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006342517s + Jan 14 04:09:48.722: INFO: Pod "agnhost-host-aliases1b2b2d6d-9073-4c78-880e-38878bd173e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007837269s + Jan 14 04:09:48.722: INFO: Pod "agnhost-host-aliases1b2b2d6d-9073-4c78-880e-38878bd173e2" satisfied condition "completed" + [AfterEach] [sig-node] Kubelet + test/e2e/framework/node/init/init.go:32 + Jan 14 04:09:48.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Kubelet + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Kubelet + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Kubelet + tear down framework | framework.go:193 + STEP: Destroying namespace "kubelet-test-1454" for this suite. 
01/14/23 04:09:48.732 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of same group but different versions [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:309 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:09:48.738 +Jan 14 04:09:48.739: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename crd-publish-openapi 01/14/23 04:09:48.739 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:09:48.755 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:09:48.757 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[It] works for multiple CRDs of same group but different versions [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:309 +STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation 01/14/23 04:09:48.759 +Jan 14 04:09:48.760: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation 01/14/23 04:09:55.004 +Jan 14 04:09:55.005: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +Jan 14 04:09:57.294: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Jan 14 04:10:04.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "crd-publish-openapi-7992" for this suite. 
01/14/23 04:10:04.451 +------------------------------ +• [SLOW TEST] [15.720 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + works for multiple CRDs of same group but different versions [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:309 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:09:48.738 + Jan 14 04:09:48.739: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename crd-publish-openapi 01/14/23 04:09:48.739 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:09:48.755 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:09:48.757 + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [It] works for multiple CRDs of same group but different versions [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:309 + STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation 01/14/23 04:09:48.759 + Jan 14 04:09:48.760: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation 01/14/23 04:09:55.004 + Jan 14 04:09:55.005: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + Jan 14 04:09:57.294: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Jan 14 04:10:04.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "crd-publish-openapi-7992" for this suite. 
01/14/23 04:10:04.451 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Job + should create pods for an Indexed job with completion indexes and specified hostname [Conformance] + test/e2e/apps/job.go:366 +[BeforeEach] [sig-apps] Job + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:10:04.46 +Jan 14 04:10:04.460: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename job 01/14/23 04:10:04.461 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:10:04.476 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:10:04.478 +[BeforeEach] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:31 +[It] should create pods for an Indexed job with completion indexes and specified hostname [Conformance] + test/e2e/apps/job.go:366 +STEP: Creating Indexed job 01/14/23 04:10:04.481 +STEP: Ensuring job reaches completions 01/14/23 04:10:04.487 +STEP: Ensuring pods with index for job exist 01/14/23 04:10:12.493 +[AfterEach] [sig-apps] Job + test/e2e/framework/node/init/init.go:32 +Jan 14 04:10:12.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] Job + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] Job + tear down framework | framework.go:193 +STEP: Destroying namespace "job-1957" for this suite. 01/14/23 04:10:12.502 +------------------------------ +• [SLOW TEST] [8.048 seconds] +[sig-apps] Job +test/e2e/apps/framework.go:23 + should create pods for an Indexed job with completion indexes and specified hostname [Conformance] + test/e2e/apps/job.go:366 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Job + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:10:04.46 + Jan 14 04:10:04.460: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename job 01/14/23 04:10:04.461 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:10:04.476 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:10:04.478 + [BeforeEach] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:31 + [It] should create pods for an Indexed job with completion indexes and specified hostname [Conformance] + test/e2e/apps/job.go:366 + STEP: Creating Indexed job 01/14/23 04:10:04.481 + STEP: Ensuring job reaches completions 01/14/23 04:10:04.487 + STEP: Ensuring pods with index for job exist 01/14/23 04:10:12.493 + [AfterEach] [sig-apps] Job + test/e2e/framework/node/init/init.go:32 + Jan 14 04:10:12.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] Job + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] Job + tear down framework | framework.go:193 + STEP: Destroying namespace "job-1957" for this suite. 
01/14/23 04:10:12.502 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to change the type from ClusterIP to ExternalName [Conformance] + test/e2e/network/service.go:1515 +[BeforeEach] [sig-network] Services + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:10:12.508 +Jan 14 04:10:12.509: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename services 01/14/23 04:10:12.509 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:10:12.523 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:10:12.525 +[BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should be able to change the type from ClusterIP to ExternalName [Conformance] + test/e2e/network/service.go:1515 +STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-8747 01/14/23 04:10:12.527 +STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service 01/14/23 04:10:12.535 +STEP: creating service externalsvc in namespace services-8747 01/14/23 04:10:12.535 +STEP: creating replication controller externalsvc in namespace services-8747 01/14/23 04:10:12.547 +I0114 04:10:12.557375 25 runners.go:193] Created replication controller with name: externalsvc, namespace: services-8747, replica count: 2 +I0114 04:10:15.609126 25 runners.go:193] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +STEP: changing the ClusterIP service to type=ExternalName 01/14/23 04:10:15.612 +Jan 14 04:10:15.623: INFO: Creating new exec pod +Jan 14 04:10:15.633: INFO: Waiting up to 5m0s for pod "execpodmctrj" in namespace "services-8747" to be "running" +Jan 14 04:10:15.636: INFO: Pod "execpodmctrj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.867558ms +Jan 14 04:10:17.640: INFO: Pod "execpodmctrj": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.007346558s +Jan 14 04:10:17.640: INFO: Pod "execpodmctrj" satisfied condition "running" +Jan 14 04:10:17.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-8747 exec execpodmctrj -- /bin/sh -x -c nslookup clusterip-service.services-8747.svc.cluster.local' +Jan 14 04:10:17.766: INFO: stderr: "+ nslookup clusterip-service.services-8747.svc.cluster.local\n" +Jan 14 04:10:17.766: INFO: stdout: "Server:\t\t10.55.255.254\nAddress:\t10.55.255.254#53\n\nclusterip-service.services-8747.svc.cluster.local\tcanonical name = externalsvc.services-8747.svc.cluster.local.\nName:\texternalsvc.services-8747.svc.cluster.local\nAddress: 10.55.254.38\n\n" +STEP: deleting ReplicationController externalsvc in namespace services-8747, will wait for the garbage collector to delete the pods 01/14/23 04:10:17.766 +Jan 14 04:10:17.826: INFO: Deleting ReplicationController externalsvc took: 5.42344ms +Jan 14 04:10:17.926: INFO: Terminating ReplicationController externalsvc pods took: 100.464101ms +Jan 14 04:10:19.343: INFO: Cleaning up the ClusterIP to ExternalName test service +[AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 +Jan 14 04:10:19.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 +STEP: Destroying namespace "services-8747" for this suite. 01/14/23 04:10:19.356 +------------------------------ +• [SLOW TEST] [6.855 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should be able to change the type from ClusterIP to ExternalName [Conformance] + test/e2e/network/service.go:1515 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:10:12.508 + Jan 14 04:10:12.509: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename services 01/14/23 04:10:12.509 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:10:12.523 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:10:12.525 + [BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should be able to change the type from ClusterIP to ExternalName [Conformance] + test/e2e/network/service.go:1515 + STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-8747 01/14/23 04:10:12.527 + STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service 01/14/23 04:10:12.535 + STEP: creating service externalsvc in namespace services-8747 01/14/23 04:10:12.535 + STEP: creating replication controller externalsvc in namespace services-8747 01/14/23 04:10:12.547 + I0114 04:10:12.557375 25 runners.go:193] Created replication controller with name: externalsvc, namespace: services-8747, replica count: 2 + I0114 04:10:15.609126 25 runners.go:193] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + STEP: changing the ClusterIP service to type=ExternalName 01/14/23 04:10:15.612 + Jan 14 04:10:15.623: INFO: Creating new 
exec pod + Jan 14 04:10:15.633: INFO: Waiting up to 5m0s for pod "execpodmctrj" in namespace "services-8747" to be "running" + Jan 14 04:10:15.636: INFO: Pod "execpodmctrj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.867558ms + Jan 14 04:10:17.640: INFO: Pod "execpodmctrj": Phase="Running", Reason="", readiness=true. Elapsed: 2.007346558s + Jan 14 04:10:17.640: INFO: Pod "execpodmctrj" satisfied condition "running" + Jan 14 04:10:17.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-8747 exec execpodmctrj -- /bin/sh -x -c nslookup clusterip-service.services-8747.svc.cluster.local' + Jan 14 04:10:17.766: INFO: stderr: "+ nslookup clusterip-service.services-8747.svc.cluster.local\n" + Jan 14 04:10:17.766: INFO: stdout: "Server:\t\t10.55.255.254\nAddress:\t10.55.255.254#53\n\nclusterip-service.services-8747.svc.cluster.local\tcanonical name = externalsvc.services-8747.svc.cluster.local.\nName:\texternalsvc.services-8747.svc.cluster.local\nAddress: 10.55.254.38\n\n" + STEP: deleting ReplicationController externalsvc in namespace services-8747, will wait for the garbage collector to delete the pods 01/14/23 04:10:17.766 + Jan 14 04:10:17.826: INFO: Deleting ReplicationController externalsvc took: 5.42344ms + Jan 14 04:10:17.926: INFO: Terminating ReplicationController externalsvc pods took: 100.464101ms + Jan 14 04:10:19.343: INFO: Cleaning up the ClusterIP to ExternalName test service + [AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 + Jan 14 04:10:19.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 + STEP: Destroying namespace "services-8747" for this suite. 
01/14/23 04:10:19.356 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-api-machinery] Watchers + should be able to restart watching from the last resource version observed by the previous watch [Conformance] + test/e2e/apimachinery/watch.go:191 +[BeforeEach] [sig-api-machinery] Watchers + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:10:19.364 +Jan 14 04:10:19.364: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename watch 01/14/23 04:10:19.365 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:10:19.38 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:10:19.383 +[BeforeEach] [sig-api-machinery] Watchers + test/e2e/framework/metrics/init/init.go:31 +[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] + test/e2e/apimachinery/watch.go:191 +STEP: creating a watch on configmaps 01/14/23 04:10:19.386 +STEP: creating a new configmap 01/14/23 04:10:19.387 +STEP: modifying the configmap once 01/14/23 04:10:19.392 +STEP: closing the watch once it receives two notifications 01/14/23 04:10:19.399 +Jan 14 04:10:19.399: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8665 b18e8188-6c08-4ed3-aa71-b09fe36855e4 435711 0 2023-01-14 04:10:19 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-01-14 04:10:19 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Jan 14 04:10:19.399: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8665 b18e8188-6c08-4ed3-aa71-b09fe36855e4 435712 0 2023-01-14 04:10:19 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-01-14 04:10:19 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying the configmap a second time, while the watch is closed 01/14/23 04:10:19.399 +STEP: creating a new watch on configmaps from the last resource version observed by the first watch 01/14/23 04:10:19.406 +STEP: deleting the configmap 01/14/23 04:10:19.407 +STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed 01/14/23 04:10:19.412 +Jan 14 04:10:19.412: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8665 b18e8188-6c08-4ed3-aa71-b09fe36855e4 435713 0 2023-01-14 04:10:19 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-01-14 04:10:19 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Jan 14 04:10:19.412: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8665 b18e8188-6c08-4ed3-aa71-b09fe36855e4 435714 0 2023-01-14 04:10:19 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-01-14 04:10:19 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + test/e2e/framework/node/init/init.go:32 +Jan 14 04:10:19.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Watchers + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] Watchers + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] Watchers + tear down framework | framework.go:193 +STEP: Destroying namespace "watch-8665" for this suite. 01/14/23 04:10:19.417 +------------------------------ +• [0.058 seconds] +[sig-api-machinery] Watchers +test/e2e/apimachinery/framework.go:23 + should be able to restart watching from the last resource version observed by the previous watch [Conformance] + test/e2e/apimachinery/watch.go:191 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Watchers + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:10:19.364 + Jan 14 04:10:19.364: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename watch 01/14/23 04:10:19.365 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:10:19.38 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:10:19.383 + [BeforeEach] [sig-api-machinery] Watchers + test/e2e/framework/metrics/init/init.go:31 + [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] + test/e2e/apimachinery/watch.go:191 + STEP: creating a watch on configmaps 01/14/23 04:10:19.386 + STEP: creating a new configmap 01/14/23 04:10:19.387 + STEP: modifying the configmap once 01/14/23 04:10:19.392 + STEP: closing the watch once it receives two notifications 01/14/23 04:10:19.399 + Jan 14 04:10:19.399: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8665 b18e8188-6c08-4ed3-aa71-b09fe36855e4 435711 0 2023-01-14 04:10:19 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-01-14 04:10:19 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} + Jan 14 04:10:19.399: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8665 b18e8188-6c08-4ed3-aa71-b09fe36855e4 435712 0 2023-01-14 04:10:19 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-01-14 04:10:19 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} + STEP: modifying the configmap a second time, while the watch is closed 01/14/23 04:10:19.399 + STEP: creating a new watch on configmaps from the last resource version observed by the first watch 01/14/23 04:10:19.406 + STEP: deleting the configmap 01/14/23 04:10:19.407 + STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed 01/14/23 04:10:19.412 + Jan 14 04:10:19.412: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8665 b18e8188-6c08-4ed3-aa71-b09fe36855e4 435713 0 2023-01-14 04:10:19 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-01-14 04:10:19 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} + Jan 14 04:10:19.412: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8665 b18e8188-6c08-4ed3-aa71-b09fe36855e4 435714 0 2023-01-14 04:10:19 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-01-14 04:10:19 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} + [AfterEach] [sig-api-machinery] Watchers + test/e2e/framework/node/init/init.go:32 + Jan 14 04:10:19.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Watchers + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] Watchers + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] Watchers + tear down framework | framework.go:193 + STEP: Destroying namespace "watch-8665" for this suite. 01/14/23 04:10:19.417 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Proxy version v1 + A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] + test/e2e/network/proxy.go:286 +[BeforeEach] version v1 + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:10:19.423 +Jan 14 04:10:19.423: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename proxy 01/14/23 04:10:19.424 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:10:19.439 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:10:19.441 +[BeforeEach] version v1 + test/e2e/framework/metrics/init/init.go:31 +[It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] + test/e2e/network/proxy.go:286 +Jan 14 04:10:19.444: INFO: Creating pod... +Jan 14 04:10:19.451: INFO: Waiting up to 5m0s for pod "agnhost" in namespace "proxy-4850" to be "running" +Jan 14 04:10:19.454: INFO: Pod "agnhost": Phase="Pending", Reason="", readiness=false. Elapsed: 3.02726ms +Jan 14 04:10:21.459: INFO: Pod "agnhost": Phase="Running", Reason="", readiness=true. Elapsed: 2.007800121s +Jan 14 04:10:21.459: INFO: Pod "agnhost" satisfied condition "running" +Jan 14 04:10:21.459: INFO: Creating service... 
+Jan 14 04:10:21.468: INFO: Starting http.Client for https://10.55.252.1:443/api/v1/namespaces/proxy-4850/pods/agnhost/proxy/some/path/with/DELETE +Jan 14 04:10:21.472: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE +Jan 14 04:10:21.472: INFO: Starting http.Client for https://10.55.252.1:443/api/v1/namespaces/proxy-4850/pods/agnhost/proxy/some/path/with/GET +Jan 14 04:10:21.477: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET +Jan 14 04:10:21.477: INFO: Starting http.Client for https://10.55.252.1:443/api/v1/namespaces/proxy-4850/pods/agnhost/proxy/some/path/with/HEAD +Jan 14 04:10:21.480: INFO: http.Client request:HEAD | StatusCode:200 +Jan 14 04:10:21.480: INFO: Starting http.Client for https://10.55.252.1:443/api/v1/namespaces/proxy-4850/pods/agnhost/proxy/some/path/with/OPTIONS +Jan 14 04:10:21.483: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS +Jan 14 04:10:21.483: INFO: Starting http.Client for https://10.55.252.1:443/api/v1/namespaces/proxy-4850/pods/agnhost/proxy/some/path/with/PATCH +Jan 14 04:10:21.486: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH +Jan 14 04:10:21.486: INFO: Starting http.Client for https://10.55.252.1:443/api/v1/namespaces/proxy-4850/pods/agnhost/proxy/some/path/with/POST +Jan 14 04:10:21.489: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST +Jan 14 04:10:21.489: INFO: Starting http.Client for https://10.55.252.1:443/api/v1/namespaces/proxy-4850/pods/agnhost/proxy/some/path/with/PUT +Jan 14 04:10:21.492: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT +Jan 14 04:10:21.492: INFO: Starting http.Client for https://10.55.252.1:443/api/v1/namespaces/proxy-4850/services/test-service/proxy/some/path/with/DELETE +Jan 14 04:10:21.497: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE +Jan 14 04:10:21.497: INFO: Starting http.Client for https://10.55.252.1:443/api/v1/namespaces/proxy-4850/services/test-service/proxy/some/path/with/GET +Jan 14 04:10:21.501: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET +Jan 14 04:10:21.501: INFO: Starting http.Client for https://10.55.252.1:443/api/v1/namespaces/proxy-4850/services/test-service/proxy/some/path/with/HEAD +Jan 14 04:10:21.505: INFO: http.Client request:HEAD | StatusCode:200 +Jan 14 04:10:21.505: INFO: Starting http.Client for https://10.55.252.1:443/api/v1/namespaces/proxy-4850/services/test-service/proxy/some/path/with/OPTIONS +Jan 14 04:10:21.509: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS +Jan 14 04:10:21.509: INFO: Starting http.Client for https://10.55.252.1:443/api/v1/namespaces/proxy-4850/services/test-service/proxy/some/path/with/PATCH +Jan 14 04:10:21.513: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH +Jan 14 04:10:21.513: INFO: Starting http.Client for https://10.55.252.1:443/api/v1/namespaces/proxy-4850/services/test-service/proxy/some/path/with/POST +Jan 14 04:10:21.517: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST +Jan 14 04:10:21.517: INFO: Starting http.Client for https://10.55.252.1:443/api/v1/namespaces/proxy-4850/services/test-service/proxy/some/path/with/PUT +Jan 14 04:10:21.521: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT +[AfterEach] version v1 + test/e2e/framework/node/init/init.go:32 +Jan 14 04:10:21.521: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] version v1 + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] version v1 + dump namespaces | framework.go:196 +[DeferCleanup (Each)] version v1 + tear down framework | framework.go:193 +STEP: Destroying namespace "proxy-4850" for this suite. 01/14/23 04:10:21.525 +------------------------------ +• [2.108 seconds] +[sig-network] Proxy +test/e2e/network/common/framework.go:23 + version v1 + test/e2e/network/proxy.go:74 + A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] + test/e2e/network/proxy.go:286 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] version v1 + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:10:19.423 + Jan 14 04:10:19.423: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename proxy 01/14/23 04:10:19.424 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:10:19.439 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:10:19.441 + [BeforeEach] version v1 + test/e2e/framework/metrics/init/init.go:31 + [It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] + test/e2e/network/proxy.go:286 + Jan 14 04:10:19.444: INFO: Creating pod... + Jan 14 04:10:19.451: INFO: Waiting up to 5m0s for pod "agnhost" in namespace "proxy-4850" to be "running" + Jan 14 04:10:19.454: INFO: Pod "agnhost": Phase="Pending", Reason="", readiness=false. Elapsed: 3.02726ms + Jan 14 04:10:21.459: INFO: Pod "agnhost": Phase="Running", Reason="", readiness=true. Elapsed: 2.007800121s + Jan 14 04:10:21.459: INFO: Pod "agnhost" satisfied condition "running" + Jan 14 04:10:21.459: INFO: Creating service... 
+ Jan 14 04:10:21.468: INFO: Starting http.Client for https://10.55.252.1:443/api/v1/namespaces/proxy-4850/pods/agnhost/proxy/some/path/with/DELETE + Jan 14 04:10:21.472: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE + Jan 14 04:10:21.472: INFO: Starting http.Client for https://10.55.252.1:443/api/v1/namespaces/proxy-4850/pods/agnhost/proxy/some/path/with/GET + Jan 14 04:10:21.477: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET + Jan 14 04:10:21.477: INFO: Starting http.Client for https://10.55.252.1:443/api/v1/namespaces/proxy-4850/pods/agnhost/proxy/some/path/with/HEAD + Jan 14 04:10:21.480: INFO: http.Client request:HEAD | StatusCode:200 + Jan 14 04:10:21.480: INFO: Starting http.Client for https://10.55.252.1:443/api/v1/namespaces/proxy-4850/pods/agnhost/proxy/some/path/with/OPTIONS + Jan 14 04:10:21.483: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS + Jan 14 04:10:21.483: INFO: Starting http.Client for https://10.55.252.1:443/api/v1/namespaces/proxy-4850/pods/agnhost/proxy/some/path/with/PATCH + Jan 14 04:10:21.486: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH + Jan 14 04:10:21.486: INFO: Starting http.Client for https://10.55.252.1:443/api/v1/namespaces/proxy-4850/pods/agnhost/proxy/some/path/with/POST + Jan 14 04:10:21.489: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST + Jan 14 04:10:21.489: INFO: Starting http.Client for https://10.55.252.1:443/api/v1/namespaces/proxy-4850/pods/agnhost/proxy/some/path/with/PUT + Jan 14 04:10:21.492: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT + Jan 14 04:10:21.492: INFO: Starting http.Client for https://10.55.252.1:443/api/v1/namespaces/proxy-4850/services/test-service/proxy/some/path/with/DELETE + Jan 14 04:10:21.497: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE + Jan 14 04:10:21.497: INFO: Starting http.Client for https://10.55.252.1:443/api/v1/namespaces/proxy-4850/services/test-service/proxy/some/path/with/GET + Jan 14 04:10:21.501: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET + Jan 14 04:10:21.501: INFO: Starting http.Client for https://10.55.252.1:443/api/v1/namespaces/proxy-4850/services/test-service/proxy/some/path/with/HEAD + Jan 14 04:10:21.505: INFO: http.Client request:HEAD | StatusCode:200 + Jan 14 04:10:21.505: INFO: Starting http.Client for https://10.55.252.1:443/api/v1/namespaces/proxy-4850/services/test-service/proxy/some/path/with/OPTIONS + Jan 14 04:10:21.509: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS + Jan 14 04:10:21.509: INFO: Starting http.Client for https://10.55.252.1:443/api/v1/namespaces/proxy-4850/services/test-service/proxy/some/path/with/PATCH + Jan 14 04:10:21.513: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH + Jan 14 04:10:21.513: INFO: Starting http.Client for https://10.55.252.1:443/api/v1/namespaces/proxy-4850/services/test-service/proxy/some/path/with/POST + Jan 14 04:10:21.517: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST + Jan 14 04:10:21.517: INFO: Starting http.Client for https://10.55.252.1:443/api/v1/namespaces/proxy-4850/services/test-service/proxy/some/path/with/PUT + Jan 14 04:10:21.521: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT + [AfterEach] version v1 + test/e2e/framework/node/init/init.go:32 + Jan 14 
04:10:21.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] version v1 + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] version v1 + dump namespaces | framework.go:196 + [DeferCleanup (Each)] version v1 + tear down framework | framework.go:193 + STEP: Destroying namespace "proxy-4850" for this suite. 01/14/23 04:10:21.525 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] + test/e2e/apps/statefulset.go:587 +[BeforeEach] [sig-apps] StatefulSet + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:10:21.532 +Jan 14 04:10:21.532: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename statefulset 01/14/23 04:10:21.532 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:10:21.55 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:10:21.552 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:98 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:113 +STEP: Creating service test in namespace statefulset-7256 01/14/23 04:10:21.554 +[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] + test/e2e/apps/statefulset.go:587 +STEP: Initializing watcher for selector baz=blah,foo=bar 01/14/23 04:10:21.558 +STEP: Creating stateful set ss in namespace statefulset-7256 01/14/23 04:10:21.562 +STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7256 01/14/23 04:10:21.567 +Jan 14 04:10:21.569: INFO: Found 0 stateful pods, waiting for 1 +Jan 14 04:10:31.575: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod 01/14/23 04:10:31.575 +Jan 14 04:10:31.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=statefulset-7256 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Jan 14 04:10:31.695: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Jan 14 04:10:31.695: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Jan 14 04:10:31.695: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Jan 14 04:10:31.699: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true +Jan 14 04:10:41.704: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Jan 14 04:10:41.705: INFO: Waiting for statefulset status.replicas updated to 0 +Jan 14 04:10:41.720: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999755s +Jan 14 04:10:42.724: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.994948285s +Jan 14 04:10:43.728: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.990953108s +Jan 14 04:10:44.732: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.986792504s +Jan 14 04:10:45.737: INFO: Verifying 
statefulset ss doesn't scale past 1 for another 5.982382553s +Jan 14 04:10:46.741: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.978041852s +Jan 14 04:10:47.745: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.973644836s +Jan 14 04:10:48.750: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.969407419s +Jan 14 04:10:49.754: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.965138235s +Jan 14 04:10:50.759: INFO: Verifying statefulset ss doesn't scale past 1 for another 960.08268ms +STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7256 01/14/23 04:10:51.759 +Jan 14 04:10:51.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=statefulset-7256 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Jan 14 04:10:51.880: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Jan 14 04:10:51.880: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Jan 14 04:10:51.880: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Jan 14 04:10:51.884: INFO: Found 1 stateful pods, waiting for 3 +Jan 14 04:11:01.890: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +Jan 14 04:11:01.890: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true +Jan 14 04:11:01.890: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Verifying that stateful set ss was scaled up in order 01/14/23 04:11:01.89 +STEP: Scale down will halt with unhealthy stateful pod 01/14/23 04:11:01.89 +Jan 14 04:11:01.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=statefulset-7256 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Jan 14 04:11:02.011: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Jan 14 04:11:02.011: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Jan 14 04:11:02.011: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Jan 14 04:11:02.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=statefulset-7256 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Jan 14 04:11:02.120: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Jan 14 04:11:02.120: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Jan 14 04:11:02.120: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Jan 14 04:11:02.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=statefulset-7256 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Jan 14 04:11:02.240: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Jan 14 04:11:02.240: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Jan 14 04:11:02.240: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Jan 14 
04:11:02.240: INFO: Waiting for statefulset status.replicas updated to 0 +Jan 14 04:11:02.244: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 +Jan 14 04:11:12.254: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Jan 14 04:11:12.254: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false +Jan 14 04:11:12.254: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false +Jan 14 04:11:12.266: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999813s +Jan 14 04:11:13.277: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996597864s +Jan 14 04:11:14.282: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.98538807s +Jan 14 04:11:15.286: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.980760423s +Jan 14 04:11:16.290: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.97660909s +Jan 14 04:11:17.295: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.972157594s +Jan 14 04:11:18.299: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.967598144s +Jan 14 04:11:19.304: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.963141967s +Jan 14 04:11:20.309: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.957793346s +Jan 14 04:11:21.314: INFO: Verifying statefulset ss doesn't scale past 3 for another 952.710929ms +STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-7256 01/14/23 04:11:22.315 +Jan 14 04:11:22.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=statefulset-7256 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Jan 14 04:11:22.435: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Jan 14 04:11:22.435: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Jan 14 04:11:22.436: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Jan 14 04:11:22.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=statefulset-7256 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Jan 14 04:11:22.544: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Jan 14 04:11:22.544: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Jan 14 04:11:22.544: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Jan 14 04:11:22.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=statefulset-7256 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Jan 14 04:11:22.655: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Jan 14 04:11:22.655: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Jan 14 04:11:22.655: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Jan 14 04:11:22.655: INFO: Scaling statefulset ss to 0 +STEP: Verifying that stateful set ss was scaled down in reverse order 01/14/23 04:11:32.67 +[AfterEach] Basic StatefulSet functionality 
[StatefulSetBasic] + test/e2e/apps/statefulset.go:124 +Jan 14 04:11:32.670: INFO: Deleting all statefulset in ns statefulset-7256 +Jan 14 04:11:32.672: INFO: Scaling statefulset ss to 0 +Jan 14 04:11:32.681: INFO: Waiting for statefulset status.replicas updated to 0 +Jan 14 04:11:32.683: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + test/e2e/framework/node/init/init.go:32 +Jan 14 04:11:32.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] StatefulSet + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] StatefulSet + tear down framework | framework.go:193 +STEP: Destroying namespace "statefulset-7256" for this suite. 01/14/23 04:11:32.697 +------------------------------ +• [SLOW TEST] [71.171 seconds] +[sig-apps] StatefulSet +test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:103 + Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] + test/e2e/apps/statefulset.go:587 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] StatefulSet + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:10:21.532 + Jan 14 04:10:21.532: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename statefulset 01/14/23 04:10:21.532 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:10:21.55 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:10:21.552 + [BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:98 + [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:113 + STEP: Creating service test in namespace statefulset-7256 01/14/23 04:10:21.554 + [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] + test/e2e/apps/statefulset.go:587 + STEP: Initializing watcher for selector baz=blah,foo=bar 01/14/23 04:10:21.558 + STEP: Creating stateful set ss in namespace statefulset-7256 01/14/23 04:10:21.562 + STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7256 01/14/23 04:10:21.567 + Jan 14 04:10:21.569: INFO: Found 0 stateful pods, waiting for 1 + Jan 14 04:10:31.575: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true + STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod 01/14/23 04:10:31.575 + Jan 14 04:10:31.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=statefulset-7256 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' + Jan 14 04:10:31.695: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" + Jan 14 04:10:31.695: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" + Jan 14 04:10:31.695: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + + Jan 14 04:10:31.699: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true + Jan 14 04:10:41.704: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false + 
Jan 14 04:10:41.705: INFO: Waiting for statefulset status.replicas updated to 0 + Jan 14 04:10:41.720: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999755s + Jan 14 04:10:42.724: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.994948285s + Jan 14 04:10:43.728: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.990953108s + Jan 14 04:10:44.732: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.986792504s + Jan 14 04:10:45.737: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.982382553s + Jan 14 04:10:46.741: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.978041852s + Jan 14 04:10:47.745: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.973644836s + Jan 14 04:10:48.750: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.969407419s + Jan 14 04:10:49.754: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.965138235s + Jan 14 04:10:50.759: INFO: Verifying statefulset ss doesn't scale past 1 for another 960.08268ms + STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7256 01/14/23 04:10:51.759 + Jan 14 04:10:51.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=statefulset-7256 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' + Jan 14 04:10:51.880: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" + Jan 14 04:10:51.880: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" + Jan 14 04:10:51.880: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + + Jan 14 04:10:51.884: INFO: Found 1 stateful pods, waiting for 3 + Jan 14 04:11:01.890: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true + Jan 14 04:11:01.890: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true + Jan 14 04:11:01.890: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true + STEP: Verifying that stateful set ss was scaled up in order 01/14/23 04:11:01.89 + STEP: Scale down will halt with unhealthy stateful pod 01/14/23 04:11:01.89 + Jan 14 04:11:01.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=statefulset-7256 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' + Jan 14 04:11:02.011: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" + Jan 14 04:11:02.011: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" + Jan 14 04:11:02.011: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + + Jan 14 04:11:02.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=statefulset-7256 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' + Jan 14 04:11:02.120: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" + Jan 14 04:11:02.120: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" + Jan 14 04:11:02.120: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + + Jan 14 04:11:02.120: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=statefulset-7256 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' + Jan 14 04:11:02.240: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" + Jan 14 04:11:02.240: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" + Jan 14 04:11:02.240: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + + Jan 14 04:11:02.240: INFO: Waiting for statefulset status.replicas updated to 0 + Jan 14 04:11:02.244: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 + Jan 14 04:11:12.254: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false + Jan 14 04:11:12.254: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false + Jan 14 04:11:12.254: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false + Jan 14 04:11:12.266: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999813s + Jan 14 04:11:13.277: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996597864s + Jan 14 04:11:14.282: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.98538807s + Jan 14 04:11:15.286: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.980760423s + Jan 14 04:11:16.290: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.97660909s + Jan 14 04:11:17.295: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.972157594s + Jan 14 04:11:18.299: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.967598144s + Jan 14 04:11:19.304: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.963141967s + Jan 14 04:11:20.309: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.957793346s + Jan 14 04:11:21.314: INFO: Verifying statefulset ss doesn't scale past 3 for another 952.710929ms + STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-7256 01/14/23 04:11:22.315 + Jan 14 04:11:22.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=statefulset-7256 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' + Jan 14 04:11:22.435: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" + Jan 14 04:11:22.435: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" + Jan 14 04:11:22.436: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + + Jan 14 04:11:22.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=statefulset-7256 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' + Jan 14 04:11:22.544: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" + Jan 14 04:11:22.544: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" + Jan 14 04:11:22.544: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + + Jan 14 04:11:22.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=statefulset-7256 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true' + Jan 14 04:11:22.655: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" + Jan 14 04:11:22.655: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" + Jan 14 04:11:22.655: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + + Jan 14 04:11:22.655: INFO: Scaling statefulset ss to 0 + STEP: Verifying that stateful set ss was scaled down in reverse order 01/14/23 04:11:32.67 + [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:124 + Jan 14 04:11:32.670: INFO: Deleting all statefulset in ns statefulset-7256 + Jan 14 04:11:32.672: INFO: Scaling statefulset ss to 0 + Jan 14 04:11:32.681: INFO: Waiting for statefulset status.replicas updated to 0 + Jan 14 04:11:32.683: INFO: Deleting statefulset ss + [AfterEach] [sig-apps] StatefulSet + test/e2e/framework/node/init/init.go:32 + Jan 14 04:11:32.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] StatefulSet + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] StatefulSet + tear down framework | framework.go:193 + STEP: Destroying namespace "statefulset-7256" for this suite. 01/14/23 04:11:32.697 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's cpu limit [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:193 +[BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:11:32.704 +Jan 14 04:11:32.704: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename projected 01/14/23 04:11:32.705 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:11:32.719 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:11:32.722 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 +[It] should provide container's cpu limit [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:193 +STEP: Creating a pod to test downward API volume plugin 01/14/23 04:11:32.724 +Jan 14 04:11:32.733: INFO: Waiting up to 5m0s for pod "downwardapi-volume-88ea84f1-096a-4a26-871d-c6156f3784d1" in namespace "projected-7759" to be "Succeeded or Failed" +Jan 14 04:11:32.736: INFO: Pod "downwardapi-volume-88ea84f1-096a-4a26-871d-c6156f3784d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.972472ms +Jan 14 04:11:34.740: INFO: Pod "downwardapi-volume-88ea84f1-096a-4a26-871d-c6156f3784d1": Phase="Running", Reason="", readiness=false. Elapsed: 2.007278594s +Jan 14 04:11:36.741: INFO: Pod "downwardapi-volume-88ea84f1-096a-4a26-871d-c6156f3784d1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008097627s +STEP: Saw pod success 01/14/23 04:11:36.741 +Jan 14 04:11:36.741: INFO: Pod "downwardapi-volume-88ea84f1-096a-4a26-871d-c6156f3784d1" satisfied condition "Succeeded or Failed" +Jan 14 04:11:36.744: INFO: Trying to get logs from node 10.0.1.106 pod downwardapi-volume-88ea84f1-096a-4a26-871d-c6156f3784d1 container client-container: +STEP: delete the pod 01/14/23 04:11:36.755 +Jan 14 04:11:36.767: INFO: Waiting for pod downwardapi-volume-88ea84f1-096a-4a26-871d-c6156f3784d1 to disappear +Jan 14 04:11:36.770: INFO: Pod downwardapi-volume-88ea84f1-096a-4a26-871d-c6156f3784d1 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/node/init/init.go:32 +Jan 14 04:11:36.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-7759" for this suite. 01/14/23 04:11:36.774 +------------------------------ +• [4.077 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should provide container's cpu limit [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:193 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:11:32.704 + Jan 14 04:11:32.704: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename projected 01/14/23 04:11:32.705 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:11:32.719 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:11:32.722 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 + [It] should provide container's cpu limit [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:193 + STEP: Creating a pod to test downward API volume plugin 01/14/23 04:11:32.724 + Jan 14 04:11:32.733: INFO: Waiting up to 5m0s for pod "downwardapi-volume-88ea84f1-096a-4a26-871d-c6156f3784d1" in namespace "projected-7759" to be "Succeeded or Failed" + Jan 14 04:11:32.736: INFO: Pod "downwardapi-volume-88ea84f1-096a-4a26-871d-c6156f3784d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.972472ms + Jan 14 04:11:34.740: INFO: Pod "downwardapi-volume-88ea84f1-096a-4a26-871d-c6156f3784d1": Phase="Running", Reason="", readiness=false. Elapsed: 2.007278594s + Jan 14 04:11:36.741: INFO: Pod "downwardapi-volume-88ea84f1-096a-4a26-871d-c6156f3784d1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008097627s + STEP: Saw pod success 01/14/23 04:11:36.741 + Jan 14 04:11:36.741: INFO: Pod "downwardapi-volume-88ea84f1-096a-4a26-871d-c6156f3784d1" satisfied condition "Succeeded or Failed" + Jan 14 04:11:36.744: INFO: Trying to get logs from node 10.0.1.106 pod downwardapi-volume-88ea84f1-096a-4a26-871d-c6156f3784d1 container client-container: + STEP: delete the pod 01/14/23 04:11:36.755 + Jan 14 04:11:36.767: INFO: Waiting for pod downwardapi-volume-88ea84f1-096a-4a26-871d-c6156f3784d1 to disappear + Jan 14 04:11:36.770: INFO: Pod downwardapi-volume-88ea84f1-096a-4a26-871d-c6156f3784d1 no longer exists + [AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/node/init/init.go:32 + Jan 14 04:11:36.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-7759" for this suite. 01/14/23 04:11:36.774 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Discovery + should validate PreferredVersion for each APIGroup [Conformance] + test/e2e/apimachinery/discovery.go:122 +[BeforeEach] [sig-api-machinery] Discovery + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:11:36.782 +Jan 14 04:11:36.782: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename discovery 01/14/23 04:11:36.783 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:11:36.799 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:11:36.801 +[BeforeEach] [sig-api-machinery] Discovery + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] Discovery + test/e2e/apimachinery/discovery.go:43 +STEP: Setting up server cert 01/14/23 04:11:36.804 +[It] should validate PreferredVersion for each APIGroup [Conformance] + test/e2e/apimachinery/discovery.go:122 +Jan 14 04:11:37.263: INFO: Checking APIGroup: apiregistration.k8s.io +Jan 14 04:11:37.264: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 +Jan 14 04:11:37.264: INFO: Versions found [{apiregistration.k8s.io/v1 v1}] +Jan 14 04:11:37.264: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 +Jan 14 04:11:37.264: INFO: Checking APIGroup: apps +Jan 14 04:11:37.265: INFO: PreferredVersion.GroupVersion: apps/v1 +Jan 14 04:11:37.265: INFO: Versions found [{apps/v1 v1}] +Jan 14 04:11:37.265: INFO: apps/v1 matches apps/v1 +Jan 14 04:11:37.265: INFO: Checking APIGroup: events.k8s.io +Jan 14 04:11:37.266: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 +Jan 14 04:11:37.266: INFO: Versions found [{events.k8s.io/v1 v1}] +Jan 14 04:11:37.266: INFO: events.k8s.io/v1 matches events.k8s.io/v1 +Jan 14 04:11:37.266: INFO: Checking APIGroup: authentication.k8s.io +Jan 14 04:11:37.267: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 +Jan 14 04:11:37.267: INFO: Versions found [{authentication.k8s.io/v1 v1}] +Jan 14 04:11:37.267: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 +Jan 14 04:11:37.267: INFO: Checking APIGroup: authorization.k8s.io +Jan 14 04:11:37.268: INFO: PreferredVersion.GroupVersion: 
authorization.k8s.io/v1 +Jan 14 04:11:37.268: INFO: Versions found [{authorization.k8s.io/v1 v1}] +Jan 14 04:11:37.268: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 +Jan 14 04:11:37.268: INFO: Checking APIGroup: autoscaling +Jan 14 04:11:37.269: INFO: PreferredVersion.GroupVersion: autoscaling/v2 +Jan 14 04:11:37.269: INFO: Versions found [{autoscaling/v2 v2} {autoscaling/v1 v1}] +Jan 14 04:11:37.269: INFO: autoscaling/v2 matches autoscaling/v2 +Jan 14 04:11:37.269: INFO: Checking APIGroup: batch +Jan 14 04:11:37.270: INFO: PreferredVersion.GroupVersion: batch/v1 +Jan 14 04:11:37.270: INFO: Versions found [{batch/v1 v1}] +Jan 14 04:11:37.270: INFO: batch/v1 matches batch/v1 +Jan 14 04:11:37.270: INFO: Checking APIGroup: certificates.k8s.io +Jan 14 04:11:37.271: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 +Jan 14 04:11:37.271: INFO: Versions found [{certificates.k8s.io/v1 v1}] +Jan 14 04:11:37.271: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 +Jan 14 04:11:37.271: INFO: Checking APIGroup: networking.k8s.io +Jan 14 04:11:37.271: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 +Jan 14 04:11:37.271: INFO: Versions found [{networking.k8s.io/v1 v1}] +Jan 14 04:11:37.271: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 +Jan 14 04:11:37.271: INFO: Checking APIGroup: policy +Jan 14 04:11:37.272: INFO: PreferredVersion.GroupVersion: policy/v1 +Jan 14 04:11:37.272: INFO: Versions found [{policy/v1 v1}] +Jan 14 04:11:37.272: INFO: policy/v1 matches policy/v1 +Jan 14 04:11:37.272: INFO: Checking APIGroup: rbac.authorization.k8s.io +Jan 14 04:11:37.273: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 +Jan 14 04:11:37.273: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1}] +Jan 14 04:11:37.273: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 +Jan 14 04:11:37.273: INFO: Checking APIGroup: storage.k8s.io +Jan 14 04:11:37.274: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 +Jan 14 04:11:37.274: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] +Jan 14 04:11:37.274: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 +Jan 14 04:11:37.274: INFO: Checking APIGroup: admissionregistration.k8s.io +Jan 14 04:11:37.275: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 +Jan 14 04:11:37.275: INFO: Versions found [{admissionregistration.k8s.io/v1 v1}] +Jan 14 04:11:37.275: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 +Jan 14 04:11:37.275: INFO: Checking APIGroup: apiextensions.k8s.io +Jan 14 04:11:37.275: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 +Jan 14 04:11:37.275: INFO: Versions found [{apiextensions.k8s.io/v1 v1}] +Jan 14 04:11:37.275: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 +Jan 14 04:11:37.275: INFO: Checking APIGroup: scheduling.k8s.io +Jan 14 04:11:37.276: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 +Jan 14 04:11:37.276: INFO: Versions found [{scheduling.k8s.io/v1 v1}] +Jan 14 04:11:37.276: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 +Jan 14 04:11:37.276: INFO: Checking APIGroup: coordination.k8s.io +Jan 14 04:11:37.277: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 +Jan 14 04:11:37.277: INFO: Versions found [{coordination.k8s.io/v1 v1}] +Jan 14 04:11:37.277: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 +Jan 14 04:11:37.277: INFO: Checking APIGroup: node.k8s.io +Jan 14 04:11:37.278: INFO: 
PreferredVersion.GroupVersion: node.k8s.io/v1 +Jan 14 04:11:37.278: INFO: Versions found [{node.k8s.io/v1 v1}] +Jan 14 04:11:37.278: INFO: node.k8s.io/v1 matches node.k8s.io/v1 +Jan 14 04:11:37.278: INFO: Checking APIGroup: discovery.k8s.io +Jan 14 04:11:37.279: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1 +Jan 14 04:11:37.279: INFO: Versions found [{discovery.k8s.io/v1 v1}] +Jan 14 04:11:37.279: INFO: discovery.k8s.io/v1 matches discovery.k8s.io/v1 +Jan 14 04:11:37.279: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io +Jan 14 04:11:37.280: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta3 +Jan 14 04:11:37.280: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta3 v1beta3} {flowcontrol.apiserver.k8s.io/v1beta2 v1beta2}] +Jan 14 04:11:37.280: INFO: flowcontrol.apiserver.k8s.io/v1beta3 matches flowcontrol.apiserver.k8s.io/v1beta3 +Jan 14 04:11:37.280: INFO: Checking APIGroup: cloud.tencent.com +Jan 14 04:11:37.280: INFO: PreferredVersion.GroupVersion: cloud.tencent.com/v1alpha1 +Jan 14 04:11:37.280: INFO: Versions found [{cloud.tencent.com/v1alpha1 v1alpha1}] +Jan 14 04:11:37.280: INFO: cloud.tencent.com/v1alpha1 matches cloud.tencent.com/v1alpha1 +Jan 14 04:11:37.280: INFO: Checking APIGroup: monitor.tencent.io +Jan 14 04:11:37.281: INFO: PreferredVersion.GroupVersion: monitor.tencent.io/v1alpha1 +Jan 14 04:11:37.281: INFO: Versions found [{monitor.tencent.io/v1alpha1 v1alpha1}] +Jan 14 04:11:37.281: INFO: monitor.tencent.io/v1alpha1 matches monitor.tencent.io/v1alpha1 +Jan 14 04:11:37.281: INFO: Checking APIGroup: networking.tke.cloud.tencent.com +Jan 14 04:11:37.282: INFO: PreferredVersion.GroupVersion: networking.tke.cloud.tencent.com/v1alpha1 +Jan 14 04:11:37.282: INFO: Versions found [{networking.tke.cloud.tencent.com/v1alpha1 v1alpha1}] +Jan 14 04:11:37.282: INFO: networking.tke.cloud.tencent.com/v1alpha1 matches networking.tke.cloud.tencent.com/v1alpha1 +Jan 14 04:11:37.282: INFO: Checking APIGroup: snapshot.storage.k8s.io +Jan 14 04:11:37.283: INFO: PreferredVersion.GroupVersion: snapshot.storage.k8s.io/v1beta1 +Jan 14 04:11:37.283: INFO: Versions found [{snapshot.storage.k8s.io/v1beta1 v1beta1}] +Jan 14 04:11:37.283: INFO: snapshot.storage.k8s.io/v1beta1 matches snapshot.storage.k8s.io/v1beta1 +Jan 14 04:11:37.283: INFO: Checking APIGroup: custom.metrics.k8s.io +Jan 14 04:11:37.283: INFO: PreferredVersion.GroupVersion: custom.metrics.k8s.io/v1beta1 +Jan 14 04:11:37.283: INFO: Versions found [{custom.metrics.k8s.io/v1beta1 v1beta1}] +Jan 14 04:11:37.283: INFO: custom.metrics.k8s.io/v1beta1 matches custom.metrics.k8s.io/v1beta1 +Jan 14 04:11:37.283: INFO: Checking APIGroup: metrics.k8s.io +Jan 14 04:11:37.284: INFO: PreferredVersion.GroupVersion: metrics.k8s.io/v1beta1 +Jan 14 04:11:37.284: INFO: Versions found [{metrics.k8s.io/v1beta1 v1beta1}] +Jan 14 04:11:37.284: INFO: metrics.k8s.io/v1beta1 matches metrics.k8s.io/v1beta1 +[AfterEach] [sig-api-machinery] Discovery + test/e2e/framework/node/init/init.go:32 +Jan 14 04:11:37.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Discovery + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] Discovery + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] Discovery + tear down framework | framework.go:193 +STEP: Destroying namespace "discovery-8756" for this suite. 
01/14/23 04:11:37.289 +------------------------------ +• [0.513 seconds] +[sig-api-machinery] Discovery +test/e2e/apimachinery/framework.go:23 + should validate PreferredVersion for each APIGroup [Conformance] + test/e2e/apimachinery/discovery.go:122 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Discovery + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:11:36.782 + Jan 14 04:11:36.782: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename discovery 01/14/23 04:11:36.783 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:11:36.799 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:11:36.801 + [BeforeEach] [sig-api-machinery] Discovery + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] Discovery + test/e2e/apimachinery/discovery.go:43 + STEP: Setting up server cert 01/14/23 04:11:36.804 + [It] should validate PreferredVersion for each APIGroup [Conformance] + test/e2e/apimachinery/discovery.go:122 + Jan 14 04:11:37.263: INFO: Checking APIGroup: apiregistration.k8s.io + Jan 14 04:11:37.264: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 + Jan 14 04:11:37.264: INFO: Versions found [{apiregistration.k8s.io/v1 v1}] + Jan 14 04:11:37.264: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 + Jan 14 04:11:37.264: INFO: Checking APIGroup: apps + Jan 14 04:11:37.265: INFO: PreferredVersion.GroupVersion: apps/v1 + Jan 14 04:11:37.265: INFO: Versions found [{apps/v1 v1}] + Jan 14 04:11:37.265: INFO: apps/v1 matches apps/v1 + Jan 14 04:11:37.265: INFO: Checking APIGroup: events.k8s.io + Jan 14 04:11:37.266: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 + Jan 14 04:11:37.266: INFO: Versions found [{events.k8s.io/v1 v1}] + Jan 14 04:11:37.266: INFO: events.k8s.io/v1 matches events.k8s.io/v1 + Jan 14 04:11:37.266: INFO: Checking APIGroup: authentication.k8s.io + Jan 14 04:11:37.267: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 + Jan 14 04:11:37.267: INFO: Versions found [{authentication.k8s.io/v1 v1}] + Jan 14 04:11:37.267: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 + Jan 14 04:11:37.267: INFO: Checking APIGroup: authorization.k8s.io + Jan 14 04:11:37.268: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 + Jan 14 04:11:37.268: INFO: Versions found [{authorization.k8s.io/v1 v1}] + Jan 14 04:11:37.268: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 + Jan 14 04:11:37.268: INFO: Checking APIGroup: autoscaling + Jan 14 04:11:37.269: INFO: PreferredVersion.GroupVersion: autoscaling/v2 + Jan 14 04:11:37.269: INFO: Versions found [{autoscaling/v2 v2} {autoscaling/v1 v1}] + Jan 14 04:11:37.269: INFO: autoscaling/v2 matches autoscaling/v2 + Jan 14 04:11:37.269: INFO: Checking APIGroup: batch + Jan 14 04:11:37.270: INFO: PreferredVersion.GroupVersion: batch/v1 + Jan 14 04:11:37.270: INFO: Versions found [{batch/v1 v1}] + Jan 14 04:11:37.270: INFO: batch/v1 matches batch/v1 + Jan 14 04:11:37.270: INFO: Checking APIGroup: certificates.k8s.io + Jan 14 04:11:37.271: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 + Jan 14 04:11:37.271: INFO: Versions found [{certificates.k8s.io/v1 v1}] + Jan 14 04:11:37.271: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 + Jan 14 04:11:37.271: INFO: Checking APIGroup: networking.k8s.io + Jan 14 04:11:37.271: INFO: PreferredVersion.GroupVersion: 
networking.k8s.io/v1 + Jan 14 04:11:37.271: INFO: Versions found [{networking.k8s.io/v1 v1}] + Jan 14 04:11:37.271: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 + Jan 14 04:11:37.271: INFO: Checking APIGroup: policy + Jan 14 04:11:37.272: INFO: PreferredVersion.GroupVersion: policy/v1 + Jan 14 04:11:37.272: INFO: Versions found [{policy/v1 v1}] + Jan 14 04:11:37.272: INFO: policy/v1 matches policy/v1 + Jan 14 04:11:37.272: INFO: Checking APIGroup: rbac.authorization.k8s.io + Jan 14 04:11:37.273: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 + Jan 14 04:11:37.273: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1}] + Jan 14 04:11:37.273: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 + Jan 14 04:11:37.273: INFO: Checking APIGroup: storage.k8s.io + Jan 14 04:11:37.274: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 + Jan 14 04:11:37.274: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] + Jan 14 04:11:37.274: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 + Jan 14 04:11:37.274: INFO: Checking APIGroup: admissionregistration.k8s.io + Jan 14 04:11:37.275: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 + Jan 14 04:11:37.275: INFO: Versions found [{admissionregistration.k8s.io/v1 v1}] + Jan 14 04:11:37.275: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 + Jan 14 04:11:37.275: INFO: Checking APIGroup: apiextensions.k8s.io + Jan 14 04:11:37.275: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 + Jan 14 04:11:37.275: INFO: Versions found [{apiextensions.k8s.io/v1 v1}] + Jan 14 04:11:37.275: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 + Jan 14 04:11:37.275: INFO: Checking APIGroup: scheduling.k8s.io + Jan 14 04:11:37.276: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 + Jan 14 04:11:37.276: INFO: Versions found [{scheduling.k8s.io/v1 v1}] + Jan 14 04:11:37.276: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 + Jan 14 04:11:37.276: INFO: Checking APIGroup: coordination.k8s.io + Jan 14 04:11:37.277: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 + Jan 14 04:11:37.277: INFO: Versions found [{coordination.k8s.io/v1 v1}] + Jan 14 04:11:37.277: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 + Jan 14 04:11:37.277: INFO: Checking APIGroup: node.k8s.io + Jan 14 04:11:37.278: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 + Jan 14 04:11:37.278: INFO: Versions found [{node.k8s.io/v1 v1}] + Jan 14 04:11:37.278: INFO: node.k8s.io/v1 matches node.k8s.io/v1 + Jan 14 04:11:37.278: INFO: Checking APIGroup: discovery.k8s.io + Jan 14 04:11:37.279: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1 + Jan 14 04:11:37.279: INFO: Versions found [{discovery.k8s.io/v1 v1}] + Jan 14 04:11:37.279: INFO: discovery.k8s.io/v1 matches discovery.k8s.io/v1 + Jan 14 04:11:37.279: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io + Jan 14 04:11:37.280: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta3 + Jan 14 04:11:37.280: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta3 v1beta3} {flowcontrol.apiserver.k8s.io/v1beta2 v1beta2}] + Jan 14 04:11:37.280: INFO: flowcontrol.apiserver.k8s.io/v1beta3 matches flowcontrol.apiserver.k8s.io/v1beta3 + Jan 14 04:11:37.280: INFO: Checking APIGroup: cloud.tencent.com + Jan 14 04:11:37.280: INFO: PreferredVersion.GroupVersion: cloud.tencent.com/v1alpha1 + Jan 14 04:11:37.280: INFO: Versions found [{cloud.tencent.com/v1alpha1 
v1alpha1}] + Jan 14 04:11:37.280: INFO: cloud.tencent.com/v1alpha1 matches cloud.tencent.com/v1alpha1 + Jan 14 04:11:37.280: INFO: Checking APIGroup: monitor.tencent.io + Jan 14 04:11:37.281: INFO: PreferredVersion.GroupVersion: monitor.tencent.io/v1alpha1 + Jan 14 04:11:37.281: INFO: Versions found [{monitor.tencent.io/v1alpha1 v1alpha1}] + Jan 14 04:11:37.281: INFO: monitor.tencent.io/v1alpha1 matches monitor.tencent.io/v1alpha1 + Jan 14 04:11:37.281: INFO: Checking APIGroup: networking.tke.cloud.tencent.com + Jan 14 04:11:37.282: INFO: PreferredVersion.GroupVersion: networking.tke.cloud.tencent.com/v1alpha1 + Jan 14 04:11:37.282: INFO: Versions found [{networking.tke.cloud.tencent.com/v1alpha1 v1alpha1}] + Jan 14 04:11:37.282: INFO: networking.tke.cloud.tencent.com/v1alpha1 matches networking.tke.cloud.tencent.com/v1alpha1 + Jan 14 04:11:37.282: INFO: Checking APIGroup: snapshot.storage.k8s.io + Jan 14 04:11:37.283: INFO: PreferredVersion.GroupVersion: snapshot.storage.k8s.io/v1beta1 + Jan 14 04:11:37.283: INFO: Versions found [{snapshot.storage.k8s.io/v1beta1 v1beta1}] + Jan 14 04:11:37.283: INFO: snapshot.storage.k8s.io/v1beta1 matches snapshot.storage.k8s.io/v1beta1 + Jan 14 04:11:37.283: INFO: Checking APIGroup: custom.metrics.k8s.io + Jan 14 04:11:37.283: INFO: PreferredVersion.GroupVersion: custom.metrics.k8s.io/v1beta1 + Jan 14 04:11:37.283: INFO: Versions found [{custom.metrics.k8s.io/v1beta1 v1beta1}] + Jan 14 04:11:37.283: INFO: custom.metrics.k8s.io/v1beta1 matches custom.metrics.k8s.io/v1beta1 + Jan 14 04:11:37.283: INFO: Checking APIGroup: metrics.k8s.io + Jan 14 04:11:37.284: INFO: PreferredVersion.GroupVersion: metrics.k8s.io/v1beta1 + Jan 14 04:11:37.284: INFO: Versions found [{metrics.k8s.io/v1beta1 v1beta1}] + Jan 14 04:11:37.284: INFO: metrics.k8s.io/v1beta1 matches metrics.k8s.io/v1beta1 + [AfterEach] [sig-api-machinery] Discovery + test/e2e/framework/node/init/init.go:32 + Jan 14 04:11:37.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Discovery + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] Discovery + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] Discovery + tear down framework | framework.go:193 + STEP: Destroying namespace "discovery-8756" for this suite. 
01/14/23 04:11:37.289 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:124 +[BeforeEach] [sig-storage] Projected configMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:11:37.296 +Jan 14 04:11:37.296: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename projected 01/14/23 04:11:37.297 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:11:37.311 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:11:37.313 +[BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:31 +[It] updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:124 +STEP: Creating projection with configMap that has name projected-configmap-test-upd-742e0d4d-a91d-44db-a0d2-56ef189b04f6 01/14/23 04:11:37.32 +STEP: Creating the pod 01/14/23 04:11:37.323 +Jan 14 04:11:37.332: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a5efe43a-89c1-45de-bbe6-7f51e965b758" in namespace "projected-8652" to be "running and ready" +Jan 14 04:11:37.335: INFO: Pod "pod-projected-configmaps-a5efe43a-89c1-45de-bbe6-7f51e965b758": Phase="Pending", Reason="", readiness=false. Elapsed: 2.719048ms +Jan 14 04:11:37.335: INFO: The phase of Pod pod-projected-configmaps-a5efe43a-89c1-45de-bbe6-7f51e965b758 is Pending, waiting for it to be Running (with Ready = true) +Jan 14 04:11:39.340: INFO: Pod "pod-projected-configmaps-a5efe43a-89c1-45de-bbe6-7f51e965b758": Phase="Running", Reason="", readiness=true. Elapsed: 2.007486941s +Jan 14 04:11:39.340: INFO: The phase of Pod pod-projected-configmaps-a5efe43a-89c1-45de-bbe6-7f51e965b758 is Running (Ready = true) +Jan 14 04:11:39.340: INFO: Pod "pod-projected-configmaps-a5efe43a-89c1-45de-bbe6-7f51e965b758" satisfied condition "running and ready" +STEP: Updating configmap projected-configmap-test-upd-742e0d4d-a91d-44db-a0d2-56ef189b04f6 01/14/23 04:11:39.348 +STEP: waiting to observe update in volume 01/14/23 04:11:39.353 +[AfterEach] [sig-storage] Projected configMap + test/e2e/framework/node/init/init.go:32 +Jan 14 04:11:41.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected configMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected configMap + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-8652" for this suite. 
01/14/23 04:11:41.372 +------------------------------ +• [4.084 seconds] +[sig-storage] Projected configMap +test/e2e/common/storage/framework.go:23 + updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:124 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected configMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:11:37.296 + Jan 14 04:11:37.296: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename projected 01/14/23 04:11:37.297 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:11:37.311 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:11:37.313 + [BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:31 + [It] updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:124 + STEP: Creating projection with configMap that has name projected-configmap-test-upd-742e0d4d-a91d-44db-a0d2-56ef189b04f6 01/14/23 04:11:37.32 + STEP: Creating the pod 01/14/23 04:11:37.323 + Jan 14 04:11:37.332: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a5efe43a-89c1-45de-bbe6-7f51e965b758" in namespace "projected-8652" to be "running and ready" + Jan 14 04:11:37.335: INFO: Pod "pod-projected-configmaps-a5efe43a-89c1-45de-bbe6-7f51e965b758": Phase="Pending", Reason="", readiness=false. Elapsed: 2.719048ms + Jan 14 04:11:37.335: INFO: The phase of Pod pod-projected-configmaps-a5efe43a-89c1-45de-bbe6-7f51e965b758 is Pending, waiting for it to be Running (with Ready = true) + Jan 14 04:11:39.340: INFO: Pod "pod-projected-configmaps-a5efe43a-89c1-45de-bbe6-7f51e965b758": Phase="Running", Reason="", readiness=true. Elapsed: 2.007486941s + Jan 14 04:11:39.340: INFO: The phase of Pod pod-projected-configmaps-a5efe43a-89c1-45de-bbe6-7f51e965b758 is Running (Ready = true) + Jan 14 04:11:39.340: INFO: Pod "pod-projected-configmaps-a5efe43a-89c1-45de-bbe6-7f51e965b758" satisfied condition "running and ready" + STEP: Updating configmap projected-configmap-test-upd-742e0d4d-a91d-44db-a0d2-56ef189b04f6 01/14/23 04:11:39.348 + STEP: waiting to observe update in volume 01/14/23 04:11:39.353 + [AfterEach] [sig-storage] Projected configMap + test/e2e/framework/node/init/init.go:32 + Jan 14 04:11:41.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected configMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected configMap + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-8652" for this suite. 
01/14/23 04:11:41.372 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be immutable if `immutable` field is set [Conformance] + test/e2e/common/storage/configmap_volume.go:504 +[BeforeEach] [sig-storage] ConfigMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:11:41.38 +Jan 14 04:11:41.380: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename configmap 01/14/23 04:11:41.381 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:11:41.395 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:11:41.397 +[BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:31 +[It] should be immutable if `immutable` field is set [Conformance] + test/e2e/common/storage/configmap_volume.go:504 +[AfterEach] [sig-storage] ConfigMap + test/e2e/framework/node/init/init.go:32 +Jan 14 04:11:41.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] ConfigMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] ConfigMap + tear down framework | framework.go:193 +STEP: Destroying namespace "configmap-2957" for this suite. 01/14/23 04:11:41.438 +------------------------------ +• [0.065 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + should be immutable if `immutable` field is set [Conformance] + test/e2e/common/storage/configmap_volume.go:504 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] ConfigMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:11:41.38 + Jan 14 04:11:41.380: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename configmap 01/14/23 04:11:41.381 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:11:41.395 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:11:41.397 + [BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:31 + [It] should be immutable if `immutable` field is set [Conformance] + test/e2e/common/storage/configmap_volume.go:504 + [AfterEach] [sig-storage] ConfigMap + test/e2e/framework/node/init/init.go:32 + Jan 14 04:11:41.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] ConfigMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] ConfigMap + tear down framework | framework.go:193 + STEP: Destroying namespace "configmap-2957" for this suite. 
01/14/23 04:11:41.438 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + patching/updating a validating webhook should work [Conformance] + test/e2e/apimachinery/webhook.go:413 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:11:41.445 +Jan 14 04:11:41.446: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename webhook 01/14/23 04:11:41.446 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:11:41.46 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:11:41.463 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 01/14/23 04:11:41.476 +STEP: Create role binding to let webhook read extension-apiserver-authentication 01/14/23 04:11:42.132 +STEP: Deploying the webhook pod 01/14/23 04:11:42.139 +STEP: Wait for the deployment to be ready 01/14/23 04:11:42.151 +Jan 14 04:11:42.160: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 01/14/23 04:11:44.172 +STEP: Verifying the service has paired with the endpoint 01/14/23 04:11:44.182 +Jan 14 04:11:45.182: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] patching/updating a validating webhook should work [Conformance] + test/e2e/apimachinery/webhook.go:413 +STEP: Creating a validating webhook configuration 01/14/23 04:11:45.186 +STEP: Creating a configMap that does not comply to the validation webhook rules 01/14/23 04:11:45.2 +STEP: Updating a validating webhook configuration's rules to not include the create operation 01/14/23 04:11:45.206 +STEP: Creating a configMap that does not comply to the validation webhook rules 01/14/23 04:11:45.214 +STEP: Patching a validating webhook configuration's rules to include the create operation 01/14/23 04:11:45.224 +STEP: Creating a configMap that does not comply to the validation webhook rules 01/14/23 04:11:45.23 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Jan 14 04:11:45.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "webhook-5284" for this suite. 01/14/23 04:11:45.274 +STEP: Destroying namespace "webhook-5284-markers" for this suite. 
01/14/23 04:11:45.287 +------------------------------ +• [3.849 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + patching/updating a validating webhook should work [Conformance] + test/e2e/apimachinery/webhook.go:413 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:11:41.445 + Jan 14 04:11:41.446: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename webhook 01/14/23 04:11:41.446 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:11:41.46 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:11:41.463 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 01/14/23 04:11:41.476 + STEP: Create role binding to let webhook read extension-apiserver-authentication 01/14/23 04:11:42.132 + STEP: Deploying the webhook pod 01/14/23 04:11:42.139 + STEP: Wait for the deployment to be ready 01/14/23 04:11:42.151 + Jan 14 04:11:42.160: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 01/14/23 04:11:44.172 + STEP: Verifying the service has paired with the endpoint 01/14/23 04:11:44.182 + Jan 14 04:11:45.182: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] patching/updating a validating webhook should work [Conformance] + test/e2e/apimachinery/webhook.go:413 + STEP: Creating a validating webhook configuration 01/14/23 04:11:45.186 + STEP: Creating a configMap that does not comply to the validation webhook rules 01/14/23 04:11:45.2 + STEP: Updating a validating webhook configuration's rules to not include the create operation 01/14/23 04:11:45.206 + STEP: Creating a configMap that does not comply to the validation webhook rules 01/14/23 04:11:45.214 + STEP: Patching a validating webhook configuration's rules to include the create operation 01/14/23 04:11:45.224 + STEP: Creating a configMap that does not comply to the validation webhook rules 01/14/23 04:11:45.23 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Jan 14 04:11:45.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "webhook-5284" for this suite. 01/14/23 04:11:45.274 + STEP: Destroying namespace "webhook-5284-markers" for this suite. 
01/14/23 04:11:45.287 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-network] Services + should complete a service status lifecycle [Conformance] + test/e2e/network/service.go:3428 +[BeforeEach] [sig-network] Services + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:11:45.294 +Jan 14 04:11:45.295: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename services 01/14/23 04:11:45.295 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:11:45.31 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:11:45.313 +[BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should complete a service status lifecycle [Conformance] + test/e2e/network/service.go:3428 +STEP: creating a Service 01/14/23 04:11:45.318 +STEP: watching for the Service to be added 01/14/23 04:11:45.328 +Jan 14 04:11:45.329: INFO: Found Service test-service-bz9s7 in namespace services-3727 with labels: map[test-service-static:true] & ports [{http TCP 80 {0 80 } 0}] +Jan 14 04:11:45.329: INFO: Service test-service-bz9s7 created +STEP: Getting /status 01/14/23 04:11:45.329 +Jan 14 04:11:45.332: INFO: Service test-service-bz9s7 has LoadBalancer: {[]} +STEP: patching the ServiceStatus 01/14/23 04:11:45.332 +STEP: watching for the Service to be patched 01/14/23 04:11:45.337 +Jan 14 04:11:45.339: INFO: observed Service test-service-bz9s7 in namespace services-3727 with annotations: map[] & LoadBalancer: {[]} +Jan 14 04:11:45.339: INFO: Found Service test-service-bz9s7 in namespace services-3727 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]} +Jan 14 04:11:45.339: INFO: Service test-service-bz9s7 has service status patched +STEP: updating the ServiceStatus 01/14/23 04:11:45.339 +Jan 14 04:11:45.346: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the Service to be updated 01/14/23 04:11:45.346 +Jan 14 04:11:45.348: INFO: Observed Service test-service-bz9s7 in namespace services-3727 with annotations: map[] & Conditions: {[]} +Jan 14 04:11:45.348: INFO: Observed event: &Service{ObjectMeta:{test-service-bz9s7 services-3727 17a7d871-e766-4585-8d44-8818f930de9c 436540 0 2023-01-14 04:11:45 +0000 UTC map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2023-01-14 04:11:45 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}}} } {e2e.test Update v1 2023-01-14 04:11:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}} status}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 
},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.55.253.115,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.55.253.115],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},} +Jan 14 04:11:45.348: INFO: Found Service test-service-bz9s7 in namespace services-3727 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Jan 14 04:11:45.348: INFO: Service test-service-bz9s7 has service status updated +STEP: patching the service 01/14/23 04:11:45.348 +STEP: watching for the Service to be patched 01/14/23 04:11:45.358 +Jan 14 04:11:45.360: INFO: observed Service test-service-bz9s7 in namespace services-3727 with labels: map[test-service-static:true] +Jan 14 04:11:45.360: INFO: observed Service test-service-bz9s7 in namespace services-3727 with labels: map[test-service-static:true] +Jan 14 04:11:45.360: INFO: observed Service test-service-bz9s7 in namespace services-3727 with labels: map[test-service-static:true] +Jan 14 04:11:45.360: INFO: Found Service test-service-bz9s7 in namespace services-3727 with labels: map[test-service:patched test-service-static:true] +Jan 14 04:11:45.360: INFO: Service test-service-bz9s7 patched +STEP: deleting the service 01/14/23 04:11:45.36 +STEP: watching for the Service to be deleted 01/14/23 04:11:45.371 +Jan 14 04:11:45.372: INFO: Observed event: ADDED +Jan 14 04:11:45.372: INFO: Observed event: MODIFIED +Jan 14 04:11:45.372: INFO: Observed event: MODIFIED +Jan 14 04:11:45.373: INFO: Observed event: MODIFIED +Jan 14 04:11:45.373: INFO: Found Service test-service-bz9s7 in namespace services-3727 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true] +Jan 14 04:11:45.373: INFO: Service test-service-bz9s7 deleted +[AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 +Jan 14 04:11:45.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 +STEP: Destroying namespace "services-3727" for this suite. 
01/14/23 04:11:45.377 +------------------------------ +• [0.089 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should complete a service status lifecycle [Conformance] + test/e2e/network/service.go:3428 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:11:45.294 + Jan 14 04:11:45.295: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename services 01/14/23 04:11:45.295 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:11:45.31 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:11:45.313 + [BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should complete a service status lifecycle [Conformance] + test/e2e/network/service.go:3428 + STEP: creating a Service 01/14/23 04:11:45.318 + STEP: watching for the Service to be added 01/14/23 04:11:45.328 + Jan 14 04:11:45.329: INFO: Found Service test-service-bz9s7 in namespace services-3727 with labels: map[test-service-static:true] & ports [{http TCP 80 {0 80 } 0}] + Jan 14 04:11:45.329: INFO: Service test-service-bz9s7 created + STEP: Getting /status 01/14/23 04:11:45.329 + Jan 14 04:11:45.332: INFO: Service test-service-bz9s7 has LoadBalancer: {[]} + STEP: patching the ServiceStatus 01/14/23 04:11:45.332 + STEP: watching for the Service to be patched 01/14/23 04:11:45.337 + Jan 14 04:11:45.339: INFO: observed Service test-service-bz9s7 in namespace services-3727 with annotations: map[] & LoadBalancer: {[]} + Jan 14 04:11:45.339: INFO: Found Service test-service-bz9s7 in namespace services-3727 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]} + Jan 14 04:11:45.339: INFO: Service test-service-bz9s7 has service status patched + STEP: updating the ServiceStatus 01/14/23 04:11:45.339 + Jan 14 04:11:45.346: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} + STEP: watching for the Service to be updated 01/14/23 04:11:45.346 + Jan 14 04:11:45.348: INFO: Observed Service test-service-bz9s7 in namespace services-3727 with annotations: map[] & Conditions: {[]} + Jan 14 04:11:45.348: INFO: Observed event: &Service{ObjectMeta:{test-service-bz9s7 services-3727 17a7d871-e766-4585-8d44-8818f930de9c 436540 0 2023-01-14 04:11:45 +0000 UTC map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2023-01-14 04:11:45 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}}} } {e2e.test Update v1 2023-01-14 04:11:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}} status}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 
},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.55.253.115,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.55.253.115],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},} + Jan 14 04:11:45.348: INFO: Found Service test-service-bz9s7 in namespace services-3727 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] + Jan 14 04:11:45.348: INFO: Service test-service-bz9s7 has service status updated + STEP: patching the service 01/14/23 04:11:45.348 + STEP: watching for the Service to be patched 01/14/23 04:11:45.358 + Jan 14 04:11:45.360: INFO: observed Service test-service-bz9s7 in namespace services-3727 with labels: map[test-service-static:true] + Jan 14 04:11:45.360: INFO: observed Service test-service-bz9s7 in namespace services-3727 with labels: map[test-service-static:true] + Jan 14 04:11:45.360: INFO: observed Service test-service-bz9s7 in namespace services-3727 with labels: map[test-service-static:true] + Jan 14 04:11:45.360: INFO: Found Service test-service-bz9s7 in namespace services-3727 with labels: map[test-service:patched test-service-static:true] + Jan 14 04:11:45.360: INFO: Service test-service-bz9s7 patched + STEP: deleting the service 01/14/23 04:11:45.36 + STEP: watching for the Service to be deleted 01/14/23 04:11:45.371 + Jan 14 04:11:45.372: INFO: Observed event: ADDED + Jan 14 04:11:45.372: INFO: Observed event: MODIFIED + Jan 14 04:11:45.372: INFO: Observed event: MODIFIED + Jan 14 04:11:45.373: INFO: Observed event: MODIFIED + Jan 14 04:11:45.373: INFO: Found Service test-service-bz9s7 in namespace services-3727 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true] + Jan 14 04:11:45.373: INFO: Service test-service-bz9s7 deleted + [AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 + Jan 14 04:11:45.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 + STEP: Destroying namespace "services-3727" for this suite. 
01/14/23 04:11:45.377 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute poststart exec hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:134 +[BeforeEach] [sig-node] Container Lifecycle Hook + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:11:45.385 +Jan 14 04:11:45.385: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename container-lifecycle-hook 01/14/23 04:11:45.386 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:11:45.406 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:11:45.408 +[BeforeEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:77 +STEP: create the container to handle the HTTPGet hook request. 01/14/23 04:11:45.416 +Jan 14 04:11:45.425: INFO: Waiting up to 5m0s for pod "pod-handle-http-request" in namespace "container-lifecycle-hook-86" to be "running and ready" +Jan 14 04:11:45.428: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 2.767694ms +Jan 14 04:11:45.428: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Jan 14 04:11:47.433: INFO: Pod "pod-handle-http-request": Phase="Running", Reason="", readiness=true. Elapsed: 2.00749244s +Jan 14 04:11:47.433: INFO: The phase of Pod pod-handle-http-request is Running (Ready = true) +Jan 14 04:11:47.433: INFO: Pod "pod-handle-http-request" satisfied condition "running and ready" +[It] should execute poststart exec hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:134 +STEP: create the pod with lifecycle hook 01/14/23 04:11:47.436 +Jan 14 04:11:47.444: INFO: Waiting up to 5m0s for pod "pod-with-poststart-exec-hook" in namespace "container-lifecycle-hook-86" to be "running and ready" +Jan 14 04:11:47.446: INFO: Pod "pod-with-poststart-exec-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 2.771663ms +Jan 14 04:11:47.447: INFO: The phase of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) +Jan 14 04:11:49.451: INFO: Pod "pod-with-poststart-exec-hook": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.007346237s +Jan 14 04:11:49.451: INFO: The phase of Pod pod-with-poststart-exec-hook is Running (Ready = true) +Jan 14 04:11:49.451: INFO: Pod "pod-with-poststart-exec-hook" satisfied condition "running and ready" +STEP: check poststart hook 01/14/23 04:11:49.454 +STEP: delete the pod with lifecycle hook 01/14/23 04:11:49.467 +Jan 14 04:11:49.476: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jan 14 04:11:49.479: INFO: Pod pod-with-poststart-exec-hook still exists +Jan 14 04:11:51.480: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jan 14 04:11:51.483: INFO: Pod pod-with-poststart-exec-hook no longer exists +[AfterEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/node/init/init.go:32 +Jan 14 04:11:51.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + tear down framework | framework.go:193 +STEP: Destroying namespace "container-lifecycle-hook-86" for this suite. 01/14/23 04:11:51.488 +------------------------------ +• [SLOW TEST] [6.109 seconds] +[sig-node] Container Lifecycle Hook +test/e2e/common/node/framework.go:23 + when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:46 + should execute poststart exec hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:134 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Container Lifecycle Hook + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:11:45.385 + Jan 14 04:11:45.385: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename container-lifecycle-hook 01/14/23 04:11:45.386 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:11:45.406 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:11:45.408 + [BeforeEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:77 + STEP: create the container to handle the HTTPGet hook request. 01/14/23 04:11:45.416 + Jan 14 04:11:45.425: INFO: Waiting up to 5m0s for pod "pod-handle-http-request" in namespace "container-lifecycle-hook-86" to be "running and ready" + Jan 14 04:11:45.428: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 2.767694ms + Jan 14 04:11:45.428: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) + Jan 14 04:11:47.433: INFO: Pod "pod-handle-http-request": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.00749244s + Jan 14 04:11:47.433: INFO: The phase of Pod pod-handle-http-request is Running (Ready = true) + Jan 14 04:11:47.433: INFO: Pod "pod-handle-http-request" satisfied condition "running and ready" + [It] should execute poststart exec hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:134 + STEP: create the pod with lifecycle hook 01/14/23 04:11:47.436 + Jan 14 04:11:47.444: INFO: Waiting up to 5m0s for pod "pod-with-poststart-exec-hook" in namespace "container-lifecycle-hook-86" to be "running and ready" + Jan 14 04:11:47.446: INFO: Pod "pod-with-poststart-exec-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 2.771663ms + Jan 14 04:11:47.447: INFO: The phase of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) + Jan 14 04:11:49.451: INFO: Pod "pod-with-poststart-exec-hook": Phase="Running", Reason="", readiness=true. Elapsed: 2.007346237s + Jan 14 04:11:49.451: INFO: The phase of Pod pod-with-poststart-exec-hook is Running (Ready = true) + Jan 14 04:11:49.451: INFO: Pod "pod-with-poststart-exec-hook" satisfied condition "running and ready" + STEP: check poststart hook 01/14/23 04:11:49.454 + STEP: delete the pod with lifecycle hook 01/14/23 04:11:49.467 + Jan 14 04:11:49.476: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear + Jan 14 04:11:49.479: INFO: Pod pod-with-poststart-exec-hook still exists + Jan 14 04:11:51.480: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear + Jan 14 04:11:51.483: INFO: Pod pod-with-poststart-exec-hook no longer exists + [AfterEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/node/init/init.go:32 + Jan 14 04:11:51.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + tear down framework | framework.go:193 + STEP: Destroying namespace "container-lifecycle-hook-86" for this suite. 
01/14/23 04:11:51.488 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should test the lifecycle of a ReplicationController [Conformance] + test/e2e/apps/rc.go:110 +[BeforeEach] [sig-apps] ReplicationController + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:11:51.495 +Jan 14 04:11:51.495: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename replication-controller 01/14/23 04:11:51.495 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:11:51.511 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:11:51.513 +[BeforeEach] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:57 +[It] should test the lifecycle of a ReplicationController [Conformance] + test/e2e/apps/rc.go:110 +STEP: creating a ReplicationController 01/14/23 04:11:51.518 +STEP: waiting for RC to be added 01/14/23 04:11:51.523 +STEP: waiting for available Replicas 01/14/23 04:11:51.523 +STEP: patching ReplicationController 01/14/23 04:11:52.344 +STEP: waiting for RC to be modified 01/14/23 04:11:52.354 +STEP: patching ReplicationController status 01/14/23 04:11:52.355 +STEP: waiting for RC to be modified 01/14/23 04:11:52.363 +STEP: waiting for available Replicas 01/14/23 04:11:52.363 +STEP: fetching ReplicationController status 01/14/23 04:11:52.368 +STEP: patching ReplicationController scale 01/14/23 04:11:52.371 +STEP: waiting for RC to be modified 01/14/23 04:11:52.377 +STEP: waiting for ReplicationController's scale to be the max amount 01/14/23 04:11:52.378 +STEP: fetching ReplicationController; ensuring that it's patched 01/14/23 04:11:53.158 +STEP: updating ReplicationController status 01/14/23 04:11:53.161 +STEP: waiting for RC to be modified 01/14/23 04:11:53.166 +STEP: listing all ReplicationControllers 01/14/23 04:11:53.166 +STEP: checking that ReplicationController has expected values 01/14/23 04:11:53.169 +STEP: deleting ReplicationControllers by collection 01/14/23 04:11:53.169 +STEP: waiting for ReplicationController to have a DELETED watchEvent 01/14/23 04:11:53.176 +[AfterEach] [sig-apps] ReplicationController + test/e2e/framework/node/init/init.go:32 +Jan 14 04:11:53.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] ReplicationController + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] ReplicationController + tear down framework | framework.go:193 +STEP: Destroying namespace "replication-controller-3615" for this suite. 
01/14/23 04:11:53.224 +------------------------------ +• [1.735 seconds] +[sig-apps] ReplicationController +test/e2e/apps/framework.go:23 + should test the lifecycle of a ReplicationController [Conformance] + test/e2e/apps/rc.go:110 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] ReplicationController + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:11:51.495 + Jan 14 04:11:51.495: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename replication-controller 01/14/23 04:11:51.495 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:11:51.511 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:11:51.513 + [BeforeEach] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:57 + [It] should test the lifecycle of a ReplicationController [Conformance] + test/e2e/apps/rc.go:110 + STEP: creating a ReplicationController 01/14/23 04:11:51.518 + STEP: waiting for RC to be added 01/14/23 04:11:51.523 + STEP: waiting for available Replicas 01/14/23 04:11:51.523 + STEP: patching ReplicationController 01/14/23 04:11:52.344 + STEP: waiting for RC to be modified 01/14/23 04:11:52.354 + STEP: patching ReplicationController status 01/14/23 04:11:52.355 + STEP: waiting for RC to be modified 01/14/23 04:11:52.363 + STEP: waiting for available Replicas 01/14/23 04:11:52.363 + STEP: fetching ReplicationController status 01/14/23 04:11:52.368 + STEP: patching ReplicationController scale 01/14/23 04:11:52.371 + STEP: waiting for RC to be modified 01/14/23 04:11:52.377 + STEP: waiting for ReplicationController's scale to be the max amount 01/14/23 04:11:52.378 + STEP: fetching ReplicationController; ensuring that it's patched 01/14/23 04:11:53.158 + STEP: updating ReplicationController status 01/14/23 04:11:53.161 + STEP: waiting for RC to be modified 01/14/23 04:11:53.166 + STEP: listing all ReplicationControllers 01/14/23 04:11:53.166 + STEP: checking that ReplicationController has expected values 01/14/23 04:11:53.169 + STEP: deleting ReplicationControllers by collection 01/14/23 04:11:53.169 + STEP: waiting for ReplicationController to have a DELETED watchEvent 01/14/23 04:11:53.176 + [AfterEach] [sig-apps] ReplicationController + test/e2e/framework/node/init/init.go:32 + Jan 14 04:11:53.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] ReplicationController + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] ReplicationController + tear down framework | framework.go:193 + STEP: Destroying namespace "replication-controller-3615" for this suite. 
01/14/23 04:11:53.224 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for multiple CRDs of different groups [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:276 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:11:53.23 +Jan 14 04:11:53.230: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename crd-publish-openapi 01/14/23 04:11:53.231 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:11:53.245 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:11:53.248 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[It] works for multiple CRDs of different groups [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:276 +STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation 01/14/23 04:11:53.25 +Jan 14 04:11:53.250: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +Jan 14 04:11:55.019: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Jan 14 04:12:02.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "crd-publish-openapi-1772" for this suite. 
01/14/23 04:12:02.219 +------------------------------ +• [SLOW TEST] [8.994 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + works for multiple CRDs of different groups [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:276 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:11:53.23 + Jan 14 04:11:53.230: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename crd-publish-openapi 01/14/23 04:11:53.231 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:11:53.245 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:11:53.248 + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [It] works for multiple CRDs of different groups [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:276 + STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation 01/14/23 04:11:53.25 + Jan 14 04:11:53.250: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + Jan 14 04:11:55.019: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Jan 14 04:12:02.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "crd-publish-openapi-1772" for this suite. 01/14/23 04:12:02.219 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should verify changes to a daemon set status [Conformance] + test/e2e/apps/daemon_set.go:862 +[BeforeEach] [sig-apps] Daemon set [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:12:02.225 +Jan 14 04:12:02.225: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename daemonsets 01/14/23 04:12:02.226 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:12:02.241 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:12:02.243 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:146 +[It] should verify changes to a daemon set status [Conformance] + test/e2e/apps/daemon_set.go:862 +STEP: Creating simple DaemonSet "daemon-set" 01/14/23 04:12:02.269 +STEP: Check that daemon pods launch on every node of the cluster. 
01/14/23 04:12:02.274 +Jan 14 04:12:02.279: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:12:02.279: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:12:02.279: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:12:02.282: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jan 14 04:12:02.282: INFO: Node 10.0.1.106 is running 0 daemon pod, expected 1 +Jan 14 04:12:03.292: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:12:03.292: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:12:03.292: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:12:03.296: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Jan 14 04:12:03.296: INFO: Node 10.0.1.106 is running 0 daemon pod, expected 1 +Jan 14 04:12:04.288: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:12:04.288: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:12:04.288: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:12:04.292: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 +Jan 14 04:12:04.292: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set +STEP: Getting /status 01/14/23 04:12:04.295 +Jan 14 04:12:04.298: INFO: Daemon Set daemon-set has Conditions: [] +STEP: updating the DaemonSet Status 01/14/23 04:12:04.298 +Jan 14 04:12:04.307: INFO: updatedStatus.Conditions: []v1.DaemonSetCondition{v1.DaemonSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the daemon set status to be updated 01/14/23 04:12:04.307 +Jan 14 04:12:04.309: INFO: Observed &DaemonSet event: ADDED +Jan 14 04:12:04.309: INFO: Observed &DaemonSet event: MODIFIED +Jan 14 04:12:04.309: INFO: Observed &DaemonSet event: MODIFIED +Jan 14 04:12:04.309: INFO: Observed &DaemonSet event: MODIFIED +Jan 14 04:12:04.310: INFO: Observed &DaemonSet event: MODIFIED +Jan 14 04:12:04.310: INFO: Found daemon set daemon-set in namespace daemonsets-5166 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Jan 14 04:12:04.310: INFO: Daemon set daemon-set has an updated status +STEP: patching the DaemonSet Status 01/14/23 04:12:04.31 +STEP: watching for the daemon set status to be patched 
01/14/23 04:12:04.317 +Jan 14 04:12:04.319: INFO: Observed &DaemonSet event: ADDED +Jan 14 04:12:04.319: INFO: Observed &DaemonSet event: MODIFIED +Jan 14 04:12:04.319: INFO: Observed &DaemonSet event: MODIFIED +Jan 14 04:12:04.319: INFO: Observed &DaemonSet event: MODIFIED +Jan 14 04:12:04.319: INFO: Observed &DaemonSet event: MODIFIED +Jan 14 04:12:04.319: INFO: Observed daemon set daemon-set in namespace daemonsets-5166 with annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Jan 14 04:12:04.319: INFO: Observed &DaemonSet event: MODIFIED +Jan 14 04:12:04.319: INFO: Found daemon set daemon-set in namespace daemonsets-5166 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusPatched True 0001-01-01 00:00:00 +0000 UTC }] +Jan 14 04:12:04.319: INFO: Daemon set daemon-set has a patched status +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:111 +STEP: Deleting DaemonSet "daemon-set" 01/14/23 04:12:04.324 +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5166, will wait for the garbage collector to delete the pods 01/14/23 04:12:04.324 +Jan 14 04:12:04.386: INFO: Deleting DaemonSet.extensions daemon-set took: 8.901664ms +Jan 14 04:12:04.487: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.765096ms +Jan 14 04:12:06.691: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jan 14 04:12:06.691: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +Jan 14 04:12:06.694: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"436880"},"items":null} + +Jan 14 04:12:06.697: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"436880"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/node/init/init.go:32 +Jan 14 04:12:06.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "daemonsets-5166" for this suite. 
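+
+The update and patch above both target the DaemonSet /status subresource rather than the spec. A hand-run sketch against the same object, assuming a kubectl new enough to carry the --subresource flag (field values mirror the test):
+kubectl get daemonset daemon-set -n daemonsets-5166 -o jsonpath='{.status.conditions}{"\n"}'   # read the conditions the test inspects
+kubectl patch daemonset daemon-set -n daemonsets-5166 --subresource=status --type=merge \
+  -p '{"status":{"conditions":[{"type":"StatusUpdate","status":"True","reason":"E2E","message":"Set from e2e test"}]}}'
+A plain watch (kubectl get daemonset daemon-set -n daemonsets-5166 --watch) then surfaces the ADDED/MODIFIED events logged above; note that a merge patch replaces the whole conditions list.
+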
01/14/23 04:12:06.713 +------------------------------ +• [4.494 seconds] +[sig-apps] Daemon set [Serial] +test/e2e/apps/framework.go:23 + should verify changes to a daemon set status [Conformance] + test/e2e/apps/daemon_set.go:862 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Daemon set [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:12:02.225 + Jan 14 04:12:02.225: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename daemonsets 01/14/23 04:12:02.226 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:12:02.241 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:12:02.243 + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:146 + [It] should verify changes to a daemon set status [Conformance] + test/e2e/apps/daemon_set.go:862 + STEP: Creating simple DaemonSet "daemon-set" 01/14/23 04:12:02.269 + STEP: Check that daemon pods launch on every node of the cluster. 01/14/23 04:12:02.274 + Jan 14 04:12:02.279: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:12:02.279: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:12:02.279: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:12:02.282: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jan 14 04:12:02.282: INFO: Node 10.0.1.106 is running 0 daemon pod, expected 1 + Jan 14 04:12:03.292: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:12:03.292: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:12:03.292: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:12:03.296: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 + Jan 14 04:12:03.296: INFO: Node 10.0.1.106 is running 0 daemon pod, expected 1 + Jan 14 04:12:04.288: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:12:04.288: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:12:04.288: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:12:04.292: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 + Jan 14 04:12:04.292: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set + STEP: Getting /status 01/14/23 04:12:04.295 + Jan 14 04:12:04.298: INFO: Daemon Set daemon-set has 
Conditions: [] + STEP: updating the DaemonSet Status 01/14/23 04:12:04.298 + Jan 14 04:12:04.307: INFO: updatedStatus.Conditions: []v1.DaemonSetCondition{v1.DaemonSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} + STEP: watching for the daemon set status to be updated 01/14/23 04:12:04.307 + Jan 14 04:12:04.309: INFO: Observed &DaemonSet event: ADDED + Jan 14 04:12:04.309: INFO: Observed &DaemonSet event: MODIFIED + Jan 14 04:12:04.309: INFO: Observed &DaemonSet event: MODIFIED + Jan 14 04:12:04.309: INFO: Observed &DaemonSet event: MODIFIED + Jan 14 04:12:04.310: INFO: Observed &DaemonSet event: MODIFIED + Jan 14 04:12:04.310: INFO: Found daemon set daemon-set in namespace daemonsets-5166 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] + Jan 14 04:12:04.310: INFO: Daemon set daemon-set has an updated status + STEP: patching the DaemonSet Status 01/14/23 04:12:04.31 + STEP: watching for the daemon set status to be patched 01/14/23 04:12:04.317 + Jan 14 04:12:04.319: INFO: Observed &DaemonSet event: ADDED + Jan 14 04:12:04.319: INFO: Observed &DaemonSet event: MODIFIED + Jan 14 04:12:04.319: INFO: Observed &DaemonSet event: MODIFIED + Jan 14 04:12:04.319: INFO: Observed &DaemonSet event: MODIFIED + Jan 14 04:12:04.319: INFO: Observed &DaemonSet event: MODIFIED + Jan 14 04:12:04.319: INFO: Observed daemon set daemon-set in namespace daemonsets-5166 with annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] + Jan 14 04:12:04.319: INFO: Observed &DaemonSet event: MODIFIED + Jan 14 04:12:04.319: INFO: Found daemon set daemon-set in namespace daemonsets-5166 with labels: map[daemonset-name:daemon-set] annotations: map[deprecated.daemonset.template.generation:1] & Conditions: [{StatusPatched True 0001-01-01 00:00:00 +0000 UTC }] + Jan 14 04:12:04.319: INFO: Daemon set daemon-set has a patched status + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:111 + STEP: Deleting DaemonSet "daemon-set" 01/14/23 04:12:04.324 + STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5166, will wait for the garbage collector to delete the pods 01/14/23 04:12:04.324 + Jan 14 04:12:04.386: INFO: Deleting DaemonSet.extensions daemon-set took: 8.901664ms + Jan 14 04:12:04.487: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.765096ms + Jan 14 04:12:06.691: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jan 14 04:12:06.691: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set + Jan 14 04:12:06.694: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"436880"},"items":null} + + Jan 14 04:12:06.697: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"436880"},"items":null} + + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/node/init/init.go:32 + Jan 14 04:12:06.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] Daemon set 
[Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "daemonsets-5166" for this suite. 01/14/23 04:12:06.713 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:122 +[BeforeEach] [sig-network] Networking + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:12:06.719 +Jan 14 04:12:06.720: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename pod-network-test 01/14/23 04:12:06.72 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:12:06.734 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:12:06.737 +[BeforeEach] [sig-network] Networking + test/e2e/framework/metrics/init/init.go:31 +[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:122 +STEP: Performing setup for networking test in namespace pod-network-test-809 01/14/23 04:12:06.739 +STEP: creating a selector 01/14/23 04:12:06.739 +STEP: Creating the service pods in kubernetes 01/14/23 04:12:06.739 +Jan 14 04:12:06.739: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Jan 14 04:12:06.777: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "pod-network-test-809" to be "running and ready" +Jan 14 04:12:06.784: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 7.139675ms +Jan 14 04:12:06.784: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Jan 14 04:12:08.789: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.012313385s +Jan 14 04:12:08.789: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jan 14 04:12:10.788: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.011565137s +Jan 14 04:12:10.788: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jan 14 04:12:12.789: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.012056802s +Jan 14 04:12:12.789: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jan 14 04:12:14.788: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.011445895s +Jan 14 04:12:14.788: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jan 14 04:12:16.789: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.011952865s +Jan 14 04:12:16.789: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jan 14 04:12:18.790: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 12.013456168s +Jan 14 04:12:18.790: INFO: The phase of Pod netserver-0 is Running (Ready = true) +Jan 14 04:12:18.790: INFO: Pod "netserver-0" satisfied condition "running and ready" +Jan 14 04:12:18.793: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "pod-network-test-809" to be "running and ready" +Jan 14 04:12:18.796: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.879269ms +Jan 14 04:12:18.796: INFO: The phase of Pod netserver-1 is Running (Ready = true) +Jan 14 04:12:18.796: INFO: Pod "netserver-1" satisfied condition "running and ready" +Jan 14 04:12:18.799: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "pod-network-test-809" to be "running and ready" +Jan 14 04:12:18.802: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 2.88646ms +Jan 14 04:12:18.802: INFO: The phase of Pod netserver-2 is Running (Ready = true) +Jan 14 04:12:18.802: INFO: Pod "netserver-2" satisfied condition "running and ready" +STEP: Creating test pods 01/14/23 04:12:18.81 +Jan 14 04:12:18.827: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "pod-network-test-809" to be "running" +Jan 14 04:12:18.830: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.931278ms +Jan 14 04:12:20.836: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.008086991s +Jan 14 04:12:20.836: INFO: Pod "test-container-pod" satisfied condition "running" +Jan 14 04:12:20.839: INFO: Waiting up to 5m0s for pod "host-test-container-pod" in namespace "pod-network-test-809" to be "running" +Jan 14 04:12:20.841: INFO: Pod "host-test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.713686ms +Jan 14 04:12:20.841: INFO: Pod "host-test-container-pod" satisfied condition "running" +Jan 14 04:12:20.844: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 +Jan 14 04:12:20.844: INFO: Going to poll 10.52.1.46 on port 8081 at least 0 times, with a maximum of 39 tries before failing +Jan 14 04:12:20.846: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.52.1.46 8081 | grep -v '^\s*$'] Namespace:pod-network-test-809 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jan 14 04:12:20.846: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +Jan 14 04:12:20.847: INFO: ExecWithOptions: Clientset creation +Jan 14 04:12:20.847: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/pod-network-test-809/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+10.52.1.46+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) +Jan 14 04:12:21.893: INFO: Found all 1 expected endpoints: [netserver-0] +Jan 14 04:12:21.893: INFO: Going to poll 10.52.0.244 on port 8081 at least 0 times, with a maximum of 39 tries before failing +Jan 14 04:12:21.897: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.52.0.244 8081 | grep -v '^\s*$'] Namespace:pod-network-test-809 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jan 14 04:12:21.897: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +Jan 14 04:12:21.897: INFO: ExecWithOptions: Clientset creation +Jan 14 04:12:21.897: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/pod-network-test-809/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+10.52.0.244+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) +Jan 14 04:12:22.947: INFO: Found all 1 expected endpoints: [netserver-1] +Jan 14 
04:12:22.947: INFO: Going to poll 10.52.1.118 on port 8081 at least 0 times, with a maximum of 39 tries before failing +Jan 14 04:12:22.951: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.52.1.118 8081 | grep -v '^\s*$'] Namespace:pod-network-test-809 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jan 14 04:12:22.951: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +Jan 14 04:12:22.951: INFO: ExecWithOptions: Clientset creation +Jan 14 04:12:22.951: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/pod-network-test-809/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+10.52.1.118+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) +Jan 14 04:12:23.997: INFO: Found all 1 expected endpoints: [netserver-2] +[AfterEach] [sig-network] Networking + test/e2e/framework/node/init/init.go:32 +Jan 14 04:12:23.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Networking + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] Networking + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] Networking + tear down framework | framework.go:193 +STEP: Destroying namespace "pod-network-test-809" for this suite. 01/14/23 04:12:24.003 +------------------------------ +• [SLOW TEST] [17.289 seconds] +[sig-network] Networking +test/e2e/common/network/framework.go:23 + Granular Checks: Pods + test/e2e/common/network/networking.go:32 + should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:122 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Networking + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:12:06.719 + Jan 14 04:12:06.720: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename pod-network-test 01/14/23 04:12:06.72 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:12:06.734 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:12:06.737 + [BeforeEach] [sig-network] Networking + test/e2e/framework/metrics/init/init.go:31 + [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:122 + STEP: Performing setup for networking test in namespace pod-network-test-809 01/14/23 04:12:06.739 + STEP: creating a selector 01/14/23 04:12:06.739 + STEP: Creating the service pods in kubernetes 01/14/23 04:12:06.739 + Jan 14 04:12:06.739: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable + Jan 14 04:12:06.777: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "pod-network-test-809" to be "running and ready" + Jan 14 04:12:06.784: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 7.139675ms + Jan 14 04:12:06.784: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) + Jan 14 04:12:08.789: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.012313385s + Jan 14 04:12:08.789: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jan 14 04:12:10.788: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.011565137s + Jan 14 04:12:10.788: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jan 14 04:12:12.789: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.012056802s + Jan 14 04:12:12.789: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jan 14 04:12:14.788: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.011445895s + Jan 14 04:12:14.788: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jan 14 04:12:16.789: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.011952865s + Jan 14 04:12:16.789: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jan 14 04:12:18.790: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 12.013456168s + Jan 14 04:12:18.790: INFO: The phase of Pod netserver-0 is Running (Ready = true) + Jan 14 04:12:18.790: INFO: Pod "netserver-0" satisfied condition "running and ready" + Jan 14 04:12:18.793: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "pod-network-test-809" to be "running and ready" + Jan 14 04:12:18.796: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 2.879269ms + Jan 14 04:12:18.796: INFO: The phase of Pod netserver-1 is Running (Ready = true) + Jan 14 04:12:18.796: INFO: Pod "netserver-1" satisfied condition "running and ready" + Jan 14 04:12:18.799: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "pod-network-test-809" to be "running and ready" + Jan 14 04:12:18.802: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 2.88646ms + Jan 14 04:12:18.802: INFO: The phase of Pod netserver-2 is Running (Ready = true) + Jan 14 04:12:18.802: INFO: Pod "netserver-2" satisfied condition "running and ready" + STEP: Creating test pods 01/14/23 04:12:18.81 + Jan 14 04:12:18.827: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "pod-network-test-809" to be "running" + Jan 14 04:12:18.830: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.931278ms + Jan 14 04:12:20.836: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.008086991s + Jan 14 04:12:20.836: INFO: Pod "test-container-pod" satisfied condition "running" + Jan 14 04:12:20.839: INFO: Waiting up to 5m0s for pod "host-test-container-pod" in namespace "pod-network-test-809" to be "running" + Jan 14 04:12:20.841: INFO: Pod "host-test-container-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.713686ms + Jan 14 04:12:20.841: INFO: Pod "host-test-container-pod" satisfied condition "running" + Jan 14 04:12:20.844: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 + Jan 14 04:12:20.844: INFO: Going to poll 10.52.1.46 on port 8081 at least 0 times, with a maximum of 39 tries before failing + Jan 14 04:12:20.846: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.52.1.46 8081 | grep -v '^\s*$'] Namespace:pod-network-test-809 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jan 14 04:12:20.846: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + Jan 14 04:12:20.847: INFO: ExecWithOptions: Clientset creation + Jan 14 04:12:20.847: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/pod-network-test-809/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+10.52.1.46+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) + Jan 14 04:12:21.893: INFO: Found all 1 expected endpoints: [netserver-0] + Jan 14 04:12:21.893: INFO: Going to poll 10.52.0.244 on port 8081 at least 0 times, with a maximum of 39 tries before failing + Jan 14 04:12:21.897: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.52.0.244 8081 | grep -v '^\s*$'] Namespace:pod-network-test-809 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jan 14 04:12:21.897: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + Jan 14 04:12:21.897: INFO: ExecWithOptions: Clientset creation + Jan 14 04:12:21.897: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/pod-network-test-809/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+10.52.0.244+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) + Jan 14 04:12:22.947: INFO: Found all 1 expected endpoints: [netserver-1] + Jan 14 04:12:22.947: INFO: Going to poll 10.52.1.118 on port 8081 at least 0 times, with a maximum of 39 tries before failing + Jan 14 04:12:22.951: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.52.1.118 8081 | grep -v '^\s*$'] Namespace:pod-network-test-809 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jan 14 04:12:22.951: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + Jan 14 04:12:22.951: INFO: ExecWithOptions: Clientset creation + Jan 14 04:12:22.951: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/pod-network-test-809/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+10.52.1.118+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) + Jan 14 04:12:23.997: INFO: Found all 1 expected endpoints: [netserver-2] + [AfterEach] [sig-network] Networking + test/e2e/framework/node/init/init.go:32 + Jan 14 04:12:23.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Networking + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] Networking + dump namespaces | 
framework.go:196 + [DeferCleanup (Each)] [sig-network] Networking + tear down framework | framework.go:193 + STEP: Destroying namespace "pod-network-test-809" for this suite. 01/14/23 04:12:24.003 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-cli] Kubectl client Kubectl logs + should be able to retrieve and filter logs [Conformance] + test/e2e/kubectl/kubectl.go:1592 +[BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:12:24.009 +Jan 14 04:12:24.009: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename kubectl 01/14/23 04:12:24.01 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:12:24.024 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:12:24.027 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[BeforeEach] Kubectl logs + test/e2e/kubectl/kubectl.go:1572 +STEP: creating an pod 01/14/23 04:12:24.029 +Jan 14 04:12:24.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4907 run logs-generator --image=registry.k8s.io/e2e-test-images/agnhost:2.43 --restart=Never --pod-running-timeout=2m0s -- logs-generator --log-lines-total 100 --run-duration 20s' +Jan 14 04:12:24.099: INFO: stderr: "" +Jan 14 04:12:24.099: INFO: stdout: "pod/logs-generator created\n" +[It] should be able to retrieve and filter logs [Conformance] + test/e2e/kubectl/kubectl.go:1592 +STEP: Waiting for log generator to start. 01/14/23 04:12:24.099 +Jan 14 04:12:24.100: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] +Jan 14 04:12:24.100: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-4907" to be "running and ready, or succeeded" +Jan 14 04:12:24.103: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 3.838505ms +Jan 14 04:12:24.104: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'logs-generator' on '10.0.1.106' to be 'Running' but was 'Pending' +Jan 14 04:12:26.109: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 2.008994211s +Jan 14 04:12:26.109: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" +Jan 14 04:12:26.109: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] +STEP: checking for a matching strings 01/14/23 04:12:26.109 +Jan 14 04:12:26.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4907 logs logs-generator logs-generator' +Jan 14 04:12:26.181: INFO: stderr: "" +Jan 14 04:12:26.181: INFO: stdout: "I0114 04:12:24.636585 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/nrk 577\nI0114 04:12:24.836733 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/vfxd 410\nI0114 04:12:25.037249 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/f2v 539\nI0114 04:12:25.237624 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/6zt 347\nI0114 04:12:25.436944 1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/db8 367\nI0114 04:12:25.637261 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/6p28 431\nI0114 04:12:25.837581 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/b72 496\nI0114 04:12:26.036898 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/2w97 540\n" +Jan 14 04:12:28.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4907 logs logs-generator logs-generator' +Jan 14 04:12:28.257: INFO: stderr: "" +Jan 14 04:12:28.257: INFO: stdout: "I0114 04:12:24.636585 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/nrk 577\nI0114 04:12:24.836733 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/vfxd 410\nI0114 04:12:25.037249 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/f2v 539\nI0114 04:12:25.237624 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/6zt 347\nI0114 04:12:25.436944 1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/db8 367\nI0114 04:12:25.637261 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/6p28 431\nI0114 04:12:25.837581 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/b72 496\nI0114 04:12:26.036898 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/2w97 540\nI0114 04:12:26.237234 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/v5nf 539\nI0114 04:12:26.437557 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/2n72 367\nI0114 04:12:26.636805 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/2gs 316\nI0114 04:12:26.837132 1 logs_generator.go:76] 11 POST /api/v1/namespaces/ns/pods/dwh5 305\nI0114 04:12:27.037463 1 logs_generator.go:76] 12 POST /api/v1/namespaces/default/pods/mghq 422\nI0114 04:12:27.236714 1 logs_generator.go:76] 13 GET /api/v1/namespaces/kube-system/pods/7hl 588\nI0114 04:12:27.437039 1 logs_generator.go:76] 14 POST /api/v1/namespaces/kube-system/pods/vs9 436\nI0114 04:12:27.637423 1 logs_generator.go:76] 15 POST /api/v1/namespaces/kube-system/pods/7ztn 491\nI0114 04:12:27.836674 1 logs_generator.go:76] 16 POST /api/v1/namespaces/default/pods/tnx 535\nI0114 04:12:28.037076 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/kube-system/pods/4jbx 280\nI0114 04:12:28.237410 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/nvv 325\n" +STEP: limiting log lines 01/14/23 04:12:28.257 +Jan 14 04:12:28.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4907 logs logs-generator logs-generator --tail=1' +Jan 14 04:12:28.334: INFO: stderr: "" +Jan 14 04:12:28.334: INFO: stdout: "I0114 04:12:28.237410 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/nvv 325\n" +Jan 14 04:12:28.334: INFO: got output "I0114 04:12:28.237410 1 
logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/nvv 325\n" +STEP: limiting log bytes 01/14/23 04:12:28.334 +Jan 14 04:12:28.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4907 logs logs-generator logs-generator --limit-bytes=1' +Jan 14 04:12:28.402: INFO: stderr: "" +Jan 14 04:12:28.402: INFO: stdout: "I" +Jan 14 04:12:28.402: INFO: got output "I" +STEP: exposing timestamps 01/14/23 04:12:28.402 +Jan 14 04:12:28.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4907 logs logs-generator logs-generator --tail=1 --timestamps' +Jan 14 04:12:28.470: INFO: stderr: "" +Jan 14 04:12:28.470: INFO: stdout: "2023-01-14T12:12:28.436781634+08:00 I0114 04:12:28.436690 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/g6z 325\n" +Jan 14 04:12:28.470: INFO: got output "2023-01-14T12:12:28.436781634+08:00 I0114 04:12:28.436690 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/g6z 325\n" +STEP: restricting to a time range 01/14/23 04:12:28.47 +Jan 14 04:12:30.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4907 logs logs-generator logs-generator --since=1s' +Jan 14 04:12:31.041: INFO: stderr: "" +Jan 14 04:12:31.041: INFO: stdout: "I0114 04:12:30.237612 1 logs_generator.go:76] 28 GET /api/v1/namespaces/ns/pods/8tz 506\nI0114 04:12:30.436921 1 logs_generator.go:76] 29 POST /api/v1/namespaces/ns/pods/pg8p 519\nI0114 04:12:30.637268 1 logs_generator.go:76] 30 GET /api/v1/namespaces/default/pods/dk8 209\nI0114 04:12:30.837583 1 logs_generator.go:76] 31 PUT /api/v1/namespaces/default/pods/8kds 531\nI0114 04:12:31.036896 1 logs_generator.go:76] 32 GET /api/v1/namespaces/ns/pods/hcg 392\n" +Jan 14 04:12:31.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4907 logs logs-generator logs-generator --since=24h' +Jan 14 04:12:31.110: INFO: stderr: "" +Jan 14 04:12:31.110: INFO: stdout: "I0114 04:12:24.636585 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/nrk 577\nI0114 04:12:24.836733 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/vfxd 410\nI0114 04:12:25.037249 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/f2v 539\nI0114 04:12:25.237624 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/6zt 347\nI0114 04:12:25.436944 1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/db8 367\nI0114 04:12:25.637261 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/6p28 431\nI0114 04:12:25.837581 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/b72 496\nI0114 04:12:26.036898 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/2w97 540\nI0114 04:12:26.237234 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/v5nf 539\nI0114 04:12:26.437557 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/2n72 367\nI0114 04:12:26.636805 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/2gs 316\nI0114 04:12:26.837132 1 logs_generator.go:76] 11 POST /api/v1/namespaces/ns/pods/dwh5 305\nI0114 04:12:27.037463 1 logs_generator.go:76] 12 POST /api/v1/namespaces/default/pods/mghq 422\nI0114 04:12:27.236714 1 logs_generator.go:76] 13 GET /api/v1/namespaces/kube-system/pods/7hl 588\nI0114 04:12:27.437039 1 logs_generator.go:76] 14 POST /api/v1/namespaces/kube-system/pods/vs9 436\nI0114 04:12:27.637423 1 logs_generator.go:76] 15 POST /api/v1/namespaces/kube-system/pods/7ztn 491\nI0114 04:12:27.836674 
1 logs_generator.go:76] 16 POST /api/v1/namespaces/default/pods/tnx 535\nI0114 04:12:28.037076 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/kube-system/pods/4jbx 280\nI0114 04:12:28.237410 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/nvv 325\nI0114 04:12:28.436690 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/g6z 325\nI0114 04:12:28.637010 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/2cgt 594\nI0114 04:12:28.837332 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/6hgs 540\nI0114 04:12:29.037655 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/kube-system/pods/bzg8 212\nI0114 04:12:29.236970 1 logs_generator.go:76] 23 POST /api/v1/namespaces/kube-system/pods/hhcg 422\nI0114 04:12:29.437332 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/ns/pods/4tq9 322\nI0114 04:12:29.636631 1 logs_generator.go:76] 25 GET /api/v1/namespaces/default/pods/lwt 231\nI0114 04:12:29.836939 1 logs_generator.go:76] 26 POST /api/v1/namespaces/kube-system/pods/4z7b 357\nI0114 04:12:30.037268 1 logs_generator.go:76] 27 POST /api/v1/namespaces/default/pods/k4rr 572\nI0114 04:12:30.237612 1 logs_generator.go:76] 28 GET /api/v1/namespaces/ns/pods/8tz 506\nI0114 04:12:30.436921 1 logs_generator.go:76] 29 POST /api/v1/namespaces/ns/pods/pg8p 519\nI0114 04:12:30.637268 1 logs_generator.go:76] 30 GET /api/v1/namespaces/default/pods/dk8 209\nI0114 04:12:30.837583 1 logs_generator.go:76] 31 PUT /api/v1/namespaces/default/pods/8kds 531\nI0114 04:12:31.036896 1 logs_generator.go:76] 32 GET /api/v1/namespaces/ns/pods/hcg 392\n" +[AfterEach] Kubectl logs + test/e2e/kubectl/kubectl.go:1577 +Jan 14 04:12:31.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4907 delete pod logs-generator' +Jan 14 04:12:31.465: INFO: stderr: "" +Jan 14 04:12:31.465: INFO: stdout: "pod \"logs-generator\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 +Jan 14 04:12:31.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 +STEP: Destroying namespace "kubectl-4907" for this suite. 
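+
+Each filtering step above maps to one stock kubectl flag; the same sequence can be replayed against the pod while it is still running:
+kubectl logs logs-generator -n kubectl-4907 --tail=1                # last line only
+kubectl logs logs-generator -n kubectl-4907 --limit-bytes=1         # first byte only
+kubectl logs logs-generator -n kubectl-4907 --tail=1 --timestamps   # prefix each line with an RFC3339 timestamp
+kubectl logs logs-generator -n kubectl-4907 --since=1s              # entries from the last second only
+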
01/14/23 04:12:31.47 +------------------------------ +• [SLOW TEST] [7.466 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl logs + test/e2e/kubectl/kubectl.go:1569 + should be able to retrieve and filter logs [Conformance] + test/e2e/kubectl/kubectl.go:1592 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:12:24.009 + Jan 14 04:12:24.009: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename kubectl 01/14/23 04:12:24.01 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:12:24.024 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:12:24.027 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [BeforeEach] Kubectl logs + test/e2e/kubectl/kubectl.go:1572 + STEP: creating an pod 01/14/23 04:12:24.029 + Jan 14 04:12:24.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4907 run logs-generator --image=registry.k8s.io/e2e-test-images/agnhost:2.43 --restart=Never --pod-running-timeout=2m0s -- logs-generator --log-lines-total 100 --run-duration 20s' + Jan 14 04:12:24.099: INFO: stderr: "" + Jan 14 04:12:24.099: INFO: stdout: "pod/logs-generator created\n" + [It] should be able to retrieve and filter logs [Conformance] + test/e2e/kubectl/kubectl.go:1592 + STEP: Waiting for log generator to start. 01/14/23 04:12:24.099 + Jan 14 04:12:24.100: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] + Jan 14 04:12:24.100: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-4907" to be "running and ready, or succeeded" + Jan 14 04:12:24.103: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 3.838505ms + Jan 14 04:12:24.104: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'logs-generator' on '10.0.1.106' to be 'Running' but was 'Pending' + Jan 14 04:12:26.109: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 2.008994211s + Jan 14 04:12:26.109: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" + Jan 14 04:12:26.109: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] + STEP: checking for a matching strings 01/14/23 04:12:26.109 + Jan 14 04:12:26.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4907 logs logs-generator logs-generator' + Jan 14 04:12:26.181: INFO: stderr: "" + Jan 14 04:12:26.181: INFO: stdout: "I0114 04:12:24.636585 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/nrk 577\nI0114 04:12:24.836733 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/vfxd 410\nI0114 04:12:25.037249 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/f2v 539\nI0114 04:12:25.237624 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/6zt 347\nI0114 04:12:25.436944 1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/db8 367\nI0114 04:12:25.637261 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/6p28 431\nI0114 04:12:25.837581 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/b72 496\nI0114 04:12:26.036898 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/2w97 540\n" + Jan 14 04:12:28.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4907 logs logs-generator logs-generator' + Jan 14 04:12:28.257: INFO: stderr: "" + Jan 14 04:12:28.257: INFO: stdout: "I0114 04:12:24.636585 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/nrk 577\nI0114 04:12:24.836733 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/vfxd 410\nI0114 04:12:25.037249 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/f2v 539\nI0114 04:12:25.237624 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/6zt 347\nI0114 04:12:25.436944 1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/db8 367\nI0114 04:12:25.637261 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/6p28 431\nI0114 04:12:25.837581 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/b72 496\nI0114 04:12:26.036898 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/2w97 540\nI0114 04:12:26.237234 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/v5nf 539\nI0114 04:12:26.437557 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/2n72 367\nI0114 04:12:26.636805 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/2gs 316\nI0114 04:12:26.837132 1 logs_generator.go:76] 11 POST /api/v1/namespaces/ns/pods/dwh5 305\nI0114 04:12:27.037463 1 logs_generator.go:76] 12 POST /api/v1/namespaces/default/pods/mghq 422\nI0114 04:12:27.236714 1 logs_generator.go:76] 13 GET /api/v1/namespaces/kube-system/pods/7hl 588\nI0114 04:12:27.437039 1 logs_generator.go:76] 14 POST /api/v1/namespaces/kube-system/pods/vs9 436\nI0114 04:12:27.637423 1 logs_generator.go:76] 15 POST /api/v1/namespaces/kube-system/pods/7ztn 491\nI0114 04:12:27.836674 1 logs_generator.go:76] 16 POST /api/v1/namespaces/default/pods/tnx 535\nI0114 04:12:28.037076 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/kube-system/pods/4jbx 280\nI0114 04:12:28.237410 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/nvv 325\n" + STEP: limiting log lines 01/14/23 04:12:28.257 + Jan 14 04:12:28.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4907 logs logs-generator logs-generator --tail=1' + Jan 14 04:12:28.334: INFO: stderr: "" + Jan 14 04:12:28.334: INFO: stdout: "I0114 04:12:28.237410 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/nvv 325\n" + Jan 14 04:12:28.334: INFO: got output "I0114 04:12:28.237410 1 
logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/nvv 325\n" + STEP: limiting log bytes 01/14/23 04:12:28.334 + Jan 14 04:12:28.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4907 logs logs-generator logs-generator --limit-bytes=1' + Jan 14 04:12:28.402: INFO: stderr: "" + Jan 14 04:12:28.402: INFO: stdout: "I" + Jan 14 04:12:28.402: INFO: got output "I" + STEP: exposing timestamps 01/14/23 04:12:28.402 + Jan 14 04:12:28.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4907 logs logs-generator logs-generator --tail=1 --timestamps' + Jan 14 04:12:28.470: INFO: stderr: "" + Jan 14 04:12:28.470: INFO: stdout: "2023-01-14T12:12:28.436781634+08:00 I0114 04:12:28.436690 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/g6z 325\n" + Jan 14 04:12:28.470: INFO: got output "2023-01-14T12:12:28.436781634+08:00 I0114 04:12:28.436690 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/g6z 325\n" + STEP: restricting to a time range 01/14/23 04:12:28.47 + Jan 14 04:12:30.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4907 logs logs-generator logs-generator --since=1s' + Jan 14 04:12:31.041: INFO: stderr: "" + Jan 14 04:12:31.041: INFO: stdout: "I0114 04:12:30.237612 1 logs_generator.go:76] 28 GET /api/v1/namespaces/ns/pods/8tz 506\nI0114 04:12:30.436921 1 logs_generator.go:76] 29 POST /api/v1/namespaces/ns/pods/pg8p 519\nI0114 04:12:30.637268 1 logs_generator.go:76] 30 GET /api/v1/namespaces/default/pods/dk8 209\nI0114 04:12:30.837583 1 logs_generator.go:76] 31 PUT /api/v1/namespaces/default/pods/8kds 531\nI0114 04:12:31.036896 1 logs_generator.go:76] 32 GET /api/v1/namespaces/ns/pods/hcg 392\n" + Jan 14 04:12:31.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4907 logs logs-generator logs-generator --since=24h' + Jan 14 04:12:31.110: INFO: stderr: "" + Jan 14 04:12:31.110: INFO: stdout: "I0114 04:12:24.636585 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/nrk 577\nI0114 04:12:24.836733 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/vfxd 410\nI0114 04:12:25.037249 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/f2v 539\nI0114 04:12:25.237624 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/6zt 347\nI0114 04:12:25.436944 1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/db8 367\nI0114 04:12:25.637261 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/6p28 431\nI0114 04:12:25.837581 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/b72 496\nI0114 04:12:26.036898 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/2w97 540\nI0114 04:12:26.237234 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/v5nf 539\nI0114 04:12:26.437557 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/2n72 367\nI0114 04:12:26.636805 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/2gs 316\nI0114 04:12:26.837132 1 logs_generator.go:76] 11 POST /api/v1/namespaces/ns/pods/dwh5 305\nI0114 04:12:27.037463 1 logs_generator.go:76] 12 POST /api/v1/namespaces/default/pods/mghq 422\nI0114 04:12:27.236714 1 logs_generator.go:76] 13 GET /api/v1/namespaces/kube-system/pods/7hl 588\nI0114 04:12:27.437039 1 logs_generator.go:76] 14 POST /api/v1/namespaces/kube-system/pods/vs9 436\nI0114 04:12:27.637423 1 logs_generator.go:76] 15 POST /api/v1/namespaces/kube-system/pods/7ztn 
491\nI0114 04:12:27.836674 1 logs_generator.go:76] 16 POST /api/v1/namespaces/default/pods/tnx 535\nI0114 04:12:28.037076 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/kube-system/pods/4jbx 280\nI0114 04:12:28.237410 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/nvv 325\nI0114 04:12:28.436690 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/g6z 325\nI0114 04:12:28.637010 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/2cgt 594\nI0114 04:12:28.837332 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/6hgs 540\nI0114 04:12:29.037655 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/kube-system/pods/bzg8 212\nI0114 04:12:29.236970 1 logs_generator.go:76] 23 POST /api/v1/namespaces/kube-system/pods/hhcg 422\nI0114 04:12:29.437332 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/ns/pods/4tq9 322\nI0114 04:12:29.636631 1 logs_generator.go:76] 25 GET /api/v1/namespaces/default/pods/lwt 231\nI0114 04:12:29.836939 1 logs_generator.go:76] 26 POST /api/v1/namespaces/kube-system/pods/4z7b 357\nI0114 04:12:30.037268 1 logs_generator.go:76] 27 POST /api/v1/namespaces/default/pods/k4rr 572\nI0114 04:12:30.237612 1 logs_generator.go:76] 28 GET /api/v1/namespaces/ns/pods/8tz 506\nI0114 04:12:30.436921 1 logs_generator.go:76] 29 POST /api/v1/namespaces/ns/pods/pg8p 519\nI0114 04:12:30.637268 1 logs_generator.go:76] 30 GET /api/v1/namespaces/default/pods/dk8 209\nI0114 04:12:30.837583 1 logs_generator.go:76] 31 PUT /api/v1/namespaces/default/pods/8kds 531\nI0114 04:12:31.036896 1 logs_generator.go:76] 32 GET /api/v1/namespaces/ns/pods/hcg 392\n" + [AfterEach] Kubectl logs + test/e2e/kubectl/kubectl.go:1577 + Jan 14 04:12:31.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-4907 delete pod logs-generator' + Jan 14 04:12:31.465: INFO: stderr: "" + Jan 14 04:12:31.465: INFO: stdout: "pod \"logs-generator\" deleted\n" + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 + Jan 14 04:12:31.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 + STEP: Destroying namespace "kubectl-4907" for this suite. 
01/14/23 04:12:31.47 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSS +------------------------------ +[sig-apps] CronJob + should schedule multiple jobs concurrently [Conformance] + test/e2e/apps/cronjob.go:69 +[BeforeEach] [sig-apps] CronJob + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:12:31.476 +Jan 14 04:12:31.476: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename cronjob 01/14/23 04:12:31.476 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:12:31.511 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:12:31.513 +[BeforeEach] [sig-apps] CronJob + test/e2e/framework/metrics/init/init.go:31 +[It] should schedule multiple jobs concurrently [Conformance] + test/e2e/apps/cronjob.go:69 +STEP: Creating a cronjob 01/14/23 04:12:31.558 +STEP: Ensuring more than one job is running at a time 01/14/23 04:12:31.563 +STEP: Ensuring at least two running jobs exists by listing jobs explicitly 01/14/23 04:14:01.567 +STEP: Removing cronjob 01/14/23 04:14:01.57 +[AfterEach] [sig-apps] CronJob + test/e2e/framework/node/init/init.go:32 +Jan 14 04:14:01.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] CronJob + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] CronJob + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] CronJob + tear down framework | framework.go:193 +STEP: Destroying namespace "cronjob-8341" for this suite. 01/14/23 04:14:01.579 +------------------------------ +• [SLOW TEST] [90.109 seconds] +[sig-apps] CronJob +test/e2e/apps/framework.go:23 + should schedule multiple jobs concurrently [Conformance] + test/e2e/apps/cronjob.go:69 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] CronJob + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:12:31.476 + Jan 14 04:12:31.476: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename cronjob 01/14/23 04:12:31.476 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:12:31.511 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:12:31.513 + [BeforeEach] [sig-apps] CronJob + test/e2e/framework/metrics/init/init.go:31 + [It] should schedule multiple jobs concurrently [Conformance] + test/e2e/apps/cronjob.go:69 + STEP: Creating a cronjob 01/14/23 04:12:31.558 + STEP: Ensuring more than one job is running at a time 01/14/23 04:12:31.563 + STEP: Ensuring at least two running jobs exists by listing jobs explicitly 01/14/23 04:14:01.567 + STEP: Removing cronjob 01/14/23 04:14:01.57 + [AfterEach] [sig-apps] CronJob + test/e2e/framework/node/init/init.go:32 + Jan 14 04:14:01.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] CronJob + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] CronJob + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] CronJob + tear down framework | framework.go:193 + STEP: Destroying namespace "cronjob-8341" for this suite. 
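+
+The behaviour verified above hinges on the CronJob's concurrencyPolicy. A minimal manifest sketch that would produce overlapping jobs (object name and image are illustrative; the e2e test builds an equivalent object in Go):
+kubectl apply -f - <<'EOF'
+apiVersion: batch/v1
+kind: CronJob
+metadata:
+  name: concurrent-demo
+spec:
+  schedule: "*/1 * * * *"
+  concurrencyPolicy: Allow        # successive runs may overlap
+  jobTemplate:
+    spec:
+      template:
+        spec:
+          restartPolicy: OnFailure
+          containers:
+          - name: sleep
+            image: busybox
+            command: ["sleep", "300"]   # outlives the one-minute schedule, forcing overlap
+EOF
+With each pod sleeping longer than the schedule interval, at least two Jobs are active at once, which is what the test asserts.
+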
01/14/23 04:14:01.579 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Job + should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] + test/e2e/apps/job.go:426 +[BeforeEach] [sig-apps] Job + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:14:01.586 +Jan 14 04:14:01.586: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename job 01/14/23 04:14:01.587 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:14:01.603 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:14:01.606 +[BeforeEach] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:31 +[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] + test/e2e/apps/job.go:426 +STEP: Creating a job 01/14/23 04:14:01.608 +STEP: Ensuring job reaches completions 01/14/23 04:14:01.62 +[AfterEach] [sig-apps] Job + test/e2e/framework/node/init/init.go:32 +Jan 14 04:14:11.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] Job + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] Job + tear down framework | framework.go:193 +STEP: Destroying namespace "job-3671" for this suite. 01/14/23 04:14:11.634 +------------------------------ +• [SLOW TEST] [10.054 seconds] +[sig-apps] Job +test/e2e/apps/framework.go:23 + should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] + test/e2e/apps/job.go:426 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Job + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:14:01.586 + Jan 14 04:14:01.586: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename job 01/14/23 04:14:01.587 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:14:01.603 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:14:01.606 + [BeforeEach] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:31 + [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] + test/e2e/apps/job.go:426 + STEP: Creating a job 01/14/23 04:14:01.608 + STEP: Ensuring job reaches completions 01/14/23 04:14:01.62 + [AfterEach] [sig-apps] Job + test/e2e/framework/node/init/init.go:32 + Jan 14 04:14:11.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] Job + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] Job + tear down framework | framework.go:193 + STEP: Destroying namespace "job-3671" for this suite. 
01/14/23 04:14:11.634 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:78 +[BeforeEach] [sig-storage] Projected secret + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:14:11.641 +Jan 14 04:14:11.641: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename projected 01/14/23 04:14:11.642 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:14:11.657 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:14:11.659 +[BeforeEach] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:78 +STEP: Creating projection with secret that has name projected-secret-test-map-709f8b29-34c4-4b53-8978-6bc8a6009090 01/14/23 04:14:11.661 +STEP: Creating a pod to test consume secrets 01/14/23 04:14:11.665 +Jan 14 04:14:11.679: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-23bd796d-1cc8-4800-a81f-984701120dbd" in namespace "projected-1805" to be "Succeeded or Failed" +Jan 14 04:14:11.682: INFO: Pod "pod-projected-secrets-23bd796d-1cc8-4800-a81f-984701120dbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.904277ms +Jan 14 04:14:13.689: INFO: Pod "pod-projected-secrets-23bd796d-1cc8-4800-a81f-984701120dbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009407682s +Jan 14 04:14:15.687: INFO: Pod "pod-projected-secrets-23bd796d-1cc8-4800-a81f-984701120dbd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007668609s +STEP: Saw pod success 01/14/23 04:14:15.687 +Jan 14 04:14:15.687: INFO: Pod "pod-projected-secrets-23bd796d-1cc8-4800-a81f-984701120dbd" satisfied condition "Succeeded or Failed" +Jan 14 04:14:15.690: INFO: Trying to get logs from node 10.0.1.99 pod pod-projected-secrets-23bd796d-1cc8-4800-a81f-984701120dbd container projected-secret-volume-test: +STEP: delete the pod 01/14/23 04:14:15.703 +Jan 14 04:14:15.724: INFO: Waiting for pod pod-projected-secrets-23bd796d-1cc8-4800-a81f-984701120dbd to disappear +Jan 14 04:14:15.727: INFO: Pod pod-projected-secrets-23bd796d-1cc8-4800-a81f-984701120dbd no longer exists +[AfterEach] [sig-storage] Projected secret + test/e2e/framework/node/init/init.go:32 +Jan 14 04:14:15.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected secret + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected secret + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-1805" for this suite. 
01/14/23 04:14:15.731 +------------------------------ +• [4.095 seconds] +[sig-storage] Projected secret +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:78 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected secret + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:14:11.641 + Jan 14 04:14:11.641: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename projected 01/14/23 04:14:11.642 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:14:11.657 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:14:11.659 + [BeforeEach] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:78 + STEP: Creating projection with secret that has name projected-secret-test-map-709f8b29-34c4-4b53-8978-6bc8a6009090 01/14/23 04:14:11.661 + STEP: Creating a pod to test consume secrets 01/14/23 04:14:11.665 + Jan 14 04:14:11.679: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-23bd796d-1cc8-4800-a81f-984701120dbd" in namespace "projected-1805" to be "Succeeded or Failed" + Jan 14 04:14:11.682: INFO: Pod "pod-projected-secrets-23bd796d-1cc8-4800-a81f-984701120dbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.904277ms + Jan 14 04:14:13.689: INFO: Pod "pod-projected-secrets-23bd796d-1cc8-4800-a81f-984701120dbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009407682s + Jan 14 04:14:15.687: INFO: Pod "pod-projected-secrets-23bd796d-1cc8-4800-a81f-984701120dbd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007668609s + STEP: Saw pod success 01/14/23 04:14:15.687 + Jan 14 04:14:15.687: INFO: Pod "pod-projected-secrets-23bd796d-1cc8-4800-a81f-984701120dbd" satisfied condition "Succeeded or Failed" + Jan 14 04:14:15.690: INFO: Trying to get logs from node 10.0.1.99 pod pod-projected-secrets-23bd796d-1cc8-4800-a81f-984701120dbd container projected-secret-volume-test: + STEP: delete the pod 01/14/23 04:14:15.703 + Jan 14 04:14:15.724: INFO: Waiting for pod pod-projected-secrets-23bd796d-1cc8-4800-a81f-984701120dbd to disappear + Jan 14 04:14:15.727: INFO: Pod pod-projected-secrets-23bd796d-1cc8-4800-a81f-984701120dbd no longer exists + [AfterEach] [sig-storage] Projected secret + test/e2e/framework/node/init/init.go:32 + Jan 14 04:14:15.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected secret + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected secret + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-1805" for this suite. 
01/14/23 04:14:15.731 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:177 +[BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:14:15.736 +Jan 14 04:14:15.736: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename emptydir 01/14/23 04:14:15.737 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:14:15.755 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:14:15.757 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 +[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:177 +STEP: Creating a pod to test emptydir 0666 on node default medium 01/14/23 04:14:15.76 +Jan 14 04:14:15.770: INFO: Waiting up to 5m0s for pod "pod-d59d470e-c430-4635-9821-5ea714c51151" in namespace "emptydir-7931" to be "Succeeded or Failed" +Jan 14 04:14:15.774: INFO: Pod "pod-d59d470e-c430-4635-9821-5ea714c51151": Phase="Pending", Reason="", readiness=false. Elapsed: 3.592549ms +Jan 14 04:14:17.780: INFO: Pod "pod-d59d470e-c430-4635-9821-5ea714c51151": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009240611s +Jan 14 04:14:19.779: INFO: Pod "pod-d59d470e-c430-4635-9821-5ea714c51151": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008854429s +STEP: Saw pod success 01/14/23 04:14:19.779 +Jan 14 04:14:19.779: INFO: Pod "pod-d59d470e-c430-4635-9821-5ea714c51151" satisfied condition "Succeeded or Failed" +Jan 14 04:14:19.854: INFO: Trying to get logs from node 10.0.1.99 pod pod-d59d470e-c430-4635-9821-5ea714c51151 container test-container: +STEP: delete the pod 01/14/23 04:14:19.861 +Jan 14 04:14:19.893: INFO: Waiting for pod pod-d59d470e-c430-4635-9821-5ea714c51151 to disappear +Jan 14 04:14:19.896: INFO: Pod pod-d59d470e-c430-4635-9821-5ea714c51151 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 +Jan 14 04:14:19.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 +STEP: Destroying namespace "emptydir-7931" for this suite. 
01/14/23 04:14:19.901 +------------------------------ +• [4.170 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:177 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:14:15.736 + Jan 14 04:14:15.736: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename emptydir 01/14/23 04:14:15.737 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:14:15.755 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:14:15.757 + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 + [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:177 + STEP: Creating a pod to test emptydir 0666 on node default medium 01/14/23 04:14:15.76 + Jan 14 04:14:15.770: INFO: Waiting up to 5m0s for pod "pod-d59d470e-c430-4635-9821-5ea714c51151" in namespace "emptydir-7931" to be "Succeeded or Failed" + Jan 14 04:14:15.774: INFO: Pod "pod-d59d470e-c430-4635-9821-5ea714c51151": Phase="Pending", Reason="", readiness=false. Elapsed: 3.592549ms + Jan 14 04:14:17.780: INFO: Pod "pod-d59d470e-c430-4635-9821-5ea714c51151": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009240611s + Jan 14 04:14:19.779: INFO: Pod "pod-d59d470e-c430-4635-9821-5ea714c51151": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008854429s + STEP: Saw pod success 01/14/23 04:14:19.779 + Jan 14 04:14:19.779: INFO: Pod "pod-d59d470e-c430-4635-9821-5ea714c51151" satisfied condition "Succeeded or Failed" + Jan 14 04:14:19.854: INFO: Trying to get logs from node 10.0.1.99 pod pod-d59d470e-c430-4635-9821-5ea714c51151 container test-container: + STEP: delete the pod 01/14/23 04:14:19.861 + Jan 14 04:14:19.893: INFO: Waiting for pod pod-d59d470e-c430-4635-9821-5ea714c51151 to disappear + Jan 14 04:14:19.896: INFO: Pod pod-d59d470e-c430-4635-9821-5ea714c51151 no longer exists + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 + Jan 14 04:14:19.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 + STEP: Destroying namespace "emptydir-7931" for this suite. 
01/14/23 04:14:19.901 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:109 +[BeforeEach] [sig-storage] ConfigMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:14:19.907 +Jan 14 04:14:19.907: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename configmap 01/14/23 04:14:19.908 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:14:20.034 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:14:20.037 +[BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:109 +STEP: Creating configMap with name configmap-test-volume-map-45cf31c5-e7e2-493a-94f7-df9ef02ffe35 01/14/23 04:14:20.04 +STEP: Creating a pod to test consume configMaps 01/14/23 04:14:20.047 +Jan 14 04:14:20.074: INFO: Waiting up to 5m0s for pod "pod-configmaps-df48a3e7-8496-4e2a-a4ae-abb5c095538b" in namespace "configmap-7681" to be "Succeeded or Failed" +Jan 14 04:14:20.137: INFO: Pod "pod-configmaps-df48a3e7-8496-4e2a-a4ae-abb5c095538b": Phase="Pending", Reason="", readiness=false. Elapsed: 62.805059ms +Jan 14 04:14:22.142: INFO: Pod "pod-configmaps-df48a3e7-8496-4e2a-a4ae-abb5c095538b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068335313s +Jan 14 04:14:24.141: INFO: Pod "pod-configmaps-df48a3e7-8496-4e2a-a4ae-abb5c095538b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067137615s +STEP: Saw pod success 01/14/23 04:14:24.141 +Jan 14 04:14:24.141: INFO: Pod "pod-configmaps-df48a3e7-8496-4e2a-a4ae-abb5c095538b" satisfied condition "Succeeded or Failed" +Jan 14 04:14:24.144: INFO: Trying to get logs from node 10.0.1.99 pod pod-configmaps-df48a3e7-8496-4e2a-a4ae-abb5c095538b container agnhost-container: +STEP: delete the pod 01/14/23 04:14:24.15 +Jan 14 04:14:24.163: INFO: Waiting for pod pod-configmaps-df48a3e7-8496-4e2a-a4ae-abb5c095538b to disappear +Jan 14 04:14:24.166: INFO: Pod pod-configmaps-df48a3e7-8496-4e2a-a4ae-abb5c095538b no longer exists +[AfterEach] [sig-storage] ConfigMap + test/e2e/framework/node/init/init.go:32 +Jan 14 04:14:24.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] ConfigMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] ConfigMap + tear down framework | framework.go:193 +STEP: Destroying namespace "configmap-7681" for this suite. 
01/14/23 04:14:24.171 +------------------------------ +• [4.268 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:109 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] ConfigMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:14:19.907 + Jan 14 04:14:19.907: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename configmap 01/14/23 04:14:19.908 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:14:20.034 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:14:20.037 + [BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:109 + STEP: Creating configMap with name configmap-test-volume-map-45cf31c5-e7e2-493a-94f7-df9ef02ffe35 01/14/23 04:14:20.04 + STEP: Creating a pod to test consume configMaps 01/14/23 04:14:20.047 + Jan 14 04:14:20.074: INFO: Waiting up to 5m0s for pod "pod-configmaps-df48a3e7-8496-4e2a-a4ae-abb5c095538b" in namespace "configmap-7681" to be "Succeeded or Failed" + Jan 14 04:14:20.137: INFO: Pod "pod-configmaps-df48a3e7-8496-4e2a-a4ae-abb5c095538b": Phase="Pending", Reason="", readiness=false. Elapsed: 62.805059ms + Jan 14 04:14:22.142: INFO: Pod "pod-configmaps-df48a3e7-8496-4e2a-a4ae-abb5c095538b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068335313s + Jan 14 04:14:24.141: INFO: Pod "pod-configmaps-df48a3e7-8496-4e2a-a4ae-abb5c095538b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067137615s + STEP: Saw pod success 01/14/23 04:14:24.141 + Jan 14 04:14:24.141: INFO: Pod "pod-configmaps-df48a3e7-8496-4e2a-a4ae-abb5c095538b" satisfied condition "Succeeded or Failed" + Jan 14 04:14:24.144: INFO: Trying to get logs from node 10.0.1.99 pod pod-configmaps-df48a3e7-8496-4e2a-a4ae-abb5c095538b container agnhost-container: + STEP: delete the pod 01/14/23 04:14:24.15 + Jan 14 04:14:24.163: INFO: Waiting for pod pod-configmaps-df48a3e7-8496-4e2a-a4ae-abb5c095538b to disappear + Jan 14 04:14:24.166: INFO: Pod pod-configmaps-df48a3e7-8496-4e2a-a4ae-abb5c095538b no longer exists + [AfterEach] [sig-storage] ConfigMap + test/e2e/framework/node/init/init.go:32 + Jan 14 04:14:24.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] ConfigMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] ConfigMap + tear down framework | framework.go:193 + STEP: Destroying namespace "configmap-7681" for this suite. 
01/14/23 04:14:24.171 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:197 +[BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:14:24.176 +Jan 14 04:14:24.176: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename emptydir 01/14/23 04:14:24.177 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:14:24.194 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:14:24.196 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 +[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:197 +STEP: Creating a pod to test emptydir 0644 on node default medium 01/14/23 04:14:24.198 +Jan 14 04:14:24.207: INFO: Waiting up to 5m0s for pod "pod-6c7df812-11f2-4c85-beb0-503378efb8c1" in namespace "emptydir-3944" to be "Succeeded or Failed" +Jan 14 04:14:24.210: INFO: Pod "pod-6c7df812-11f2-4c85-beb0-503378efb8c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.838579ms +Jan 14 04:14:26.214: INFO: Pod "pod-6c7df812-11f2-4c85-beb0-503378efb8c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00745256s +Jan 14 04:14:28.215: INFO: Pod "pod-6c7df812-11f2-4c85-beb0-503378efb8c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008019197s +STEP: Saw pod success 01/14/23 04:14:28.215 +Jan 14 04:14:28.215: INFO: Pod "pod-6c7df812-11f2-4c85-beb0-503378efb8c1" satisfied condition "Succeeded or Failed" +Jan 14 04:14:28.218: INFO: Trying to get logs from node 10.0.1.106 pod pod-6c7df812-11f2-4c85-beb0-503378efb8c1 container test-container: +STEP: delete the pod 01/14/23 04:14:28.231 +Jan 14 04:14:28.244: INFO: Waiting for pod pod-6c7df812-11f2-4c85-beb0-503378efb8c1 to disappear +Jan 14 04:14:28.246: INFO: Pod pod-6c7df812-11f2-4c85-beb0-503378efb8c1 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 +Jan 14 04:14:28.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 +STEP: Destroying namespace "emptydir-3944" for this suite. 
01/14/23 04:14:28.251 +------------------------------ +• [4.079 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:197 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:14:24.176 + Jan 14 04:14:24.176: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename emptydir 01/14/23 04:14:24.177 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:14:24.194 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:14:24.196 + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 + [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:197 + STEP: Creating a pod to test emptydir 0644 on node default medium 01/14/23 04:14:24.198 + Jan 14 04:14:24.207: INFO: Waiting up to 5m0s for pod "pod-6c7df812-11f2-4c85-beb0-503378efb8c1" in namespace "emptydir-3944" to be "Succeeded or Failed" + Jan 14 04:14:24.210: INFO: Pod "pod-6c7df812-11f2-4c85-beb0-503378efb8c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.838579ms + Jan 14 04:14:26.214: INFO: Pod "pod-6c7df812-11f2-4c85-beb0-503378efb8c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00745256s + Jan 14 04:14:28.215: INFO: Pod "pod-6c7df812-11f2-4c85-beb0-503378efb8c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008019197s + STEP: Saw pod success 01/14/23 04:14:28.215 + Jan 14 04:14:28.215: INFO: Pod "pod-6c7df812-11f2-4c85-beb0-503378efb8c1" satisfied condition "Succeeded or Failed" + Jan 14 04:14:28.218: INFO: Trying to get logs from node 10.0.1.106 pod pod-6c7df812-11f2-4c85-beb0-503378efb8c1 container test-container: + STEP: delete the pod 01/14/23 04:14:28.231 + Jan 14 04:14:28.244: INFO: Waiting for pod pod-6c7df812-11f2-4c85-beb0-503378efb8c1 to disappear + Jan 14 04:14:28.246: INFO: Pod pod-6c7df812-11f2-4c85-beb0-503378efb8c1 no longer exists + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 + Jan 14 04:14:28.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 + STEP: Destroying namespace "emptydir-3944" for this suite. 
01/14/23 04:14:28.251 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:195 +[BeforeEach] [sig-node] Container Runtime + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:14:28.256 +Jan 14 04:14:28.256: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename container-runtime 01/14/23 04:14:28.257 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:14:28.27 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:14:28.272 +[BeforeEach] [sig-node] Container Runtime + test/e2e/framework/metrics/init/init.go:31 +[It] should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:195 +STEP: create the container 01/14/23 04:14:28.274 +STEP: wait for the container to reach Succeeded 01/14/23 04:14:28.282 +STEP: get the container status 01/14/23 04:14:32.301 +STEP: the container should be terminated 01/14/23 04:14:32.305 +STEP: the termination message should be set 01/14/23 04:14:32.305 +Jan 14 04:14:32.305: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- +STEP: delete the container 01/14/23 04:14:32.305 +[AfterEach] [sig-node] Container Runtime + test/e2e/framework/node/init/init.go:32 +Jan 14 04:14:32.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Container Runtime + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Container Runtime + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Container Runtime + tear down framework | framework.go:193 +STEP: Destroying namespace "container-runtime-166" for this suite. 
01/14/23 04:14:32.326 +------------------------------ +• [4.075 seconds] +[sig-node] Container Runtime +test/e2e/common/node/framework.go:23 + blackbox test + test/e2e/common/node/runtime.go:44 + on terminated container + test/e2e/common/node/runtime.go:137 + should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:195 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Container Runtime + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:14:28.256 + Jan 14 04:14:28.256: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename container-runtime 01/14/23 04:14:28.257 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:14:28.27 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:14:28.272 + [BeforeEach] [sig-node] Container Runtime + test/e2e/framework/metrics/init/init.go:31 + [It] should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:195 + STEP: create the container 01/14/23 04:14:28.274 + STEP: wait for the container to reach Succeeded 01/14/23 04:14:28.282 + STEP: get the container status 01/14/23 04:14:32.301 + STEP: the container should be terminated 01/14/23 04:14:32.305 + STEP: the termination message should be set 01/14/23 04:14:32.305 + Jan 14 04:14:32.305: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- + STEP: delete the container 01/14/23 04:14:32.305 + [AfterEach] [sig-node] Container Runtime + test/e2e/framework/node/init/init.go:32 + Jan 14 04:14:32.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Container Runtime + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Container Runtime + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Container Runtime + tear down framework | framework.go:193 + STEP: Destroying namespace "container-runtime-166" for this suite. 01/14/23 04:14:32.326 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] + test/e2e/apimachinery/resource_quota.go:75 +[BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:14:32.333 +Jan 14 04:14:32.333: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename resourcequota 01/14/23 04:14:32.334 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:14:32.35 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:14:32.352 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 +[It] should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:75 +STEP: Counting existing ResourceQuota 01/14/23 04:14:32.354 +STEP: Creating a ResourceQuota 01/14/23 04:14:37.358 +STEP: Ensuring resource quota status is calculated 01/14/23 04:14:37.362 +[AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/node/init/init.go:32 +Jan 14 04:14:39.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + tear down framework | framework.go:193 +STEP: Destroying namespace "resourcequota-3973" for this suite. 01/14/23 04:14:39.371 +------------------------------ +• [SLOW TEST] [7.043 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] + test/e2e/apimachinery/resource_quota.go:75 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:14:32.333 + Jan 14 04:14:32.333: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename resourcequota 01/14/23 04:14:32.334 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:14:32.35 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:14:32.352 + [BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 + [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] + test/e2e/apimachinery/resource_quota.go:75 + STEP: Counting existing ResourceQuota 01/14/23 04:14:32.354 + STEP: Creating a ResourceQuota 01/14/23 04:14:37.358 + STEP: Ensuring resource quota status is calculated 01/14/23 04:14:37.362 + [AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/node/init/init.go:32 + Jan 14 04:14:39.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + tear down framework | framework.go:193 + STEP: Destroying namespace "resourcequota-3973" for this suite. 
01/14/23 04:14:39.371 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's memory limit [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:207 +[BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:14:39.377 +Jan 14 04:14:39.377: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename projected 01/14/23 04:14:39.378 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:14:39.392 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:14:39.395 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 +[It] should provide container's memory limit [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:207 +STEP: Creating a pod to test downward API volume plugin 01/14/23 04:14:39.398 +Jan 14 04:14:39.408: INFO: Waiting up to 5m0s for pod "downwardapi-volume-74657793-1ee2-4aa0-8729-16645c836043" in namespace "projected-9892" to be "Succeeded or Failed" +Jan 14 04:14:39.410: INFO: Pod "downwardapi-volume-74657793-1ee2-4aa0-8729-16645c836043": Phase="Pending", Reason="", readiness=false. Elapsed: 2.778362ms +Jan 14 04:14:41.415: INFO: Pod "downwardapi-volume-74657793-1ee2-4aa0-8729-16645c836043": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007081403s +Jan 14 04:14:43.415: INFO: Pod "downwardapi-volume-74657793-1ee2-4aa0-8729-16645c836043": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007302208s +STEP: Saw pod success 01/14/23 04:14:43.415 +Jan 14 04:14:43.415: INFO: Pod "downwardapi-volume-74657793-1ee2-4aa0-8729-16645c836043" satisfied condition "Succeeded or Failed" +Jan 14 04:14:43.418: INFO: Trying to get logs from node 10.0.1.99 pod downwardapi-volume-74657793-1ee2-4aa0-8729-16645c836043 container client-container: +STEP: delete the pod 01/14/23 04:14:43.424 +Jan 14 04:14:43.435: INFO: Waiting for pod downwardapi-volume-74657793-1ee2-4aa0-8729-16645c836043 to disappear +Jan 14 04:14:43.438: INFO: Pod downwardapi-volume-74657793-1ee2-4aa0-8729-16645c836043 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/node/init/init.go:32 +Jan 14 04:14:43.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-9892" for this suite. 
01/14/23 04:14:43.443 +------------------------------ +• [4.073 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should provide container's memory limit [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:207 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:14:39.377 + Jan 14 04:14:39.377: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename projected 01/14/23 04:14:39.378 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:14:39.392 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:14:39.395 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 + [It] should provide container's memory limit [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:207 + STEP: Creating a pod to test downward API volume plugin 01/14/23 04:14:39.398 + Jan 14 04:14:39.408: INFO: Waiting up to 5m0s for pod "downwardapi-volume-74657793-1ee2-4aa0-8729-16645c836043" in namespace "projected-9892" to be "Succeeded or Failed" + Jan 14 04:14:39.410: INFO: Pod "downwardapi-volume-74657793-1ee2-4aa0-8729-16645c836043": Phase="Pending", Reason="", readiness=false. Elapsed: 2.778362ms + Jan 14 04:14:41.415: INFO: Pod "downwardapi-volume-74657793-1ee2-4aa0-8729-16645c836043": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007081403s + Jan 14 04:14:43.415: INFO: Pod "downwardapi-volume-74657793-1ee2-4aa0-8729-16645c836043": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007302208s + STEP: Saw pod success 01/14/23 04:14:43.415 + Jan 14 04:14:43.415: INFO: Pod "downwardapi-volume-74657793-1ee2-4aa0-8729-16645c836043" satisfied condition "Succeeded or Failed" + Jan 14 04:14:43.418: INFO: Trying to get logs from node 10.0.1.99 pod downwardapi-volume-74657793-1ee2-4aa0-8729-16645c836043 container client-container: + STEP: delete the pod 01/14/23 04:14:43.424 + Jan 14 04:14:43.435: INFO: Waiting for pod downwardapi-volume-74657793-1ee2-4aa0-8729-16645c836043 to disappear + Jan 14 04:14:43.438: INFO: Pod downwardapi-volume-74657793-1ee2-4aa0-8729-16645c836043 no longer exists + [AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/node/init/init.go:32 + Jan 14 04:14:43.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-9892" for this suite. 
01/14/23 04:14:43.443 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Secrets + should fail to create secret due to empty secret key [Conformance] + test/e2e/common/node/secrets.go:140 +[BeforeEach] [sig-node] Secrets + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:14:43.452 +Jan 14 04:14:43.452: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename secrets 01/14/23 04:14:43.452 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:14:43.467 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:14:43.469 +[BeforeEach] [sig-node] Secrets + test/e2e/framework/metrics/init/init.go:31 +[It] should fail to create secret due to empty secret key [Conformance] + test/e2e/common/node/secrets.go:140 +STEP: Creating projection with secret that has name secret-emptykey-test-5c8553d4-3710-41b8-ac4e-802b5e693b91 01/14/23 04:14:43.471 +[AfterEach] [sig-node] Secrets + test/e2e/framework/node/init/init.go:32 +Jan 14 04:14:43.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Secrets + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Secrets + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Secrets + tear down framework | framework.go:193 +STEP: Destroying namespace "secrets-9022" for this suite. 01/14/23 04:14:43.476 +------------------------------ +• [0.032 seconds] +[sig-node] Secrets +test/e2e/common/node/framework.go:23 + should fail to create secret due to empty secret key [Conformance] + test/e2e/common/node/secrets.go:140 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Secrets + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:14:43.452 + Jan 14 04:14:43.452: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename secrets 01/14/23 04:14:43.452 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:14:43.467 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:14:43.469 + [BeforeEach] [sig-node] Secrets + test/e2e/framework/metrics/init/init.go:31 + [It] should fail to create secret due to empty secret key [Conformance] + test/e2e/common/node/secrets.go:140 + STEP: Creating projection with secret that has name secret-emptykey-test-5c8553d4-3710-41b8-ac4e-802b5e693b91 01/14/23 04:14:43.471 + [AfterEach] [sig-node] Secrets + test/e2e/framework/node/init/init.go:32 + Jan 14 04:14:43.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Secrets + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Secrets + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Secrets + tear down framework | framework.go:193 + STEP: Destroying namespace "secrets-9022" for this suite. 
01/14/23 04:14:43.476 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should rollback without unnecessary restarts [Conformance] + test/e2e/apps/daemon_set.go:432 +[BeforeEach] [sig-apps] Daemon set [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:14:43.484 +Jan 14 04:14:43.484: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename daemonsets 01/14/23 04:14:43.484 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:14:43.498 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:14:43.501 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:146 +[It] should rollback without unnecessary restarts [Conformance] + test/e2e/apps/daemon_set.go:432 +Jan 14 04:14:43.523: INFO: Create a RollingUpdate DaemonSet +Jan 14 04:14:43.529: INFO: Check that daemon pods launch on every node of the cluster +Jan 14 04:14:43.533: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:14:43.533: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:14:43.533: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:14:43.536: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jan 14 04:14:43.536: INFO: Node 10.0.1.106 is running 0 daemon pod, expected 1 +Jan 14 04:14:44.541: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:14:44.541: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:14:44.541: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:14:44.545: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Jan 14 04:14:44.545: INFO: Node 10.0.1.106 is running 0 daemon pod, expected 1 +Jan 14 04:14:45.542: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:14:45.542: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:14:45.542: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:14:45.546: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 +Jan 14 04:14:45.546: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set +Jan 14 04:14:45.546: INFO: Update the DaemonSet to trigger a rollout +Jan 14 04:14:45.556: INFO: Updating DaemonSet daemon-set +Jan 14 04:14:47.573: 
INFO: Roll back the DaemonSet before rollout is complete +Jan 14 04:14:47.583: INFO: Updating DaemonSet daemon-set +Jan 14 04:14:47.583: INFO: Make sure DaemonSet rollback is complete +Jan 14 04:14:47.587: INFO: Wrong image for pod: daemon-set-vpcgw. Expected: registry.k8s.io/e2e-test-images/httpd:2.4.38-4, got: foo:non-existent. +Jan 14 04:14:47.587: INFO: Pod daemon-set-vpcgw is not available +Jan 14 04:14:47.591: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:14:47.591: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:14:47.591: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:14:48.601: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:14:48.601: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:14:48.601: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:14:49.601: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:14:49.601: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:14:49.601: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:14:50.601: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:14:50.601: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:14:50.601: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:14:51.601: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:14:51.601: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:14:51.601: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:14:52.600: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:14:52.601: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node 
+Jan 14 04:14:52.601: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:14:53.596: INFO: Pod daemon-set-85klp is not available +Jan 14 04:14:53.601: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:14:53.601: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:14:53.601: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:111 +STEP: Deleting DaemonSet "daemon-set" 01/14/23 04:14:53.608 +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7950, will wait for the garbage collector to delete the pods 01/14/23 04:14:53.608 +Jan 14 04:14:53.670: INFO: Deleting DaemonSet.extensions daemon-set took: 7.988802ms +Jan 14 04:14:53.770: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.104199ms +Jan 14 04:14:55.573: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jan 14 04:14:55.574: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +Jan 14 04:14:55.576: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"438279"},"items":null} + +Jan 14 04:14:55.579: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"438279"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/node/init/init.go:32 +Jan 14 04:14:55.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "daemonsets-7950" for this suite. 
01/14/23 04:14:55.597 +------------------------------ +• [SLOW TEST] [12.118 seconds] +[sig-apps] Daemon set [Serial] +test/e2e/apps/framework.go:23 + should rollback without unnecessary restarts [Conformance] + test/e2e/apps/daemon_set.go:432 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Daemon set [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:14:43.484 + Jan 14 04:14:43.484: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename daemonsets 01/14/23 04:14:43.484 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:14:43.498 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:14:43.501 + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:146 + [It] should rollback without unnecessary restarts [Conformance] + test/e2e/apps/daemon_set.go:432 + Jan 14 04:14:43.523: INFO: Create a RollingUpdate DaemonSet + Jan 14 04:14:43.529: INFO: Check that daemon pods launch on every node of the cluster + Jan 14 04:14:43.533: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:14:43.533: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:14:43.533: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:14:43.536: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jan 14 04:14:43.536: INFO: Node 10.0.1.106 is running 0 daemon pod, expected 1 + Jan 14 04:14:44.541: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:14:44.541: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:14:44.541: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:14:44.545: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 + Jan 14 04:14:44.545: INFO: Node 10.0.1.106 is running 0 daemon pod, expected 1 + Jan 14 04:14:45.542: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:14:45.542: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:14:45.542: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:14:45.546: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 + Jan 14 04:14:45.546: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set + Jan 14 04:14:45.546: INFO: Update the DaemonSet to trigger a rollout + Jan 14 04:14:45.556: INFO: 
Updating DaemonSet daemon-set + Jan 14 04:14:47.573: INFO: Roll back the DaemonSet before rollout is complete + Jan 14 04:14:47.583: INFO: Updating DaemonSet daemon-set + Jan 14 04:14:47.583: INFO: Make sure DaemonSet rollback is complete + Jan 14 04:14:47.587: INFO: Wrong image for pod: daemon-set-vpcgw. Expected: registry.k8s.io/e2e-test-images/httpd:2.4.38-4, got: foo:non-existent. + Jan 14 04:14:47.587: INFO: Pod daemon-set-vpcgw is not available + Jan 14 04:14:47.591: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:14:47.591: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:14:47.591: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:14:48.601: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:14:48.601: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:14:48.601: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:14:49.601: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:14:49.601: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:14:49.601: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:14:50.601: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:14:50.601: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:14:50.601: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:14:51.601: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:14:51.601: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:14:51.601: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:14:52.600: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:14:52.601: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:14:52.601: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:14:53.596: INFO: Pod daemon-set-85klp is not available + Jan 14 04:14:53.601: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:14:53.601: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:14:53.601: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:111 + STEP: Deleting DaemonSet "daemon-set" 01/14/23 04:14:53.608 + STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7950, will wait for the garbage collector to delete the pods 01/14/23 04:14:53.608 + Jan 14 04:14:53.670: INFO: Deleting DaemonSet.extensions daemon-set took: 7.988802ms + Jan 14 04:14:53.770: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.104199ms + Jan 14 04:14:55.573: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jan 14 04:14:55.574: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set + Jan 14 04:14:55.576: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"438279"},"items":null} + + Jan 14 04:14:55.579: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"438279"},"items":null} + + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/node/init/init.go:32 + Jan 14 04:14:55.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "daemonsets-7950" for this suite. 
01/14/23 04:14:55.597 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should be able to deny attaching pod [Conformance] + test/e2e/apimachinery/webhook.go:209 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:14:55.602 +Jan 14 04:14:55.602: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename webhook 01/14/23 04:14:55.603 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:14:55.618 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:14:55.62 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 01/14/23 04:14:55.631 +STEP: Create role binding to let webhook read extension-apiserver-authentication 01/14/23 04:14:56.183 +STEP: Deploying the webhook pod 01/14/23 04:14:56.193 +STEP: Wait for the deployment to be ready 01/14/23 04:14:56.204 +Jan 14 04:14:56.213: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +Jan 14 04:14:58.224: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 14, 4, 14, 56, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 14, 56, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 4, 14, 56, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 14, 56, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is progressing."}}, CollisionCount:(*int32)(nil)} +STEP: Deploying the webhook service 01/14/23 04:15:00.228 +STEP: Verifying the service has paired with the endpoint 01/14/23 04:15:00.238 +Jan 14 04:15:01.238: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should be able to deny attaching pod [Conformance] + test/e2e/apimachinery/webhook.go:209 +STEP: Registering the webhook via the AdmissionRegistration API 01/14/23 04:15:01.242 +STEP: create a pod 01/14/23 04:15:01.255 +Jan 14 04:15:01.265: INFO: Waiting up to 5m0s for pod "to-be-attached-pod" in namespace "webhook-191" to be "running" +Jan 14 04:15:01.268: INFO: Pod "to-be-attached-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 3.003106ms +Jan 14 04:15:03.273: INFO: Pod "to-be-attached-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008222991s +Jan 14 04:15:03.273: INFO: Pod "to-be-attached-pod" satisfied condition "running" +STEP: 'kubectl attach' the pod, should be denied by the webhook 01/14/23 04:15:03.273 +Jan 14 04:15:03.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=webhook-191 attach --namespace=webhook-191 to-be-attached-pod -i -c=container1' +Jan 14 04:15:03.346: INFO: rc: 1 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Jan 14 04:15:03.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "webhook-191" for this suite. 01/14/23 04:15:03.399 +STEP: Destroying namespace "webhook-191-markers" for this suite. 01/14/23 04:15:03.405 +------------------------------ +• [SLOW TEST] [7.808 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should be able to deny attaching pod [Conformance] + test/e2e/apimachinery/webhook.go:209 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:14:55.602 + Jan 14 04:14:55.602: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename webhook 01/14/23 04:14:55.603 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:14:55.618 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:14:55.62 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 01/14/23 04:14:55.631 + STEP: Create role binding to let webhook read extension-apiserver-authentication 01/14/23 04:14:56.183 + STEP: Deploying the webhook pod 01/14/23 04:14:56.193 + STEP: Wait for the deployment to be ready 01/14/23 04:14:56.204 + Jan 14 04:14:56.213: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + Jan 14 04:14:58.224: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 14, 4, 14, 56, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 14, 56, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 4, 14, 56, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 14, 56, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-865554f4d9\" is 
progressing."}}, CollisionCount:(*int32)(nil)} + STEP: Deploying the webhook service 01/14/23 04:15:00.228 + STEP: Verifying the service has paired with the endpoint 01/14/23 04:15:00.238 + Jan 14 04:15:01.238: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should be able to deny attaching pod [Conformance] + test/e2e/apimachinery/webhook.go:209 + STEP: Registering the webhook via the AdmissionRegistration API 01/14/23 04:15:01.242 + STEP: create a pod 01/14/23 04:15:01.255 + Jan 14 04:15:01.265: INFO: Waiting up to 5m0s for pod "to-be-attached-pod" in namespace "webhook-191" to be "running" + Jan 14 04:15:01.268: INFO: Pod "to-be-attached-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 3.003106ms + Jan 14 04:15:03.273: INFO: Pod "to-be-attached-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.008222991s + Jan 14 04:15:03.273: INFO: Pod "to-be-attached-pod" satisfied condition "running" + STEP: 'kubectl attach' the pod, should be denied by the webhook 01/14/23 04:15:03.273 + Jan 14 04:15:03.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=webhook-191 attach --namespace=webhook-191 to-be-attached-pod -i -c=container1' + Jan 14 04:15:03.346: INFO: rc: 1 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Jan 14 04:15:03.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "webhook-191" for this suite. 01/14/23 04:15:03.399 + STEP: Destroying namespace "webhook-191-markers" for this suite. 
01/14/23 04:15:03.405 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-instrumentation] Events API + should delete a collection of events [Conformance] + test/e2e/instrumentation/events.go:207 +[BeforeEach] [sig-instrumentation] Events API + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:15:03.416 +Jan 14 04:15:03.416: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename events 01/14/23 04:15:03.416 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:15:03.433 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:15:03.435 +[BeforeEach] [sig-instrumentation] Events API + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-instrumentation] Events API + test/e2e/instrumentation/events.go:84 +[It] should delete a collection of events [Conformance] + test/e2e/instrumentation/events.go:207 +STEP: Create set of events 01/14/23 04:15:03.437 +STEP: get a list of Events with a label in the current namespace 01/14/23 04:15:03.455 +STEP: delete a list of events 01/14/23 04:15:03.458 +Jan 14 04:15:03.458: INFO: requesting DeleteCollection of events +STEP: check that the list of events matches the requested quantity 01/14/23 04:15:03.478 +[AfterEach] [sig-instrumentation] Events API + test/e2e/framework/node/init/init.go:32 +Jan 14 04:15:03.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-instrumentation] Events API + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-instrumentation] Events API + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-instrumentation] Events API + tear down framework | framework.go:193 +STEP: Destroying namespace "events-9334" for this suite. 
01/14/23 04:15:03.485 +------------------------------ +• [0.074 seconds] +[sig-instrumentation] Events API +test/e2e/instrumentation/common/framework.go:23 + should delete a collection of events [Conformance] + test/e2e/instrumentation/events.go:207 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-instrumentation] Events API + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:15:03.416 + Jan 14 04:15:03.416: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename events 01/14/23 04:15:03.416 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:15:03.433 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:15:03.435 + [BeforeEach] [sig-instrumentation] Events API + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-instrumentation] Events API + test/e2e/instrumentation/events.go:84 + [It] should delete a collection of events [Conformance] + test/e2e/instrumentation/events.go:207 + STEP: Create set of events 01/14/23 04:15:03.437 + STEP: get a list of Events with a label in the current namespace 01/14/23 04:15:03.455 + STEP: delete a list of events 01/14/23 04:15:03.458 + Jan 14 04:15:03.458: INFO: requesting DeleteCollection of events + STEP: check that the list of events matches the requested quantity 01/14/23 04:15:03.478 + [AfterEach] [sig-instrumentation] Events API + test/e2e/framework/node/init/init.go:32 + Jan 14 04:15:03.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-instrumentation] Events API + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-instrumentation] Events API + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-instrumentation] Events API + tear down framework | framework.go:193 + STEP: Destroying namespace "events-9334" for this suite. 01/14/23 04:15:03.485 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-network] Services + should find a service from listing all namespaces [Conformance] + test/e2e/network/service.go:3219 +[BeforeEach] [sig-network] Services + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:15:03.49 +Jan 14 04:15:03.490: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename services 01/14/23 04:15:03.491 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:15:03.503 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:15:03.505 +[BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should find a service from listing all namespaces [Conformance] + test/e2e/network/service.go:3219 +STEP: fetching services 01/14/23 04:15:03.507 +[AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 +Jan 14 04:15:03.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 +STEP: Destroying namespace "services-559" for this suite. 
01/14/23 04:15:03.514 +------------------------------ +• [0.030 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should find a service from listing all namespaces [Conformance] + test/e2e/network/service.go:3219 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:15:03.49 + Jan 14 04:15:03.490: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename services 01/14/23 04:15:03.491 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:15:03.503 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:15:03.505 + [BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should find a service from listing all namespaces [Conformance] + test/e2e/network/service.go:3219 + STEP: fetching services 01/14/23 04:15:03.507 + [AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 + Jan 14 04:15:03.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 + STEP: Destroying namespace "services-559" for this suite. 01/14/23 04:15:03.514 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-api-machinery] Watchers + should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + test/e2e/apimachinery/watch.go:257 +[BeforeEach] [sig-api-machinery] Watchers + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:15:03.52 +Jan 14 04:15:03.520: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename watch 01/14/23 04:15:03.521 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:15:03.534 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:15:03.536 +[BeforeEach] [sig-api-machinery] Watchers + test/e2e/framework/metrics/init/init.go:31 +[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + test/e2e/apimachinery/watch.go:257 +STEP: creating a watch on configmaps with a certain label 01/14/23 04:15:03.538 +STEP: creating a new configmap 01/14/23 04:15:03.539 +STEP: modifying the configmap once 01/14/23 04:15:03.545 +STEP: changing the label value of the configmap 01/14/23 04:15:03.552 +STEP: Expecting to observe a delete notification for the watched object 01/14/23 04:15:03.559 +Jan 14 04:15:03.559: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7728 31a7bae7-c719-48c0-9980-f812caf4583a 438429 0 2023-01-14 04:15:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-01-14 04:15:03 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} +Jan 14 04:15:03.560: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7728 31a7bae7-c719-48c0-9980-f812caf4583a 438432 0 2023-01-14 04:15:03 +0000 UTC 
map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-01-14 04:15:03 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +Jan 14 04:15:03.560: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7728 31a7bae7-c719-48c0-9980-f812caf4583a 438433 0 2023-01-14 04:15:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-01-14 04:15:03 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} +STEP: modifying the configmap a second time 01/14/23 04:15:03.56 +STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements 01/14/23 04:15:03.566 +STEP: changing the label value of the configmap back 01/14/23 04:15:13.568 +STEP: modifying the configmap a third time 01/14/23 04:15:13.579 +STEP: deleting the configmap 01/14/23 04:15:13.585 +STEP: Expecting to observe an add notification for the watched object when the label value was restored 01/14/23 04:15:13.591 +Jan 14 04:15:13.591: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7728 31a7bae7-c719-48c0-9980-f812caf4583a 438514 0 2023-01-14 04:15:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-01-14 04:15:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Jan 14 04:15:13.591: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7728 31a7bae7-c719-48c0-9980-f812caf4583a 438515 0 2023-01-14 04:15:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-01-14 04:15:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} +Jan 14 04:15:13.591: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7728 31a7bae7-c719-48c0-9980-f812caf4583a 438516 0 2023-01-14 04:15:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-01-14 04:15:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + test/e2e/framework/node/init/init.go:32 +Jan 14 04:15:13.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Watchers + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] Watchers + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] Watchers + tear down framework | framework.go:193 +STEP: Destroying namespace "watch-7728" for this suite. 
01/14/23 04:15:13.596 +------------------------------ +• [SLOW TEST] [10.080 seconds] +[sig-api-machinery] Watchers +test/e2e/apimachinery/framework.go:23 + should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + test/e2e/apimachinery/watch.go:257 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Watchers + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:15:03.52 + Jan 14 04:15:03.520: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename watch 01/14/23 04:15:03.521 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:15:03.534 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:15:03.536 + [BeforeEach] [sig-api-machinery] Watchers + test/e2e/framework/metrics/init/init.go:31 + [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + test/e2e/apimachinery/watch.go:257 + STEP: creating a watch on configmaps with a certain label 01/14/23 04:15:03.538 + STEP: creating a new configmap 01/14/23 04:15:03.539 + STEP: modifying the configmap once 01/14/23 04:15:03.545 + STEP: changing the label value of the configmap 01/14/23 04:15:03.552 + STEP: Expecting to observe a delete notification for the watched object 01/14/23 04:15:03.559 + Jan 14 04:15:03.559: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7728 31a7bae7-c719-48c0-9980-f812caf4583a 438429 0 2023-01-14 04:15:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-01-14 04:15:03 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} + Jan 14 04:15:03.560: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7728 31a7bae7-c719-48c0-9980-f812caf4583a 438432 0 2023-01-14 04:15:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-01-14 04:15:03 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} + Jan 14 04:15:03.560: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7728 31a7bae7-c719-48c0-9980-f812caf4583a 438433 0 2023-01-14 04:15:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-01-14 04:15:03 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} + STEP: modifying the configmap a second time 01/14/23 04:15:03.56 + STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements 01/14/23 04:15:03.566 + STEP: changing the label value of the configmap back 01/14/23 04:15:13.568 + STEP: modifying the configmap a third time 01/14/23 04:15:13.579 + STEP: deleting the configmap 01/14/23 04:15:13.585 + STEP: Expecting to observe an add notification for the watched object when the label value was restored 01/14/23 04:15:13.591 + Jan 14 04:15:13.591: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7728 31a7bae7-c719-48c0-9980-f812caf4583a 438514 0 2023-01-14 04:15:03 +0000 UTC 
map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-01-14 04:15:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} + Jan 14 04:15:13.591: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7728 31a7bae7-c719-48c0-9980-f812caf4583a 438515 0 2023-01-14 04:15:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-01-14 04:15:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} + Jan 14 04:15:13.591: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7728 31a7bae7-c719-48c0-9980-f812caf4583a 438516 0 2023-01-14 04:15:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-01-14 04:15:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} + [AfterEach] [sig-api-machinery] Watchers + test/e2e/framework/node/init/init.go:32 + Jan 14 04:15:13.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Watchers + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] Watchers + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] Watchers + tear down framework | framework.go:193 + STEP: Destroying namespace "watch-7728" for this suite. 01/14/23 04:15:13.596 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-node] Probing container + should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:215 +[BeforeEach] [sig-node] Probing container + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:15:13.601 +Jan 14 04:15:13.601: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename container-probe 01/14/23 04:15:13.601 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:15:13.615 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:15:13.617 +[BeforeEach] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 +[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:215 +STEP: Creating pod test-webserver-666be72b-c8d8-4d34-8231-bda876ca120c in namespace container-probe-1952 01/14/23 04:15:13.619 +Jan 14 04:15:13.632: INFO: Waiting up to 5m0s for pod "test-webserver-666be72b-c8d8-4d34-8231-bda876ca120c" in namespace "container-probe-1952" to be "not pending" +Jan 14 04:15:13.636: INFO: Pod "test-webserver-666be72b-c8d8-4d34-8231-bda876ca120c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084676ms +Jan 14 04:15:15.640: INFO: Pod "test-webserver-666be72b-c8d8-4d34-8231-bda876ca120c": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008373977s +Jan 14 04:15:15.640: INFO: Pod "test-webserver-666be72b-c8d8-4d34-8231-bda876ca120c" satisfied condition "not pending" +Jan 14 04:15:15.640: INFO: Started pod test-webserver-666be72b-c8d8-4d34-8231-bda876ca120c in namespace container-probe-1952 +STEP: checking the pod's current state and verifying that restartCount is present 01/14/23 04:15:15.64 +Jan 14 04:15:15.643: INFO: Initial restart count of pod test-webserver-666be72b-c8d8-4d34-8231-bda876ca120c is 0 +STEP: deleting the pod 01/14/23 04:19:16.259 +[AfterEach] [sig-node] Probing container + test/e2e/framework/node/init/init.go:32 +Jan 14 04:19:16.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Probing container + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Probing container + tear down framework | framework.go:193 +STEP: Destroying namespace "container-probe-1952" for this suite. 01/14/23 04:19:16.282 +------------------------------ +• [SLOW TEST] [242.686 seconds] +[sig-node] Probing container +test/e2e/common/node/framework.go:23 + should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:215 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Probing container + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:15:13.601 + Jan 14 04:15:13.601: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename container-probe 01/14/23 04:15:13.601 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:15:13.615 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:15:13.617 + [BeforeEach] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 + [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:215 + STEP: Creating pod test-webserver-666be72b-c8d8-4d34-8231-bda876ca120c in namespace container-probe-1952 01/14/23 04:15:13.619 + Jan 14 04:15:13.632: INFO: Waiting up to 5m0s for pod "test-webserver-666be72b-c8d8-4d34-8231-bda876ca120c" in namespace "container-probe-1952" to be "not pending" + Jan 14 04:15:13.636: INFO: Pod "test-webserver-666be72b-c8d8-4d34-8231-bda876ca120c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084676ms + Jan 14 04:15:15.640: INFO: Pod "test-webserver-666be72b-c8d8-4d34-8231-bda876ca120c": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008373977s + Jan 14 04:15:15.640: INFO: Pod "test-webserver-666be72b-c8d8-4d34-8231-bda876ca120c" satisfied condition "not pending" + Jan 14 04:15:15.640: INFO: Started pod test-webserver-666be72b-c8d8-4d34-8231-bda876ca120c in namespace container-probe-1952 + STEP: checking the pod's current state and verifying that restartCount is present 01/14/23 04:15:15.64 + Jan 14 04:15:15.643: INFO: Initial restart count of pod test-webserver-666be72b-c8d8-4d34-8231-bda876ca120c is 0 + STEP: deleting the pod 01/14/23 04:19:16.259 + [AfterEach] [sig-node] Probing container + test/e2e/framework/node/init/init.go:32 + Jan 14 04:19:16.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Probing container + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Probing container + tear down framework | framework.go:193 + STEP: Destroying namespace "container-probe-1952" for this suite. 01/14/23 04:19:16.282 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should allow substituting values in a container's command [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:73 +[BeforeEach] [sig-node] Variable Expansion + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:19:16.288 +Jan 14 04:19:16.288: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename var-expansion 01/14/23 04:19:16.289 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:19:16.304 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:19:16.306 +[BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:31 +[It] should allow substituting values in a container's command [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:73 +STEP: Creating a pod to test substitution in container's command 01/14/23 04:19:16.308 +Jan 14 04:19:16.322: INFO: Waiting up to 5m0s for pod "var-expansion-b914bc97-4ddd-4015-a394-e3129aa60c81" in namespace "var-expansion-5006" to be "Succeeded or Failed" +Jan 14 04:19:16.325: INFO: Pod "var-expansion-b914bc97-4ddd-4015-a394-e3129aa60c81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.899804ms +Jan 14 04:19:18.330: INFO: Pod "var-expansion-b914bc97-4ddd-4015-a394-e3129aa60c81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008146054s +Jan 14 04:19:20.329: INFO: Pod "var-expansion-b914bc97-4ddd-4015-a394-e3129aa60c81": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.007138013s +STEP: Saw pod success 01/14/23 04:19:20.329 +Jan 14 04:19:20.329: INFO: Pod "var-expansion-b914bc97-4ddd-4015-a394-e3129aa60c81" satisfied condition "Succeeded or Failed" +Jan 14 04:19:20.332: INFO: Trying to get logs from node 10.0.1.106 pod var-expansion-b914bc97-4ddd-4015-a394-e3129aa60c81 container dapi-container: +STEP: delete the pod 01/14/23 04:19:20.343 +Jan 14 04:19:20.360: INFO: Waiting for pod var-expansion-b914bc97-4ddd-4015-a394-e3129aa60c81 to disappear +Jan 14 04:19:20.363: INFO: Pod var-expansion-b914bc97-4ddd-4015-a394-e3129aa60c81 no longer exists +[AfterEach] [sig-node] Variable Expansion + test/e2e/framework/node/init/init.go:32 +Jan 14 04:19:20.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Variable Expansion + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Variable Expansion + tear down framework | framework.go:193 +STEP: Destroying namespace "var-expansion-5006" for this suite. 01/14/23 04:19:20.367 +------------------------------ +• [4.085 seconds] +[sig-node] Variable Expansion +test/e2e/common/node/framework.go:23 + should allow substituting values in a container's command [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:73 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Variable Expansion + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:19:16.288 + Jan 14 04:19:16.288: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename var-expansion 01/14/23 04:19:16.289 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:19:16.304 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:19:16.306 + [BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:31 + [It] should allow substituting values in a container's command [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:73 + STEP: Creating a pod to test substitution in container's command 01/14/23 04:19:16.308 + Jan 14 04:19:16.322: INFO: Waiting up to 5m0s for pod "var-expansion-b914bc97-4ddd-4015-a394-e3129aa60c81" in namespace "var-expansion-5006" to be "Succeeded or Failed" + Jan 14 04:19:16.325: INFO: Pod "var-expansion-b914bc97-4ddd-4015-a394-e3129aa60c81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.899804ms + Jan 14 04:19:18.330: INFO: Pod "var-expansion-b914bc97-4ddd-4015-a394-e3129aa60c81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008146054s + Jan 14 04:19:20.329: INFO: Pod "var-expansion-b914bc97-4ddd-4015-a394-e3129aa60c81": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.007138013s + STEP: Saw pod success 01/14/23 04:19:20.329 + Jan 14 04:19:20.329: INFO: Pod "var-expansion-b914bc97-4ddd-4015-a394-e3129aa60c81" satisfied condition "Succeeded or Failed" + Jan 14 04:19:20.332: INFO: Trying to get logs from node 10.0.1.106 pod var-expansion-b914bc97-4ddd-4015-a394-e3129aa60c81 container dapi-container: + STEP: delete the pod 01/14/23 04:19:20.343 + Jan 14 04:19:20.360: INFO: Waiting for pod var-expansion-b914bc97-4ddd-4015-a394-e3129aa60c81 to disappear + Jan 14 04:19:20.363: INFO: Pod var-expansion-b914bc97-4ddd-4015-a394-e3129aa60c81 no longer exists + [AfterEach] [sig-node] Variable Expansion + test/e2e/framework/node/init/init.go:32 + Jan 14 04:19:20.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Variable Expansion + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Variable Expansion + tear down framework | framework.go:193 + STEP: Destroying namespace "var-expansion-5006" for this suite. 01/14/23 04:19:20.367 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-storage] Secrets + should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:99 +[BeforeEach] [sig-storage] Secrets + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:19:20.373 +Jan 14 04:19:20.373: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename secrets 01/14/23 04:19:20.374 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:19:20.391 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:19:20.393 +[BeforeEach] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:31 +[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:99 +STEP: Creating secret with name secret-test-0ed1507b-e201-45ce-a3fe-8f2120348435 01/14/23 04:19:20.413 +STEP: Creating a pod to test consume secrets 01/14/23 04:19:20.416 +Jan 14 04:19:20.425: INFO: Waiting up to 5m0s for pod "pod-secrets-ddb7c01e-dd8d-4bb1-a201-850eaf78dcfe" in namespace "secrets-2165" to be "Succeeded or Failed" +Jan 14 04:19:20.427: INFO: Pod "pod-secrets-ddb7c01e-dd8d-4bb1-a201-850eaf78dcfe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.731329ms +Jan 14 04:19:22.433: INFO: Pod "pod-secrets-ddb7c01e-dd8d-4bb1-a201-850eaf78dcfe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008096474s +Jan 14 04:19:24.432: INFO: Pod "pod-secrets-ddb7c01e-dd8d-4bb1-a201-850eaf78dcfe": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.007532336s +STEP: Saw pod success 01/14/23 04:19:24.432 +Jan 14 04:19:24.432: INFO: Pod "pod-secrets-ddb7c01e-dd8d-4bb1-a201-850eaf78dcfe" satisfied condition "Succeeded or Failed" +Jan 14 04:19:24.435: INFO: Trying to get logs from node 10.0.1.106 pod pod-secrets-ddb7c01e-dd8d-4bb1-a201-850eaf78dcfe container secret-volume-test: +STEP: delete the pod 01/14/23 04:19:24.441 +Jan 14 04:19:24.457: INFO: Waiting for pod pod-secrets-ddb7c01e-dd8d-4bb1-a201-850eaf78dcfe to disappear +Jan 14 04:19:24.460: INFO: Pod pod-secrets-ddb7c01e-dd8d-4bb1-a201-850eaf78dcfe no longer exists +[AfterEach] [sig-storage] Secrets + test/e2e/framework/node/init/init.go:32 +Jan 14 04:19:24.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Secrets + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Secrets + tear down framework | framework.go:193 +STEP: Destroying namespace "secrets-2165" for this suite. 01/14/23 04:19:24.464 +STEP: Destroying namespace "secret-namespace-7325" for this suite. 01/14/23 04:19:24.469 +------------------------------ +• [4.103 seconds] +[sig-storage] Secrets +test/e2e/common/storage/framework.go:23 + should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:99 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Secrets + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:19:20.373 + Jan 14 04:19:20.373: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename secrets 01/14/23 04:19:20.374 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:19:20.391 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:19:20.393 + [BeforeEach] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:31 + [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:99 + STEP: Creating secret with name secret-test-0ed1507b-e201-45ce-a3fe-8f2120348435 01/14/23 04:19:20.413 + STEP: Creating a pod to test consume secrets 01/14/23 04:19:20.416 + Jan 14 04:19:20.425: INFO: Waiting up to 5m0s for pod "pod-secrets-ddb7c01e-dd8d-4bb1-a201-850eaf78dcfe" in namespace "secrets-2165" to be "Succeeded or Failed" + Jan 14 04:19:20.427: INFO: Pod "pod-secrets-ddb7c01e-dd8d-4bb1-a201-850eaf78dcfe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.731329ms + Jan 14 04:19:22.433: INFO: Pod "pod-secrets-ddb7c01e-dd8d-4bb1-a201-850eaf78dcfe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008096474s + Jan 14 04:19:24.432: INFO: Pod "pod-secrets-ddb7c01e-dd8d-4bb1-a201-850eaf78dcfe": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.007532336s + STEP: Saw pod success 01/14/23 04:19:24.432 + Jan 14 04:19:24.432: INFO: Pod "pod-secrets-ddb7c01e-dd8d-4bb1-a201-850eaf78dcfe" satisfied condition "Succeeded or Failed" + Jan 14 04:19:24.435: INFO: Trying to get logs from node 10.0.1.106 pod pod-secrets-ddb7c01e-dd8d-4bb1-a201-850eaf78dcfe container secret-volume-test: + STEP: delete the pod 01/14/23 04:19:24.441 + Jan 14 04:19:24.457: INFO: Waiting for pod pod-secrets-ddb7c01e-dd8d-4bb1-a201-850eaf78dcfe to disappear + Jan 14 04:19:24.460: INFO: Pod pod-secrets-ddb7c01e-dd8d-4bb1-a201-850eaf78dcfe no longer exists + [AfterEach] [sig-storage] Secrets + test/e2e/framework/node/init/init.go:32 + Jan 14 04:19:24.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Secrets + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Secrets + tear down framework | framework.go:193 + STEP: Destroying namespace "secrets-2165" for this suite. 01/14/23 04:19:24.464 + STEP: Destroying namespace "secret-namespace-7325" for this suite. 01/14/23 04:19:24.469 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] EndpointSlice + should support creating EndpointSlice API operations [Conformance] + test/e2e/network/endpointslice.go:353 +[BeforeEach] [sig-network] EndpointSlice + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:19:24.477 +Jan 14 04:19:24.477: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename endpointslice 01/14/23 04:19:24.478 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:19:24.494 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:19:24.496 +[BeforeEach] [sig-network] EndpointSlice + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] EndpointSlice + test/e2e/network/endpointslice.go:52 +[It] should support creating EndpointSlice API operations [Conformance] + test/e2e/network/endpointslice.go:353 +STEP: getting /apis 01/14/23 04:19:24.498 +STEP: getting /apis/discovery.k8s.io 01/14/23 04:19:24.5 +STEP: getting /apis/discovery.k8s.iov1 01/14/23 04:19:24.501 +STEP: creating 01/14/23 04:19:24.502 +STEP: getting 01/14/23 04:19:24.513 +STEP: listing 01/14/23 04:19:24.516 +STEP: watching 01/14/23 04:19:24.518 +Jan 14 04:19:24.518: INFO: starting watch +STEP: cluster-wide listing 01/14/23 04:19:24.519 +STEP: cluster-wide watching 01/14/23 04:19:24.522 +Jan 14 04:19:24.522: INFO: starting watch +STEP: patching 01/14/23 04:19:24.522 +STEP: updating 01/14/23 04:19:24.53 +Jan 14 04:19:24.536: INFO: waiting for watch events with expected annotations +Jan 14 04:19:24.536: INFO: saw patched and updated annotations +STEP: deleting 01/14/23 04:19:24.536 +STEP: deleting a collection 01/14/23 04:19:24.544 +[AfterEach] [sig-network] EndpointSlice + test/e2e/framework/node/init/init.go:32 +Jan 14 04:19:24.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] EndpointSlice + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] EndpointSlice + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] EndpointSlice + tear down framework | framework.go:193 +STEP: Destroying namespace "endpointslice-1204" for this suite. 
01/14/23 04:19:24.561 +------------------------------ +• [0.089 seconds] +[sig-network] EndpointSlice +test/e2e/network/common/framework.go:23 + should support creating EndpointSlice API operations [Conformance] + test/e2e/network/endpointslice.go:353 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] EndpointSlice + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:19:24.477 + Jan 14 04:19:24.477: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename endpointslice 01/14/23 04:19:24.478 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:19:24.494 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:19:24.496 + [BeforeEach] [sig-network] EndpointSlice + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] EndpointSlice + test/e2e/network/endpointslice.go:52 + [It] should support creating EndpointSlice API operations [Conformance] + test/e2e/network/endpointslice.go:353 + STEP: getting /apis 01/14/23 04:19:24.498 + STEP: getting /apis/discovery.k8s.io 01/14/23 04:19:24.5 + STEP: getting /apis/discovery.k8s.iov1 01/14/23 04:19:24.501 + STEP: creating 01/14/23 04:19:24.502 + STEP: getting 01/14/23 04:19:24.513 + STEP: listing 01/14/23 04:19:24.516 + STEP: watching 01/14/23 04:19:24.518 + Jan 14 04:19:24.518: INFO: starting watch + STEP: cluster-wide listing 01/14/23 04:19:24.519 + STEP: cluster-wide watching 01/14/23 04:19:24.522 + Jan 14 04:19:24.522: INFO: starting watch + STEP: patching 01/14/23 04:19:24.522 + STEP: updating 01/14/23 04:19:24.53 + Jan 14 04:19:24.536: INFO: waiting for watch events with expected annotations + Jan 14 04:19:24.536: INFO: saw patched and updated annotations + STEP: deleting 01/14/23 04:19:24.536 + STEP: deleting a collection 01/14/23 04:19:24.544 + [AfterEach] [sig-network] EndpointSlice + test/e2e/framework/node/init/init.go:32 + Jan 14 04:19:24.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] EndpointSlice + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] EndpointSlice + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] EndpointSlice + tear down framework | framework.go:193 + STEP: Destroying namespace "endpointslice-1204" for this suite. 
01/14/23 04:19:24.561 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should fail substituting values in a volume subpath with backticks [Slow] [Conformance] + test/e2e/common/node/expansion.go:152 +[BeforeEach] [sig-node] Variable Expansion + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:19:24.566 +Jan 14 04:19:24.567: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename var-expansion 01/14/23 04:19:24.567 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:19:24.592 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:19:24.594 +[BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:31 +[It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] + test/e2e/common/node/expansion.go:152 +Jan 14 04:19:24.687: INFO: Waiting up to 2m0s for pod "var-expansion-8d2b8798-b4ae-4e0d-8cc5-de6bd3bc942b" in namespace "var-expansion-8284" to be "container 0 failed with reason CreateContainerConfigError" +Jan 14 04:19:24.690: INFO: Pod "var-expansion-8d2b8798-b4ae-4e0d-8cc5-de6bd3bc942b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.521457ms +Jan 14 04:19:26.694: INFO: Pod "var-expansion-8d2b8798-b4ae-4e0d-8cc5-de6bd3bc942b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007643038s +Jan 14 04:19:26.694: INFO: Pod "var-expansion-8d2b8798-b4ae-4e0d-8cc5-de6bd3bc942b" satisfied condition "container 0 failed with reason CreateContainerConfigError" +Jan 14 04:19:26.694: INFO: Deleting pod "var-expansion-8d2b8798-b4ae-4e0d-8cc5-de6bd3bc942b" in namespace "var-expansion-8284" +Jan 14 04:19:26.702: INFO: Wait up to 5m0s for pod "var-expansion-8d2b8798-b4ae-4e0d-8cc5-de6bd3bc942b" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + test/e2e/framework/node/init/init.go:32 +Jan 14 04:19:30.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Variable Expansion + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Variable Expansion + tear down framework | framework.go:193 +STEP: Destroying namespace "var-expansion-8284" for this suite. 
01/14/23 04:19:30.715 +------------------------------ +• [SLOW TEST] [6.154 seconds] +[sig-node] Variable Expansion +test/e2e/common/node/framework.go:23 + should fail substituting values in a volume subpath with backticks [Slow] [Conformance] + test/e2e/common/node/expansion.go:152 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Variable Expansion + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:19:24.566 + Jan 14 04:19:24.567: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename var-expansion 01/14/23 04:19:24.567 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:19:24.592 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:19:24.594 + [BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:31 + [It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] + test/e2e/common/node/expansion.go:152 + Jan 14 04:19:24.687: INFO: Waiting up to 2m0s for pod "var-expansion-8d2b8798-b4ae-4e0d-8cc5-de6bd3bc942b" in namespace "var-expansion-8284" to be "container 0 failed with reason CreateContainerConfigError" + Jan 14 04:19:24.690: INFO: Pod "var-expansion-8d2b8798-b4ae-4e0d-8cc5-de6bd3bc942b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.521457ms + Jan 14 04:19:26.694: INFO: Pod "var-expansion-8d2b8798-b4ae-4e0d-8cc5-de6bd3bc942b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007643038s + Jan 14 04:19:26.694: INFO: Pod "var-expansion-8d2b8798-b4ae-4e0d-8cc5-de6bd3bc942b" satisfied condition "container 0 failed with reason CreateContainerConfigError" + Jan 14 04:19:26.694: INFO: Deleting pod "var-expansion-8d2b8798-b4ae-4e0d-8cc5-de6bd3bc942b" in namespace "var-expansion-8284" + Jan 14 04:19:26.702: INFO: Wait up to 5m0s for pod "var-expansion-8d2b8798-b4ae-4e0d-8cc5-de6bd3bc942b" to be fully deleted + [AfterEach] [sig-node] Variable Expansion + test/e2e/framework/node/init/init.go:32 + Jan 14 04:19:30.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Variable Expansion + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Variable Expansion + tear down framework | framework.go:193 + STEP: Destroying namespace "var-expansion-8284" for this suite. 
01/14/23 04:19:30.715 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should have session affinity work for NodePort service [LinuxOnly] [Conformance] + test/e2e/network/service.go:2228 +[BeforeEach] [sig-network] Services + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:19:30.722 +Jan 14 04:19:30.722: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename services 01/14/23 04:19:30.723 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:19:30.737 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:19:30.739 +[BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] + test/e2e/network/service.go:2228 +STEP: creating service in namespace services-9436 01/14/23 04:19:30.741 +STEP: creating service affinity-nodeport in namespace services-9436 01/14/23 04:19:30.741 +STEP: creating replication controller affinity-nodeport in namespace services-9436 01/14/23 04:19:30.753 +I0114 04:19:30.760144 25 runners.go:193] Created replication controller with name: affinity-nodeport, namespace: services-9436, replica count: 3 +I0114 04:19:33.811559 25 runners.go:193] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Jan 14 04:19:33.821: INFO: Creating new exec pod +Jan 14 04:19:33.830: INFO: Waiting up to 5m0s for pod "execpod-affinitywngp4" in namespace "services-9436" to be "running" +Jan 14 04:19:33.837: INFO: Pod "execpod-affinitywngp4": Phase="Pending", Reason="", readiness=false. Elapsed: 7.579158ms +Jan 14 04:19:35.842: INFO: Pod "execpod-affinitywngp4": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.011703847s +Jan 14 04:19:35.842: INFO: Pod "execpod-affinitywngp4" satisfied condition "running" +Jan 14 04:19:36.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-9436 exec execpod-affinitywngp4 -- /bin/sh -x -c nc -v -z -w 2 affinity-nodeport 80' +Jan 14 04:19:36.958: INFO: stderr: "+ nc -v -z -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n" +Jan 14 04:19:36.958: INFO: stdout: "" +Jan 14 04:19:36.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-9436 exec execpod-affinitywngp4 -- /bin/sh -x -c nc -v -z -w 2 10.55.254.139 80' +Jan 14 04:19:37.067: INFO: stderr: "+ nc -v -z -w 2 10.55.254.139 80\nConnection to 10.55.254.139 80 port [tcp/http] succeeded!\n" +Jan 14 04:19:37.067: INFO: stdout: "" +Jan 14 04:19:37.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-9436 exec execpod-affinitywngp4 -- /bin/sh -x -c nc -v -z -w 2 10.0.1.99 30336' +Jan 14 04:19:37.179: INFO: stderr: "+ nc -v -z -w 2 10.0.1.99 30336\nConnection to 10.0.1.99 30336 port [tcp/*] succeeded!\n" +Jan 14 04:19:37.179: INFO: stdout: "" +Jan 14 04:19:37.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-9436 exec execpod-affinitywngp4 -- /bin/sh -x -c nc -v -z -w 2 10.0.1.212 30336' +Jan 14 04:19:37.288: INFO: stderr: "+ nc -v -z -w 2 10.0.1.212 30336\nConnection to 10.0.1.212 30336 port [tcp/*] succeeded!\n" +Jan 14 04:19:37.288: INFO: stdout: "" +Jan 14 04:19:37.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-9436 exec execpod-affinitywngp4 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.0.1.106:30336/ ; done' +Jan 14 04:19:37.458: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30336/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30336/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30336/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30336/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30336/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30336/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30336/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30336/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30336/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30336/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30336/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30336/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30336/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30336/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30336/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30336/\n" +Jan 14 04:19:37.458: INFO: stdout: "\naffinity-nodeport-qsv4p\naffinity-nodeport-qsv4p\naffinity-nodeport-qsv4p\naffinity-nodeport-qsv4p\naffinity-nodeport-qsv4p\naffinity-nodeport-qsv4p\naffinity-nodeport-qsv4p\naffinity-nodeport-qsv4p\naffinity-nodeport-qsv4p\naffinity-nodeport-qsv4p\naffinity-nodeport-qsv4p\naffinity-nodeport-qsv4p\naffinity-nodeport-qsv4p\naffinity-nodeport-qsv4p\naffinity-nodeport-qsv4p\naffinity-nodeport-qsv4p" +Jan 14 04:19:37.458: INFO: Received response from host: affinity-nodeport-qsv4p +Jan 14 04:19:37.458: INFO: Received 
response from host: affinity-nodeport-qsv4p +Jan 14 04:19:37.458: INFO: Received response from host: affinity-nodeport-qsv4p +Jan 14 04:19:37.458: INFO: Received response from host: affinity-nodeport-qsv4p +Jan 14 04:19:37.458: INFO: Received response from host: affinity-nodeport-qsv4p +Jan 14 04:19:37.458: INFO: Received response from host: affinity-nodeport-qsv4p +Jan 14 04:19:37.458: INFO: Received response from host: affinity-nodeport-qsv4p +Jan 14 04:19:37.458: INFO: Received response from host: affinity-nodeport-qsv4p +Jan 14 04:19:37.458: INFO: Received response from host: affinity-nodeport-qsv4p +Jan 14 04:19:37.458: INFO: Received response from host: affinity-nodeport-qsv4p +Jan 14 04:19:37.458: INFO: Received response from host: affinity-nodeport-qsv4p +Jan 14 04:19:37.458: INFO: Received response from host: affinity-nodeport-qsv4p +Jan 14 04:19:37.458: INFO: Received response from host: affinity-nodeport-qsv4p +Jan 14 04:19:37.458: INFO: Received response from host: affinity-nodeport-qsv4p +Jan 14 04:19:37.459: INFO: Received response from host: affinity-nodeport-qsv4p +Jan 14 04:19:37.459: INFO: Received response from host: affinity-nodeport-qsv4p +Jan 14 04:19:37.459: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-nodeport in namespace services-9436, will wait for the garbage collector to delete the pods 01/14/23 04:19:37.473 +Jan 14 04:19:37.533: INFO: Deleting ReplicationController affinity-nodeport took: 6.501774ms +Jan 14 04:19:37.634: INFO: Terminating ReplicationController affinity-nodeport pods took: 100.622932ms +[AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 +Jan 14 04:19:39.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 +STEP: Destroying namespace "services-9436" for this suite. 
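[Aside: the affinity assertion above reduces to hitting the NodePort repeatedly from a single client and confirming every reply names the same backend pod. A condensed re-run of the log's own curl loop — namespace, exec pod, node IP, and port are the values from this run; substitute your own:]

kubectl --namespace=services-9436 exec execpod-affinitywngp4 -- /bin/sh -c \
  'for i in $(seq 0 15); do curl -q -s --connect-timeout 2 http://10.0.1.106:30336/; echo; done' \
  | sort | uniq -c
# With sessionAffinity: ClientIP on the Service, all 16 replies should name
# one pod, e.g. "16 affinity-nodeport-qsv4p" as captured above.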
01/14/23 04:19:39.662 +------------------------------ +• [SLOW TEST] [8.945 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should have session affinity work for NodePort service [LinuxOnly] [Conformance] + test/e2e/network/service.go:2228 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:19:30.722 + Jan 14 04:19:30.722: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename services 01/14/23 04:19:30.723 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:19:30.737 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:19:30.739 + [BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] + test/e2e/network/service.go:2228 + STEP: creating service in namespace services-9436 01/14/23 04:19:30.741 + STEP: creating service affinity-nodeport in namespace services-9436 01/14/23 04:19:30.741 + STEP: creating replication controller affinity-nodeport in namespace services-9436 01/14/23 04:19:30.753 + I0114 04:19:30.760144 25 runners.go:193] Created replication controller with name: affinity-nodeport, namespace: services-9436, replica count: 3 + I0114 04:19:33.811559 25 runners.go:193] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + Jan 14 04:19:33.821: INFO: Creating new exec pod + Jan 14 04:19:33.830: INFO: Waiting up to 5m0s for pod "execpod-affinitywngp4" in namespace "services-9436" to be "running" + Jan 14 04:19:33.837: INFO: Pod "execpod-affinitywngp4": Phase="Pending", Reason="", readiness=false. Elapsed: 7.579158ms + Jan 14 04:19:35.842: INFO: Pod "execpod-affinitywngp4": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.011703847s + Jan 14 04:19:35.842: INFO: Pod "execpod-affinitywngp4" satisfied condition "running" + Jan 14 04:19:36.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-9436 exec execpod-affinitywngp4 -- /bin/sh -x -c nc -v -z -w 2 affinity-nodeport 80' + Jan 14 04:19:36.958: INFO: stderr: "+ nc -v -z -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n" + Jan 14 04:19:36.958: INFO: stdout: "" + Jan 14 04:19:36.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-9436 exec execpod-affinitywngp4 -- /bin/sh -x -c nc -v -z -w 2 10.55.254.139 80' + Jan 14 04:19:37.067: INFO: stderr: "+ nc -v -z -w 2 10.55.254.139 80\nConnection to 10.55.254.139 80 port [tcp/http] succeeded!\n" + Jan 14 04:19:37.067: INFO: stdout: "" + Jan 14 04:19:37.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-9436 exec execpod-affinitywngp4 -- /bin/sh -x -c nc -v -z -w 2 10.0.1.99 30336' + Jan 14 04:19:37.179: INFO: stderr: "+ nc -v -z -w 2 10.0.1.99 30336\nConnection to 10.0.1.99 30336 port [tcp/*] succeeded!\n" + Jan 14 04:19:37.179: INFO: stdout: "" + Jan 14 04:19:37.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-9436 exec execpod-affinitywngp4 -- /bin/sh -x -c nc -v -z -w 2 10.0.1.212 30336' + Jan 14 04:19:37.288: INFO: stderr: "+ nc -v -z -w 2 10.0.1.212 30336\nConnection to 10.0.1.212 30336 port [tcp/*] succeeded!\n" + Jan 14 04:19:37.288: INFO: stdout: "" + Jan 14 04:19:37.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-9436 exec execpod-affinitywngp4 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.0.1.106:30336/ ; done' + Jan 14 04:19:37.458: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30336/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30336/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30336/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30336/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30336/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30336/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30336/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30336/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30336/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30336/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30336/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30336/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30336/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30336/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30336/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.0.1.106:30336/\n" + Jan 14 04:19:37.458: INFO: stdout: "\naffinity-nodeport-qsv4p\naffinity-nodeport-qsv4p\naffinity-nodeport-qsv4p\naffinity-nodeport-qsv4p\naffinity-nodeport-qsv4p\naffinity-nodeport-qsv4p\naffinity-nodeport-qsv4p\naffinity-nodeport-qsv4p\naffinity-nodeport-qsv4p\naffinity-nodeport-qsv4p\naffinity-nodeport-qsv4p\naffinity-nodeport-qsv4p\naffinity-nodeport-qsv4p\naffinity-nodeport-qsv4p\naffinity-nodeport-qsv4p\naffinity-nodeport-qsv4p" + Jan 14 04:19:37.458: INFO: Received response from host: affinity-nodeport-qsv4p + Jan 14 04:19:37.458: 
INFO: Received response from host: affinity-nodeport-qsv4p + Jan 14 04:19:37.458: INFO: Received response from host: affinity-nodeport-qsv4p + Jan 14 04:19:37.458: INFO: Received response from host: affinity-nodeport-qsv4p + Jan 14 04:19:37.458: INFO: Received response from host: affinity-nodeport-qsv4p + Jan 14 04:19:37.458: INFO: Received response from host: affinity-nodeport-qsv4p + Jan 14 04:19:37.458: INFO: Received response from host: affinity-nodeport-qsv4p + Jan 14 04:19:37.458: INFO: Received response from host: affinity-nodeport-qsv4p + Jan 14 04:19:37.458: INFO: Received response from host: affinity-nodeport-qsv4p + Jan 14 04:19:37.458: INFO: Received response from host: affinity-nodeport-qsv4p + Jan 14 04:19:37.458: INFO: Received response from host: affinity-nodeport-qsv4p + Jan 14 04:19:37.458: INFO: Received response from host: affinity-nodeport-qsv4p + Jan 14 04:19:37.458: INFO: Received response from host: affinity-nodeport-qsv4p + Jan 14 04:19:37.458: INFO: Received response from host: affinity-nodeport-qsv4p + Jan 14 04:19:37.459: INFO: Received response from host: affinity-nodeport-qsv4p + Jan 14 04:19:37.459: INFO: Received response from host: affinity-nodeport-qsv4p + Jan 14 04:19:37.459: INFO: Cleaning up the exec pod + STEP: deleting ReplicationController affinity-nodeport in namespace services-9436, will wait for the garbage collector to delete the pods 01/14/23 04:19:37.473 + Jan 14 04:19:37.533: INFO: Deleting ReplicationController affinity-nodeport took: 6.501774ms + Jan 14 04:19:37.634: INFO: Terminating ReplicationController affinity-nodeport pods took: 100.622932ms + [AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 + Jan 14 04:19:39.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 + STEP: Destroying namespace "services-9436" for this suite. 01/14/23 04:19:39.662 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute poststart http hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:167 +[BeforeEach] [sig-node] Container Lifecycle Hook + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:19:39.667 +Jan 14 04:19:39.667: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename container-lifecycle-hook 01/14/23 04:19:39.668 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:19:39.686 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:19:39.688 +[BeforeEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:77 +STEP: create the container to handle the HTTPGet hook request. 01/14/23 04:19:39.695 +Jan 14 04:19:39.705: INFO: Waiting up to 5m0s for pod "pod-handle-http-request" in namespace "container-lifecycle-hook-4784" to be "running and ready" +Jan 14 04:19:39.708: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.135879ms +Jan 14 04:19:39.708: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Jan 14 04:19:41.712: INFO: Pod "pod-handle-http-request": Phase="Running", Reason="", readiness=true. Elapsed: 2.007399441s +Jan 14 04:19:41.712: INFO: The phase of Pod pod-handle-http-request is Running (Ready = true) +Jan 14 04:19:41.712: INFO: Pod "pod-handle-http-request" satisfied condition "running and ready" +[It] should execute poststart http hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:167 +STEP: create the pod with lifecycle hook 01/14/23 04:19:41.715 +Jan 14 04:19:41.722: INFO: Waiting up to 5m0s for pod "pod-with-poststart-http-hook" in namespace "container-lifecycle-hook-4784" to be "running and ready" +Jan 14 04:19:41.725: INFO: Pod "pod-with-poststart-http-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 3.795137ms +Jan 14 04:19:41.725: INFO: The phase of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) +Jan 14 04:19:43.731: INFO: Pod "pod-with-poststart-http-hook": Phase="Running", Reason="", readiness=true. Elapsed: 2.008953685s +Jan 14 04:19:43.731: INFO: The phase of Pod pod-with-poststart-http-hook is Running (Ready = true) +Jan 14 04:19:43.731: INFO: Pod "pod-with-poststart-http-hook" satisfied condition "running and ready" +STEP: check poststart hook 01/14/23 04:19:43.734 +STEP: delete the pod with lifecycle hook 01/14/23 04:19:43.748 +Jan 14 04:19:43.764: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Jan 14 04:19:43.768: INFO: Pod pod-with-poststart-http-hook still exists +Jan 14 04:19:45.768: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Jan 14 04:19:45.773: INFO: Pod pod-with-poststart-http-hook no longer exists +[AfterEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/node/init/init.go:32 +Jan 14 04:19:45.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + tear down framework | framework.go:193 +STEP: Destroying namespace "container-lifecycle-hook-4784" for this suite. 
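[Aside: the poststart sequence above uses two pods — an HTTP handler and a pod whose container declares a postStart httpGet against it; the kubelet must complete that GET before the hooked container counts as started. A minimal re-creation, assuming kubectl access; pod names are illustrative and the agnhost image tag is the one this release's e2e images use (adjust if needed):]

kubectl run poststart-handler --image=registry.k8s.io/e2e-test-images/agnhost:2.43 -- netexec --http-port=8080
kubectl wait --for=condition=Ready pod/poststart-handler
HANDLER_IP=$(kubectl get pod poststart-handler -o jsonpath='{.status.podIP}')
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: poststart-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 300"]
    lifecycle:
      postStart:
        httpGet:
          host: ${HANDLER_IP}
          port: 8080
          path: /echo?msg=poststart   # netexec echoes the msg back
EOF
kubectl get pod poststart-demo   # reaches Running only after the hook GET succeeds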
01/14/23 04:19:45.777 +------------------------------ +• [SLOW TEST] [6.116 seconds] +[sig-node] Container Lifecycle Hook +test/e2e/common/node/framework.go:23 + when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:46 + should execute poststart http hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:167 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Container Lifecycle Hook + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:19:39.667 + Jan 14 04:19:39.667: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename container-lifecycle-hook 01/14/23 04:19:39.668 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:19:39.686 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:19:39.688 + [BeforeEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:77 + STEP: create the container to handle the HTTPGet hook request. 01/14/23 04:19:39.695 + Jan 14 04:19:39.705: INFO: Waiting up to 5m0s for pod "pod-handle-http-request" in namespace "container-lifecycle-hook-4784" to be "running and ready" + Jan 14 04:19:39.708: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 3.135879ms + Jan 14 04:19:39.708: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) + Jan 14 04:19:41.712: INFO: Pod "pod-handle-http-request": Phase="Running", Reason="", readiness=true. Elapsed: 2.007399441s + Jan 14 04:19:41.712: INFO: The phase of Pod pod-handle-http-request is Running (Ready = true) + Jan 14 04:19:41.712: INFO: Pod "pod-handle-http-request" satisfied condition "running and ready" + [It] should execute poststart http hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:167 + STEP: create the pod with lifecycle hook 01/14/23 04:19:41.715 + Jan 14 04:19:41.722: INFO: Waiting up to 5m0s for pod "pod-with-poststart-http-hook" in namespace "container-lifecycle-hook-4784" to be "running and ready" + Jan 14 04:19:41.725: INFO: Pod "pod-with-poststart-http-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 3.795137ms + Jan 14 04:19:41.725: INFO: The phase of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) + Jan 14 04:19:43.731: INFO: Pod "pod-with-poststart-http-hook": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008953685s + Jan 14 04:19:43.731: INFO: The phase of Pod pod-with-poststart-http-hook is Running (Ready = true) + Jan 14 04:19:43.731: INFO: Pod "pod-with-poststart-http-hook" satisfied condition "running and ready" + STEP: check poststart hook 01/14/23 04:19:43.734 + STEP: delete the pod with lifecycle hook 01/14/23 04:19:43.748 + Jan 14 04:19:43.764: INFO: Waiting for pod pod-with-poststart-http-hook to disappear + Jan 14 04:19:43.768: INFO: Pod pod-with-poststart-http-hook still exists + Jan 14 04:19:45.768: INFO: Waiting for pod pod-with-poststart-http-hook to disappear + Jan 14 04:19:45.773: INFO: Pod pod-with-poststart-http-hook no longer exists + [AfterEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/node/init/init.go:32 + Jan 14 04:19:45.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + tear down framework | framework.go:193 + STEP: Destroying namespace "container-lifecycle-hook-4784" for this suite. 01/14/23 04:19:45.777 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] + test/e2e/common/node/expansion.go:186 +[BeforeEach] [sig-node] Variable Expansion + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:19:45.784 +Jan 14 04:19:45.784: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename var-expansion 01/14/23 04:19:45.785 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:19:45.8 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:19:45.802 +[BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:31 +[It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] + test/e2e/common/node/expansion.go:186 +Jan 14 04:19:45.814: INFO: Waiting up to 2m0s for pod "var-expansion-8e1033e4-8bad-4eac-8307-c17a18f1a26d" in namespace "var-expansion-9181" to be "container 0 failed with reason CreateContainerConfigError" +Jan 14 04:19:45.817: INFO: Pod "var-expansion-8e1033e4-8bad-4eac-8307-c17a18f1a26d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.770102ms +Jan 14 04:19:47.822: INFO: Pod "var-expansion-8e1033e4-8bad-4eac-8307-c17a18f1a26d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008146662s +Jan 14 04:19:47.822: INFO: Pod "var-expansion-8e1033e4-8bad-4eac-8307-c17a18f1a26d" satisfied condition "container 0 failed with reason CreateContainerConfigError" +Jan 14 04:19:47.822: INFO: Deleting pod "var-expansion-8e1033e4-8bad-4eac-8307-c17a18f1a26d" in namespace "var-expansion-9181" +Jan 14 04:19:47.831: INFO: Wait up to 5m0s for pod "var-expansion-8e1033e4-8bad-4eac-8307-c17a18f1a26d" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + test/e2e/framework/node/init/init.go:32 +Jan 14 04:19:51.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Variable Expansion + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Variable Expansion + tear down framework | framework.go:193 +STEP: Destroying namespace "var-expansion-9181" for this suite. 01/14/23 04:19:51.844 +------------------------------ +• [SLOW TEST] [6.065 seconds] +[sig-node] Variable Expansion +test/e2e/common/node/framework.go:23 + should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] + test/e2e/common/node/expansion.go:186 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Variable Expansion + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:19:45.784 + Jan 14 04:19:45.784: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename var-expansion 01/14/23 04:19:45.785 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:19:45.8 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:19:45.802 + [BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:31 + [It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] + test/e2e/common/node/expansion.go:186 + Jan 14 04:19:45.814: INFO: Waiting up to 2m0s for pod "var-expansion-8e1033e4-8bad-4eac-8307-c17a18f1a26d" in namespace "var-expansion-9181" to be "container 0 failed with reason CreateContainerConfigError" + Jan 14 04:19:45.817: INFO: Pod "var-expansion-8e1033e4-8bad-4eac-8307-c17a18f1a26d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.770102ms + Jan 14 04:19:47.822: INFO: Pod "var-expansion-8e1033e4-8bad-4eac-8307-c17a18f1a26d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008146662s + Jan 14 04:19:47.822: INFO: Pod "var-expansion-8e1033e4-8bad-4eac-8307-c17a18f1a26d" satisfied condition "container 0 failed with reason CreateContainerConfigError" + Jan 14 04:19:47.822: INFO: Deleting pod "var-expansion-8e1033e4-8bad-4eac-8307-c17a18f1a26d" in namespace "var-expansion-9181" + Jan 14 04:19:47.831: INFO: Wait up to 5m0s for pod "var-expansion-8e1033e4-8bad-4eac-8307-c17a18f1a26d" to be fully deleted + [AfterEach] [sig-node] Variable Expansion + test/e2e/framework/node/init/init.go:32 + Jan 14 04:19:51.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Variable Expansion + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Variable Expansion + tear down framework | framework.go:193 + STEP: Destroying namespace "var-expansion-9181" for this suite. 
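[Aside: same failure mode as the backticks case earlier in this log — a subPath that expands to an absolute path is rejected at container-create time rather than at admission, so the pod sits in Pending with a waiting reason. The one-liner the wait condition effectively checks, using this run's names:]

kubectl --namespace=var-expansion-9181 get pod var-expansion-8e1033e4-8bad-4eac-8307-c17a18f1a26d \
  -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}{"\n"}'
# prints CreateContainerConfigError while the pod is Pending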
01/14/23 04:19:51.844 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:117 +[BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:19:51.849 +Jan 14 04:19:51.849: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename emptydir 01/14/23 04:19:51.85 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:19:51.863 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:19:51.865 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 +[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:117 +STEP: Creating a pod to test emptydir 0777 on tmpfs 01/14/23 04:19:51.867 +Jan 14 04:19:51.876: INFO: Waiting up to 5m0s for pod "pod-1a58393c-aafd-4bd2-b79f-27eaa7ab96a9" in namespace "emptydir-1312" to be "Succeeded or Failed" +Jan 14 04:19:51.879: INFO: Pod "pod-1a58393c-aafd-4bd2-b79f-27eaa7ab96a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.840479ms +Jan 14 04:19:53.883: INFO: Pod "pod-1a58393c-aafd-4bd2-b79f-27eaa7ab96a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006900803s +Jan 14 04:19:55.884: INFO: Pod "pod-1a58393c-aafd-4bd2-b79f-27eaa7ab96a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007801871s +STEP: Saw pod success 01/14/23 04:19:55.884 +Jan 14 04:19:55.884: INFO: Pod "pod-1a58393c-aafd-4bd2-b79f-27eaa7ab96a9" satisfied condition "Succeeded or Failed" +Jan 14 04:19:55.887: INFO: Trying to get logs from node 10.0.1.106 pod pod-1a58393c-aafd-4bd2-b79f-27eaa7ab96a9 container test-container: +STEP: delete the pod 01/14/23 04:19:55.895 +Jan 14 04:19:55.906: INFO: Waiting for pod pod-1a58393c-aafd-4bd2-b79f-27eaa7ab96a9 to disappear +Jan 14 04:19:55.908: INFO: Pod pod-1a58393c-aafd-4bd2-b79f-27eaa7ab96a9 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 +Jan 14 04:19:55.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 +STEP: Destroying namespace "emptydir-1312" for this suite. 
01/14/23 04:19:55.913 +------------------------------ +• [4.069 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:117 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:19:51.849 + Jan 14 04:19:51.849: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename emptydir 01/14/23 04:19:51.85 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:19:51.863 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:19:51.865 + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 + [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:117 + STEP: Creating a pod to test emptydir 0777 on tmpfs 01/14/23 04:19:51.867 + Jan 14 04:19:51.876: INFO: Waiting up to 5m0s for pod "pod-1a58393c-aafd-4bd2-b79f-27eaa7ab96a9" in namespace "emptydir-1312" to be "Succeeded or Failed" + Jan 14 04:19:51.879: INFO: Pod "pod-1a58393c-aafd-4bd2-b79f-27eaa7ab96a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.840479ms + Jan 14 04:19:53.883: INFO: Pod "pod-1a58393c-aafd-4bd2-b79f-27eaa7ab96a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006900803s + Jan 14 04:19:55.884: INFO: Pod "pod-1a58393c-aafd-4bd2-b79f-27eaa7ab96a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007801871s + STEP: Saw pod success 01/14/23 04:19:55.884 + Jan 14 04:19:55.884: INFO: Pod "pod-1a58393c-aafd-4bd2-b79f-27eaa7ab96a9" satisfied condition "Succeeded or Failed" + Jan 14 04:19:55.887: INFO: Trying to get logs from node 10.0.1.106 pod pod-1a58393c-aafd-4bd2-b79f-27eaa7ab96a9 container test-container: + STEP: delete the pod 01/14/23 04:19:55.895 + Jan 14 04:19:55.906: INFO: Waiting for pod pod-1a58393c-aafd-4bd2-b79f-27eaa7ab96a9 to disappear + Jan 14 04:19:55.908: INFO: Pod pod-1a58393c-aafd-4bd2-b79f-27eaa7ab96a9 no longer exists + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 + Jan 14 04:19:55.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 + STEP: Destroying namespace "emptydir-1312" for this suite. 
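[Aside: decoding the test name "(root,0777,tmpfs)" — the volume is an emptyDir with medium Memory (tmpfs-backed), written as root, with 0777 permissions expected on the mount. A hand-rolled equivalent of the pod the test launches; manifest and names are illustrative:]

cat <<'YAML' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo   # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume && mount | grep /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # tmpfs rather than node disk
YAML
kubectl logs emptydir-tmpfs-demo   # once Succeeded: drwxrwxrwx and a tmpfs mount entry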
01/14/23 04:19:55.913 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-node] Pods + should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:618 +[BeforeEach] [sig-node] Pods + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:19:55.919 +Jan 14 04:19:55.919: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename pods 01/14/23 04:19:55.919 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:19:55.932 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:19:55.934 +[BeforeEach] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 +[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:618 +Jan 14 04:19:55.936: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: creating the pod 01/14/23 04:19:55.937 +STEP: submitting the pod to kubernetes 01/14/23 04:19:55.937 +Jan 14 04:19:55.945: INFO: Waiting up to 5m0s for pod "pod-logs-websocket-0656616b-e862-4977-92fa-6ac57b1bbcf9" in namespace "pods-7775" to be "running and ready" +Jan 14 04:19:55.948: INFO: Pod "pod-logs-websocket-0656616b-e862-4977-92fa-6ac57b1bbcf9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.927515ms +Jan 14 04:19:55.948: INFO: The phase of Pod pod-logs-websocket-0656616b-e862-4977-92fa-6ac57b1bbcf9 is Pending, waiting for it to be Running (with Ready = true) +Jan 14 04:19:57.954: INFO: Pod "pod-logs-websocket-0656616b-e862-4977-92fa-6ac57b1bbcf9": Phase="Running", Reason="", readiness=true. Elapsed: 2.008340066s +Jan 14 04:19:57.954: INFO: The phase of Pod pod-logs-websocket-0656616b-e862-4977-92fa-6ac57b1bbcf9 is Running (Ready = true) +Jan 14 04:19:57.954: INFO: Pod "pod-logs-websocket-0656616b-e862-4977-92fa-6ac57b1bbcf9" satisfied condition "running and ready" +[AfterEach] [sig-node] Pods + test/e2e/framework/node/init/init.go:32 +Jan 14 04:19:57.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Pods + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Pods + tear down framework | framework.go:193 +STEP: Destroying namespace "pods-7775" for this suite. 
01/14/23 04:19:57.976 +------------------------------ +• [2.062 seconds] +[sig-node] Pods +test/e2e/common/node/framework.go:23 + should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:618 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Pods + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:19:55.919 + Jan 14 04:19:55.919: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename pods 01/14/23 04:19:55.919 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:19:55.932 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:19:55.934 + [BeforeEach] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 + [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + test/e2e/common/node/pods.go:618 + Jan 14 04:19:55.936: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: creating the pod 01/14/23 04:19:55.937 + STEP: submitting the pod to kubernetes 01/14/23 04:19:55.937 + Jan 14 04:19:55.945: INFO: Waiting up to 5m0s for pod "pod-logs-websocket-0656616b-e862-4977-92fa-6ac57b1bbcf9" in namespace "pods-7775" to be "running and ready" + Jan 14 04:19:55.948: INFO: Pod "pod-logs-websocket-0656616b-e862-4977-92fa-6ac57b1bbcf9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.927515ms + Jan 14 04:19:55.948: INFO: The phase of Pod pod-logs-websocket-0656616b-e862-4977-92fa-6ac57b1bbcf9 is Pending, waiting for it to be Running (with Ready = true) + Jan 14 04:19:57.954: INFO: Pod "pod-logs-websocket-0656616b-e862-4977-92fa-6ac57b1bbcf9": Phase="Running", Reason="", readiness=true. Elapsed: 2.008340066s + Jan 14 04:19:57.954: INFO: The phase of Pod pod-logs-websocket-0656616b-e862-4977-92fa-6ac57b1bbcf9 is Running (Ready = true) + Jan 14 04:19:57.954: INFO: Pod "pod-logs-websocket-0656616b-e862-4977-92fa-6ac57b1bbcf9" satisfied condition "running and ready" + [AfterEach] [sig-node] Pods + test/e2e/framework/node/init/init.go:32 + Jan 14 04:19:57.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Pods + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Pods + tear down framework | framework.go:193 + STEP: Destroying namespace "pods-7775" for this suite. 
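[Aside: the websocket variant exercised above hits the same REST endpoint as ordinary log retrieval; only the transport differs — the test upgrades the connection to a websocket and compares the streamed bytes. The plain-HTTP form of the same call, using this run's names:]

kubectl get --raw \
  '/api/v1/namespaces/pods-7775/pods/pod-logs-websocket-0656616b-e862-4977-92fa-6ac57b1bbcf9/log'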
01/14/23 04:19:57.976 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + listing validating webhooks should work [Conformance] + test/e2e/apimachinery/webhook.go:582 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:19:57.983 +Jan 14 04:19:57.983: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename webhook 01/14/23 04:19:57.984 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:19:57.996 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:19:57.998 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 01/14/23 04:19:58.011 +STEP: Create role binding to let webhook read extension-apiserver-authentication 01/14/23 04:19:58.376 +STEP: Deploying the webhook pod 01/14/23 04:19:58.382 +STEP: Wait for the deployment to be ready 01/14/23 04:19:58.395 +Jan 14 04:19:58.406: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 01/14/23 04:20:00.416 +STEP: Verifying the service has paired with the endpoint 01/14/23 04:20:00.424 +Jan 14 04:20:01.425: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] listing validating webhooks should work [Conformance] + test/e2e/apimachinery/webhook.go:582 +STEP: Listing all of the created validation webhooks 01/14/23 04:20:01.486 +STEP: Creating a configMap that does not comply to the validation webhook rules 01/14/23 04:20:01.512 +STEP: Deleting the collection of validation webhooks 01/14/23 04:20:01.535 +STEP: Creating a configMap that does not comply to the validation webhook rules 01/14/23 04:20:01.579 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Jan 14 04:20:01.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "webhook-6689" for this suite. 01/14/23 04:20:01.639 +STEP: Destroying namespace "webhook-6689-markers" for this suite. 
01/14/23 04:20:01.647 +------------------------------ +• [3.670 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + listing validating webhooks should work [Conformance] + test/e2e/apimachinery/webhook.go:582 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:19:57.983 + Jan 14 04:19:57.983: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename webhook 01/14/23 04:19:57.984 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:19:57.996 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:19:57.998 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 01/14/23 04:19:58.011 + STEP: Create role binding to let webhook read extension-apiserver-authentication 01/14/23 04:19:58.376 + STEP: Deploying the webhook pod 01/14/23 04:19:58.382 + STEP: Wait for the deployment to be ready 01/14/23 04:19:58.395 + Jan 14 04:19:58.406: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 01/14/23 04:20:00.416 + STEP: Verifying the service has paired with the endpoint 01/14/23 04:20:00.424 + Jan 14 04:20:01.425: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] listing validating webhooks should work [Conformance] + test/e2e/apimachinery/webhook.go:582 + STEP: Listing all of the created validation webhooks 01/14/23 04:20:01.486 + STEP: Creating a configMap that does not comply to the validation webhook rules 01/14/23 04:20:01.512 + STEP: Deleting the collection of validation webhooks 01/14/23 04:20:01.535 + STEP: Creating a configMap that does not comply to the validation webhook rules 01/14/23 04:20:01.579 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Jan 14 04:20:01.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "webhook-6689" for this suite. 01/14/23 04:20:01.639 + STEP: Destroying namespace "webhook-6689-markers" for this suite. 
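[Aside: the listing and collection-delete steps above are ordinary operations on the cluster-scoped validatingwebhookconfigurations resource; once the collection is gone, the previously rejected ConfigMap is admitted, which is what the final STEP verifies. A label-scoped equivalent — the label selector here is illustrative, the suite generates its own:]

kubectl get validatingwebhookconfigurations -l purpose=e2e-list-test
kubectl delete validatingwebhookconfigurations -l purpose=e2e-list-test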
01/14/23 04:20:01.647 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:207 +[BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:20:01.654 +Jan 14 04:20:01.654: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename emptydir 01/14/23 04:20:01.655 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:20:01.683 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:20:01.686 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 +[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:207 +STEP: Creating a pod to test emptydir 0666 on node default medium 01/14/23 04:20:01.689 +Jan 14 04:20:01.717: INFO: Waiting up to 5m0s for pod "pod-4e9c4a9e-ac8e-4018-8cf7-91ba4dcc1c7e" in namespace "emptydir-778" to be "Succeeded or Failed" +Jan 14 04:20:01.720: INFO: Pod "pod-4e9c4a9e-ac8e-4018-8cf7-91ba4dcc1c7e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.15172ms +Jan 14 04:20:03.726: INFO: Pod "pod-4e9c4a9e-ac8e-4018-8cf7-91ba4dcc1c7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0084438s +Jan 14 04:20:05.725: INFO: Pod "pod-4e9c4a9e-ac8e-4018-8cf7-91ba4dcc1c7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007319632s +STEP: Saw pod success 01/14/23 04:20:05.725 +Jan 14 04:20:05.725: INFO: Pod "pod-4e9c4a9e-ac8e-4018-8cf7-91ba4dcc1c7e" satisfied condition "Succeeded or Failed" +Jan 14 04:20:05.729: INFO: Trying to get logs from node 10.0.1.99 pod pod-4e9c4a9e-ac8e-4018-8cf7-91ba4dcc1c7e container test-container: +STEP: delete the pod 01/14/23 04:20:05.742 +Jan 14 04:20:05.755: INFO: Waiting for pod pod-4e9c4a9e-ac8e-4018-8cf7-91ba4dcc1c7e to disappear +Jan 14 04:20:05.758: INFO: Pod pod-4e9c4a9e-ac8e-4018-8cf7-91ba4dcc1c7e no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 +Jan 14 04:20:05.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 +STEP: Destroying namespace "emptydir-778" for this suite. 
01/14/23 04:20:05.762 +------------------------------ +• [4.113 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:207 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:20:01.654 + Jan 14 04:20:01.654: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename emptydir 01/14/23 04:20:01.655 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:20:01.683 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:20:01.686 + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 + [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:207 + STEP: Creating a pod to test emptydir 0666 on node default medium 01/14/23 04:20:01.689 + Jan 14 04:20:01.717: INFO: Waiting up to 5m0s for pod "pod-4e9c4a9e-ac8e-4018-8cf7-91ba4dcc1c7e" in namespace "emptydir-778" to be "Succeeded or Failed" + Jan 14 04:20:01.720: INFO: Pod "pod-4e9c4a9e-ac8e-4018-8cf7-91ba4dcc1c7e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.15172ms + Jan 14 04:20:03.726: INFO: Pod "pod-4e9c4a9e-ac8e-4018-8cf7-91ba4dcc1c7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0084438s + Jan 14 04:20:05.725: INFO: Pod "pod-4e9c4a9e-ac8e-4018-8cf7-91ba4dcc1c7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007319632s + STEP: Saw pod success 01/14/23 04:20:05.725 + Jan 14 04:20:05.725: INFO: Pod "pod-4e9c4a9e-ac8e-4018-8cf7-91ba4dcc1c7e" satisfied condition "Succeeded or Failed" + Jan 14 04:20:05.729: INFO: Trying to get logs from node 10.0.1.99 pod pod-4e9c4a9e-ac8e-4018-8cf7-91ba4dcc1c7e container test-container: + STEP: delete the pod 01/14/23 04:20:05.742 + Jan 14 04:20:05.755: INFO: Waiting for pod pod-4e9c4a9e-ac8e-4018-8cf7-91ba4dcc1c7e to disappear + Jan 14 04:20:05.758: INFO: Pod pod-4e9c4a9e-ac8e-4018-8cf7-91ba4dcc1c7e no longer exists + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 + Jan 14 04:20:05.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 + STEP: Destroying namespace "emptydir-778" for this suite. 
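[Aside: companion to the tmpfs case above — "(non-root,0666,default)" drops the medium (node-disk-backed emptyDir), runs as a non-root UID, and checks a 0666 file mode. Relative to the earlier sketch only the securityContext and the emptyDir stanza change; names remain illustrative:]

cat <<'YAML' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-default-demo   # hypothetical
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001             # the "non-root" part of the test name
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                # no medium => node default (disk-backed)
YAML
kubectl logs emptydir-default-demo   # once Succeeded: -rw-rw-rw- ... /test-volume/f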
01/14/23 04:20:05.762 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:152 +[BeforeEach] [sig-node] Probing container + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:20:05.767 +Jan 14 04:20:05.768: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename container-probe 01/14/23 04:20:05.768 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:20:05.782 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:20:05.784 +[BeforeEach] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 +[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:152 +STEP: Creating pod busybox-6a76c331-2e17-43e3-9442-de60be83eed8 in namespace container-probe-9732 01/14/23 04:20:05.786 +Jan 14 04:20:05.795: INFO: Waiting up to 5m0s for pod "busybox-6a76c331-2e17-43e3-9442-de60be83eed8" in namespace "container-probe-9732" to be "not pending" +Jan 14 04:20:05.798: INFO: Pod "busybox-6a76c331-2e17-43e3-9442-de60be83eed8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.826393ms +Jan 14 04:20:07.802: INFO: Pod "busybox-6a76c331-2e17-43e3-9442-de60be83eed8": Phase="Running", Reason="", readiness=true. Elapsed: 2.007118786s +Jan 14 04:20:07.802: INFO: Pod "busybox-6a76c331-2e17-43e3-9442-de60be83eed8" satisfied condition "not pending" +Jan 14 04:20:07.802: INFO: Started pod busybox-6a76c331-2e17-43e3-9442-de60be83eed8 in namespace container-probe-9732 +STEP: checking the pod's current state and verifying that restartCount is present 01/14/23 04:20:07.802 +Jan 14 04:20:07.805: INFO: Initial restart count of pod busybox-6a76c331-2e17-43e3-9442-de60be83eed8 is 0 +STEP: deleting the pod 01/14/23 04:24:08.431 +[AfterEach] [sig-node] Probing container + test/e2e/framework/node/init/init.go:32 +Jan 14 04:24:08.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Probing container + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Probing container + tear down framework | framework.go:193 +STEP: Destroying namespace "container-probe-9732" for this suite. 
01/14/23 04:24:08.452 +------------------------------ +• [SLOW TEST] [242.689 seconds] +[sig-node] Probing container +test/e2e/common/node/framework.go:23 + should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:152 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Probing container + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:20:05.767 + Jan 14 04:20:05.768: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename container-probe 01/14/23 04:20:05.768 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:20:05.782 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:20:05.784 + [BeforeEach] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 + [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:152 + STEP: Creating pod busybox-6a76c331-2e17-43e3-9442-de60be83eed8 in namespace container-probe-9732 01/14/23 04:20:05.786 + Jan 14 04:20:05.795: INFO: Waiting up to 5m0s for pod "busybox-6a76c331-2e17-43e3-9442-de60be83eed8" in namespace "container-probe-9732" to be "not pending" + Jan 14 04:20:05.798: INFO: Pod "busybox-6a76c331-2e17-43e3-9442-de60be83eed8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.826393ms + Jan 14 04:20:07.802: INFO: Pod "busybox-6a76c331-2e17-43e3-9442-de60be83eed8": Phase="Running", Reason="", readiness=true. Elapsed: 2.007118786s + Jan 14 04:20:07.802: INFO: Pod "busybox-6a76c331-2e17-43e3-9442-de60be83eed8" satisfied condition "not pending" + Jan 14 04:20:07.802: INFO: Started pod busybox-6a76c331-2e17-43e3-9442-de60be83eed8 in namespace container-probe-9732 + STEP: checking the pod's current state and verifying that restartCount is present 01/14/23 04:20:07.802 + Jan 14 04:20:07.805: INFO: Initial restart count of pod busybox-6a76c331-2e17-43e3-9442-de60be83eed8 is 0 + STEP: deleting the pod 01/14/23 04:24:08.431 + [AfterEach] [sig-node] Probing container + test/e2e/framework/node/init/init.go:32 + Jan 14 04:24:08.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Probing container + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Probing container + tear down framework | framework.go:193 + STEP: Destroying namespace "container-probe-9732" for this suite. 
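[Aside: the 242-second runtime above is the point of this test — it parks a pod with an always-succeeding exec probe for roughly four minutes and asserts restartCount never moves off its initial 0. The shape of that check, condensed; pod name illustrative:]

cat <<'YAML' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-ok-demo   # hypothetical
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
YAML
sleep 240   # roughly the window the e2e test observes
kubectl get pod liveness-ok-demo \
  -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'   # expect 0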
01/14/23 04:24:08.452 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-api-machinery] Watchers + should be able to start watching from a specific resource version [Conformance] + test/e2e/apimachinery/watch.go:142 +[BeforeEach] [sig-api-machinery] Watchers + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:24:08.456 +Jan 14 04:24:08.456: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename watch 01/14/23 04:24:08.457 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:24:08.479 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:24:08.481 +[BeforeEach] [sig-api-machinery] Watchers + test/e2e/framework/metrics/init/init.go:31 +[It] should be able to start watching from a specific resource version [Conformance] + test/e2e/apimachinery/watch.go:142 +STEP: creating a new configmap 01/14/23 04:24:08.483 +STEP: modifying the configmap once 01/14/23 04:24:08.489 +STEP: modifying the configmap a second time 01/14/23 04:24:08.496 +STEP: deleting the configmap 01/14/23 04:24:08.504 +STEP: creating a watch on configmaps from the resource version returned by the first update 01/14/23 04:24:08.509 +STEP: Expecting to observe notifications for all changes to the configmap after the first update 01/14/23 04:24:08.51 +Jan 14 04:24:08.510: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5946 b96c8fd6-4f0d-4521-ab08-53afb868ae28 441576 0 2023-01-14 04:24:08 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2023-01-14 04:24:08 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +Jan 14 04:24:08.510: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5946 b96c8fd6-4f0d-4521-ab08-53afb868ae28 441577 0 2023-01-14 04:24:08 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2023-01-14 04:24:08 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} +[AfterEach] [sig-api-machinery] Watchers + test/e2e/framework/node/init/init.go:32 +Jan 14 04:24:08.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Watchers + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] Watchers + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] Watchers + tear down framework | framework.go:193 +STEP: Destroying namespace "watch-5946" for this suite. 
01/14/23 04:24:08.514 +------------------------------ +• [0.061 seconds] +[sig-api-machinery] Watchers +test/e2e/apimachinery/framework.go:23 + should be able to start watching from a specific resource version [Conformance] + test/e2e/apimachinery/watch.go:142 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Watchers + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:24:08.456 + Jan 14 04:24:08.456: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename watch 01/14/23 04:24:08.457 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:24:08.479 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:24:08.481 + [BeforeEach] [sig-api-machinery] Watchers + test/e2e/framework/metrics/init/init.go:31 + [It] should be able to start watching from a specific resource version [Conformance] + test/e2e/apimachinery/watch.go:142 + STEP: creating a new configmap 01/14/23 04:24:08.483 + STEP: modifying the configmap once 01/14/23 04:24:08.489 + STEP: modifying the configmap a second time 01/14/23 04:24:08.496 + STEP: deleting the configmap 01/14/23 04:24:08.504 + STEP: creating a watch on configmaps from the resource version returned by the first update 01/14/23 04:24:08.509 + STEP: Expecting to observe notifications for all changes to the configmap after the first update 01/14/23 04:24:08.51 + Jan 14 04:24:08.510: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5946 b96c8fd6-4f0d-4521-ab08-53afb868ae28 441576 0 2023-01-14 04:24:08 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2023-01-14 04:24:08 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} + Jan 14 04:24:08.510: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-5946 b96c8fd6-4f0d-4521-ab08-53afb868ae28 441577 0 2023-01-14 04:24:08 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2023-01-14 04:24:08 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} + [AfterEach] [sig-api-machinery] Watchers + test/e2e/framework/node/init/init.go:32 + Jan 14 04:24:08.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Watchers + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] Watchers + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] Watchers + tear down framework | framework.go:193 + STEP: Destroying namespace "watch-5946" for this suite. 
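[Aside: the watch test above records the resourceVersion returned by the first update, opens a watch at that version, and expects exactly the later MODIFIED and DELETED events to be replayed. The raw API makes the mechanics visible; configmap name and namespace are illustrative:]

kubectl create configmap watch-demo --from-literal=mutation=0
RV=$(kubectl get configmap watch-demo -o jsonpath='{.metadata.resourceVersion}')
kubectl patch configmap watch-demo -p '{"data":{"mutation":"1"}}'
kubectl delete configmap watch-demo
# Watching from $RV replays what happened after it: one MODIFIED, one DELETED.
kubectl get --raw "/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=${RV}&fieldSelector=metadata.name=watch-demo&timeoutSeconds=5"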
01/14/23 04:24:08.514 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl diff + should check if kubectl diff finds a difference for Deployments [Conformance] + test/e2e/kubectl/kubectl.go:931 +[BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:24:08.518 +Jan 14 04:24:08.518: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename kubectl 01/14/23 04:24:08.519 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:24:08.532 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:24:08.534 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[It] should check if kubectl diff finds a difference for Deployments [Conformance] + test/e2e/kubectl/kubectl.go:931 +STEP: create deployment with httpd image 01/14/23 04:24:08.536 +Jan 14 04:24:08.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-2551 create -f -' +Jan 14 04:24:09.064: INFO: stderr: "" +Jan 14 04:24:09.064: INFO: stdout: "deployment.apps/httpd-deployment created\n" +STEP: verify diff finds difference between live and declared image 01/14/23 04:24:09.064 +Jan 14 04:24:09.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-2551 diff -f -' +Jan 14 04:24:09.546: INFO: rc: 1 +Jan 14 04:24:09.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-2551 delete -f -' +Jan 14 04:24:09.613: INFO: stderr: "" +Jan 14 04:24:09.613: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 +Jan 14 04:24:09.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 +STEP: Destroying namespace "kubectl-2551" for this suite. 
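+
+The rc: 1 above is the expected result, not a failure: kubectl diff exits 0 when the live and declared objects match, 1 when they differ, and greater than 1 on error. A quick way to see this, with an illustrative deployment name and image tags:
+
+kubectl create deployment httpd-deployment --image=httpd:2.4.38-alpine
+# Generate a manifest declaring a different image and diff it against the live object.
+kubectl create deployment httpd-deployment --image=httpd:2.4.39-alpine \
+  --dry-run=client -o yaml | kubectl diff -f -
+echo "exit: $?"   # prints "exit: 1" because the images differ
+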
01/14/23 04:24:09.619 +------------------------------ +• [1.105 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl diff + test/e2e/kubectl/kubectl.go:925 + should check if kubectl diff finds a difference for Deployments [Conformance] + test/e2e/kubectl/kubectl.go:931 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:24:08.518 + Jan 14 04:24:08.518: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename kubectl 01/14/23 04:24:08.519 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:24:08.532 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:24:08.534 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [It] should check if kubectl diff finds a difference for Deployments [Conformance] + test/e2e/kubectl/kubectl.go:931 + STEP: create deployment with httpd image 01/14/23 04:24:08.536 + Jan 14 04:24:08.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-2551 create -f -' + Jan 14 04:24:09.064: INFO: stderr: "" + Jan 14 04:24:09.064: INFO: stdout: "deployment.apps/httpd-deployment created\n" + STEP: verify diff finds difference between live and declared image 01/14/23 04:24:09.064 + Jan 14 04:24:09.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-2551 diff -f -' + Jan 14 04:24:09.546: INFO: rc: 1 + Jan 14 04:24:09.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-2551 delete -f -' + Jan 14 04:24:09.613: INFO: stderr: "" + Jan 14 04:24:09.613: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 + Jan 14 04:24:09.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 + STEP: Destroying namespace "kubectl-2551" for this suite. 
01/14/23 04:24:09.619 + << End Captured GinkgoWriter Output +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate pod and apply defaults after mutation [Conformance] + test/e2e/apimachinery/webhook.go:264 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:24:09.624 +Jan 14 04:24:09.624: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename webhook 01/14/23 04:24:09.624 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:24:09.64 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:24:09.642 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 01/14/23 04:24:09.653 +STEP: Create role binding to let webhook read extension-apiserver-authentication 01/14/23 04:24:10.265 +STEP: Deploying the webhook pod 01/14/23 04:24:10.272 +STEP: Wait for the deployment to be ready 01/14/23 04:24:10.283 +Jan 14 04:24:10.290: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 01/14/23 04:24:12.301 +STEP: Verifying the service has paired with the endpoint 01/14/23 04:24:12.31 +Jan 14 04:24:13.311: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate pod and apply defaults after mutation [Conformance] + test/e2e/apimachinery/webhook.go:264 +STEP: Registering the mutating pod webhook via the AdmissionRegistration API 01/14/23 04:24:13.315 +STEP: create a pod that should be updated by the webhook 01/14/23 04:24:13.327 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Jan 14 04:24:13.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "webhook-4149" for this suite. 01/14/23 04:24:13.396 +STEP: Destroying namespace "webhook-4149-markers" for this suite. 
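+
+The webhook test deploys a mutating admission webhook and registers it for Pod CREATE operations. The registration object looks roughly like the sketch below; the configuration name, path, and caBundle are illustrative placeholders, while the service name and namespace mirror the ones deployed above:
+
+kubectl apply -f - <<'EOF'
+apiVersion: admissionregistration.k8s.io/v1
+kind: MutatingWebhookConfiguration
+metadata:
+  name: pod-defaulter                  # illustrative name
+webhooks:
+- name: pod-defaulter.example.com
+  admissionReviewVersions: ["v1"]
+  sideEffects: None
+  rules:
+  - apiGroups: [""]
+    apiVersions: ["v1"]
+    operations: ["CREATE"]
+    resources: ["pods"]
+  clientConfig:
+    service:
+      name: e2e-test-webhook           # service deployed by the test
+      namespace: webhook-4149
+      path: /mutating-pods             # illustrative path
+    caBundle: "<base64-encoded CA>"    # placeholder
+EOF
+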
01/14/23 04:24:13.4 +------------------------------ +• [3.783 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should mutate pod and apply defaults after mutation [Conformance] + test/e2e/apimachinery/webhook.go:264 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:24:09.624 + Jan 14 04:24:09.624: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename webhook 01/14/23 04:24:09.624 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:24:09.64 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:24:09.642 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 01/14/23 04:24:09.653 + STEP: Create role binding to let webhook read extension-apiserver-authentication 01/14/23 04:24:10.265 + STEP: Deploying the webhook pod 01/14/23 04:24:10.272 + STEP: Wait for the deployment to be ready 01/14/23 04:24:10.283 + Jan 14 04:24:10.290: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 01/14/23 04:24:12.301 + STEP: Verifying the service has paired with the endpoint 01/14/23 04:24:12.31 + Jan 14 04:24:13.311: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should mutate pod and apply defaults after mutation [Conformance] + test/e2e/apimachinery/webhook.go:264 + STEP: Registering the mutating pod webhook via the AdmissionRegistration API 01/14/23 04:24:13.315 + STEP: create a pod that should be updated by the webhook 01/14/23 04:24:13.327 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Jan 14 04:24:13.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "webhook-4149" for this suite. 01/14/23 04:24:13.396 + STEP: Destroying namespace "webhook-4149-markers" for this suite. 
01/14/23 04:24:13.4 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSS +------------------------------ +[sig-node] Sysctls [LinuxOnly] [NodeConformance] + should support sysctls [MinimumKubeletVersion:1.21] [Conformance] + test/e2e/common/node/sysctl.go:77 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/common/node/sysctl.go:37 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:24:13.407 +Jan 14 04:24:13.407: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename sysctl 01/14/23 04:24:13.408 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:24:13.421 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:24:13.423 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/common/node/sysctl.go:67 +[It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance] + test/e2e/common/node/sysctl.go:77 +STEP: Creating a pod with the kernel.shm_rmid_forced sysctl 01/14/23 04:24:13.425 +STEP: Watching for error events or started pod 01/14/23 04:24:13.435 +STEP: Waiting for pod completion 01/14/23 04:24:15.441 +Jan 14 04:24:15.441: INFO: Waiting up to 3m0s for pod "sysctl-5960ed52-9736-4135-89bc-c7aec7ca3864" in namespace "sysctl-2460" to be "completed" +Jan 14 04:24:15.444: INFO: Pod "sysctl-5960ed52-9736-4135-89bc-c7aec7ca3864": Phase="Pending", Reason="", readiness=false. Elapsed: 3.095505ms +Jan 14 04:24:17.450: INFO: Pod "sysctl-5960ed52-9736-4135-89bc-c7aec7ca3864": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00881251s +Jan 14 04:24:17.450: INFO: Pod "sysctl-5960ed52-9736-4135-89bc-c7aec7ca3864" satisfied condition "completed" +STEP: Checking that the pod succeeded 01/14/23 04:24:17.453 +STEP: Getting logs from the pod 01/14/23 04:24:17.453 +STEP: Checking that the sysctl is actually updated 01/14/23 04:24:17.466 +[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/framework/node/init/init.go:32 +Jan 14 04:24:17.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + tear down framework | framework.go:193 +STEP: Destroying namespace "sysctl-2460" for this suite. 
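+
+Safe sysctls such as kernel.shm_rmid_forced are set per pod through the pod-level securityContext, which is what this test exercises. A minimal sketch (pod name and image are illustrative):
+
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: sysctl-demo                    # illustrative
+spec:
+  restartPolicy: Never
+  securityContext:
+    sysctls:
+    - name: kernel.shm_rmid_forced
+      value: "1"
+  containers:
+  - name: test-container
+    image: busybox:1.29
+    # Print the effective value so it can be checked in the pod log, as the test does.
+    command: ["/bin/sh", "-c", "sysctl kernel.shm_rmid_forced"]
+EOF
+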
01/14/23 04:24:17.471 +------------------------------ +• [4.069 seconds] +[sig-node] Sysctls [LinuxOnly] [NodeConformance] +test/e2e/common/node/framework.go:23 + should support sysctls [MinimumKubeletVersion:1.21] [Conformance] + test/e2e/common/node/sysctl.go:77 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/common/node/sysctl.go:37 + [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:24:13.407 + Jan 14 04:24:13.407: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename sysctl 01/14/23 04:24:13.408 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:24:13.421 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:24:13.423 + [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/common/node/sysctl.go:67 + [It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance] + test/e2e/common/node/sysctl.go:77 + STEP: Creating a pod with the kernel.shm_rmid_forced sysctl 01/14/23 04:24:13.425 + STEP: Watching for error events or started pod 01/14/23 04:24:13.435 + STEP: Waiting for pod completion 01/14/23 04:24:15.441 + Jan 14 04:24:15.441: INFO: Waiting up to 3m0s for pod "sysctl-5960ed52-9736-4135-89bc-c7aec7ca3864" in namespace "sysctl-2460" to be "completed" + Jan 14 04:24:15.444: INFO: Pod "sysctl-5960ed52-9736-4135-89bc-c7aec7ca3864": Phase="Pending", Reason="", readiness=false. Elapsed: 3.095505ms + Jan 14 04:24:17.450: INFO: Pod "sysctl-5960ed52-9736-4135-89bc-c7aec7ca3864": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00881251s + Jan 14 04:24:17.450: INFO: Pod "sysctl-5960ed52-9736-4135-89bc-c7aec7ca3864" satisfied condition "completed" + STEP: Checking that the pod succeeded 01/14/23 04:24:17.453 + STEP: Getting logs from the pod 01/14/23 04:24:17.453 + STEP: Checking that the sysctl is actually updated 01/14/23 04:24:17.466 + [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/framework/node/init/init.go:32 + Jan 14 04:24:17.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + tear down framework | framework.go:193 + STEP: Destroying namespace "sysctl-2460" for this suite. 
01/14/23 04:24:17.471 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should serve a basic endpoint from pods [Conformance] + test/e2e/network/service.go:787 +[BeforeEach] [sig-network] Services + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:24:17.478 +Jan 14 04:24:17.478: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename services 01/14/23 04:24:17.479 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:24:17.556 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:24:17.558 +[BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should serve a basic endpoint from pods [Conformance] + test/e2e/network/service.go:787 +STEP: creating service endpoint-test2 in namespace services-3684 01/14/23 04:24:17.561 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3684 to expose endpoints map[] 01/14/23 04:24:17.577 +Jan 14 04:24:17.586: INFO: successfully validated that service endpoint-test2 in namespace services-3684 exposes endpoints map[] +STEP: Creating pod pod1 in namespace services-3684 01/14/23 04:24:17.586 +Jan 14 04:24:17.596: INFO: Waiting up to 5m0s for pod "pod1" in namespace "services-3684" to be "running and ready" +Jan 14 04:24:17.599: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.347283ms +Jan 14 04:24:17.599: INFO: The phase of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Jan 14 04:24:19.623: INFO: Pod "pod1": Phase="Running", Reason="", readiness=true. Elapsed: 2.027596894s +Jan 14 04:24:19.623: INFO: The phase of Pod pod1 is Running (Ready = true) +Jan 14 04:24:19.623: INFO: Pod "pod1" satisfied condition "running and ready" +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3684 to expose endpoints map[pod1:[80]] 01/14/23 04:24:19.627 +Jan 14 04:24:19.636: INFO: successfully validated that service endpoint-test2 in namespace services-3684 exposes endpoints map[pod1:[80]] +STEP: Checking if the Service forwards traffic to pod1 01/14/23 04:24:19.636 +Jan 14 04:24:19.636: INFO: Creating new exec pod +Jan 14 04:24:19.641: INFO: Waiting up to 5m0s for pod "execpodrhhgn" in namespace "services-3684" to be "running" +Jan 14 04:24:19.644: INFO: Pod "execpodrhhgn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.84308ms +Jan 14 04:24:21.650: INFO: Pod "execpodrhhgn": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008134661s +Jan 14 04:24:21.650: INFO: Pod "execpodrhhgn" satisfied condition "running" +Jan 14 04:24:22.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-3684 exec execpodrhhgn -- /bin/sh -x -c nc -v -z -w 2 endpoint-test2 80' +Jan 14 04:24:22.760: INFO: stderr: "+ nc -v -z -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" +Jan 14 04:24:22.760: INFO: stdout: "" +Jan 14 04:24:22.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-3684 exec execpodrhhgn -- /bin/sh -x -c nc -v -z -w 2 10.55.254.173 80' +Jan 14 04:24:22.871: INFO: stderr: "+ nc -v -z -w 2 10.55.254.173 80\nConnection to 10.55.254.173 80 port [tcp/http] succeeded!\n" +Jan 14 04:24:22.871: INFO: stdout: "" +STEP: Creating pod pod2 in namespace services-3684 01/14/23 04:24:22.871 +Jan 14 04:24:22.880: INFO: Waiting up to 5m0s for pod "pod2" in namespace "services-3684" to be "running and ready" +Jan 14 04:24:22.883: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.225952ms +Jan 14 04:24:22.883: INFO: The phase of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Jan 14 04:24:24.887: INFO: Pod "pod2": Phase="Running", Reason="", readiness=true. Elapsed: 2.007650801s +Jan 14 04:24:24.887: INFO: The phase of Pod pod2 is Running (Ready = true) +Jan 14 04:24:24.887: INFO: Pod "pod2" satisfied condition "running and ready" +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3684 to expose endpoints map[pod1:[80] pod2:[80]] 01/14/23 04:24:24.891 +Jan 14 04:24:24.902: INFO: successfully validated that service endpoint-test2 in namespace services-3684 exposes endpoints map[pod1:[80] pod2:[80]] +STEP: Checking if the Service forwards traffic to pod1 and pod2 01/14/23 04:24:24.902 +Jan 14 04:24:25.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-3684 exec execpodrhhgn -- /bin/sh -x -c nc -v -z -w 2 endpoint-test2 80' +Jan 14 04:24:26.016: INFO: stderr: "+ nc -v -z -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" +Jan 14 04:24:26.016: INFO: stdout: "" +Jan 14 04:24:26.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-3684 exec execpodrhhgn -- /bin/sh -x -c nc -v -z -w 2 10.55.254.173 80' +Jan 14 04:24:26.127: INFO: stderr: "+ nc -v -z -w 2 10.55.254.173 80\nConnection to 10.55.254.173 80 port [tcp/http] succeeded!\n" +Jan 14 04:24:26.127: INFO: stdout: "" +STEP: Deleting pod pod1 in namespace services-3684 01/14/23 04:24:26.127 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3684 to expose endpoints map[pod2:[80]] 01/14/23 04:24:26.148 +Jan 14 04:24:26.162: INFO: successfully validated that service endpoint-test2 in namespace services-3684 exposes endpoints map[pod2:[80]] +STEP: Checking if the Service forwards traffic to pod2 01/14/23 04:24:26.162 +Jan 14 04:24:27.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-3684 exec execpodrhhgn -- /bin/sh -x -c nc -v -z -w 2 endpoint-test2 80' +Jan 14 04:24:27.270: INFO: stderr: "+ nc -v -z -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" +Jan 14 04:24:27.270: INFO: stdout: "" +Jan 14 04:24:27.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-3684 exec execpodrhhgn -- 
/bin/sh -x -c nc -v -z -w 2 10.55.254.173 80' +Jan 14 04:24:27.376: INFO: stderr: "+ nc -v -z -w 2 10.55.254.173 80\nConnection to 10.55.254.173 80 port [tcp/http] succeeded!\n" +Jan 14 04:24:27.376: INFO: stdout: "" +STEP: Deleting pod pod2 in namespace services-3684 01/14/23 04:24:27.376 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3684 to expose endpoints map[] 01/14/23 04:24:27.394 +Jan 14 04:24:27.400: INFO: successfully validated that service endpoint-test2 in namespace services-3684 exposes endpoints map[] +[AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 +Jan 14 04:24:27.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 +STEP: Destroying namespace "services-3684" for this suite. 01/14/23 04:24:27.422 +------------------------------ +• [SLOW TEST] [9.949 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should serve a basic endpoint from pods [Conformance] + test/e2e/network/service.go:787 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:24:17.478 + Jan 14 04:24:17.478: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename services 01/14/23 04:24:17.479 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:24:17.556 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:24:17.558 + [BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should serve a basic endpoint from pods [Conformance] + test/e2e/network/service.go:787 + STEP: creating service endpoint-test2 in namespace services-3684 01/14/23 04:24:17.561 + STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3684 to expose endpoints map[] 01/14/23 04:24:17.577 + Jan 14 04:24:17.586: INFO: successfully validated that service endpoint-test2 in namespace services-3684 exposes endpoints map[] + STEP: Creating pod pod1 in namespace services-3684 01/14/23 04:24:17.586 + Jan 14 04:24:17.596: INFO: Waiting up to 5m0s for pod "pod1" in namespace "services-3684" to be "running and ready" + Jan 14 04:24:17.599: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.347283ms + Jan 14 04:24:17.599: INFO: The phase of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) + Jan 14 04:24:19.623: INFO: Pod "pod1": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.027596894s + Jan 14 04:24:19.623: INFO: The phase of Pod pod1 is Running (Ready = true) + Jan 14 04:24:19.623: INFO: Pod "pod1" satisfied condition "running and ready" + STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3684 to expose endpoints map[pod1:[80]] 01/14/23 04:24:19.627 + Jan 14 04:24:19.636: INFO: successfully validated that service endpoint-test2 in namespace services-3684 exposes endpoints map[pod1:[80]] + STEP: Checking if the Service forwards traffic to pod1 01/14/23 04:24:19.636 + Jan 14 04:24:19.636: INFO: Creating new exec pod + Jan 14 04:24:19.641: INFO: Waiting up to 5m0s for pod "execpodrhhgn" in namespace "services-3684" to be "running" + Jan 14 04:24:19.644: INFO: Pod "execpodrhhgn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.84308ms + Jan 14 04:24:21.650: INFO: Pod "execpodrhhgn": Phase="Running", Reason="", readiness=true. Elapsed: 2.008134661s + Jan 14 04:24:21.650: INFO: Pod "execpodrhhgn" satisfied condition "running" + Jan 14 04:24:22.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-3684 exec execpodrhhgn -- /bin/sh -x -c nc -v -z -w 2 endpoint-test2 80' + Jan 14 04:24:22.760: INFO: stderr: "+ nc -v -z -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" + Jan 14 04:24:22.760: INFO: stdout: "" + Jan 14 04:24:22.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-3684 exec execpodrhhgn -- /bin/sh -x -c nc -v -z -w 2 10.55.254.173 80' + Jan 14 04:24:22.871: INFO: stderr: "+ nc -v -z -w 2 10.55.254.173 80\nConnection to 10.55.254.173 80 port [tcp/http] succeeded!\n" + Jan 14 04:24:22.871: INFO: stdout: "" + STEP: Creating pod pod2 in namespace services-3684 01/14/23 04:24:22.871 + Jan 14 04:24:22.880: INFO: Waiting up to 5m0s for pod "pod2" in namespace "services-3684" to be "running and ready" + Jan 14 04:24:22.883: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.225952ms + Jan 14 04:24:22.883: INFO: The phase of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) + Jan 14 04:24:24.887: INFO: Pod "pod2": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.007650801s + Jan 14 04:24:24.887: INFO: The phase of Pod pod2 is Running (Ready = true) + Jan 14 04:24:24.887: INFO: Pod "pod2" satisfied condition "running and ready" + STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3684 to expose endpoints map[pod1:[80] pod2:[80]] 01/14/23 04:24:24.891 + Jan 14 04:24:24.902: INFO: successfully validated that service endpoint-test2 in namespace services-3684 exposes endpoints map[pod1:[80] pod2:[80]] + STEP: Checking if the Service forwards traffic to pod1 and pod2 01/14/23 04:24:24.902 + Jan 14 04:24:25.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-3684 exec execpodrhhgn -- /bin/sh -x -c nc -v -z -w 2 endpoint-test2 80' + Jan 14 04:24:26.016: INFO: stderr: "+ nc -v -z -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" + Jan 14 04:24:26.016: INFO: stdout: "" + Jan 14 04:24:26.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-3684 exec execpodrhhgn -- /bin/sh -x -c nc -v -z -w 2 10.55.254.173 80' + Jan 14 04:24:26.127: INFO: stderr: "+ nc -v -z -w 2 10.55.254.173 80\nConnection to 10.55.254.173 80 port [tcp/http] succeeded!\n" + Jan 14 04:24:26.127: INFO: stdout: "" + STEP: Deleting pod pod1 in namespace services-3684 01/14/23 04:24:26.127 + STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3684 to expose endpoints map[pod2:[80]] 01/14/23 04:24:26.148 + Jan 14 04:24:26.162: INFO: successfully validated that service endpoint-test2 in namespace services-3684 exposes endpoints map[pod2:[80]] + STEP: Checking if the Service forwards traffic to pod2 01/14/23 04:24:26.162 + Jan 14 04:24:27.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-3684 exec execpodrhhgn -- /bin/sh -x -c nc -v -z -w 2 endpoint-test2 80' + Jan 14 04:24:27.270: INFO: stderr: "+ nc -v -z -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" + Jan 14 04:24:27.270: INFO: stdout: "" + Jan 14 04:24:27.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-3684 exec execpodrhhgn -- /bin/sh -x -c nc -v -z -w 2 10.55.254.173 80' + Jan 14 04:24:27.376: INFO: stderr: "+ nc -v -z -w 2 10.55.254.173 80\nConnection to 10.55.254.173 80 port [tcp/http] succeeded!\n" + Jan 14 04:24:27.376: INFO: stdout: "" + STEP: Deleting pod pod2 in namespace services-3684 01/14/23 04:24:27.376 + STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3684 to expose endpoints map[] 01/14/23 04:24:27.394 + Jan 14 04:24:27.400: INFO: successfully validated that service endpoint-test2 in namespace services-3684 exposes endpoints map[] + [AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 + Jan 14 04:24:27.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 + STEP: Destroying namespace "services-3684" for this suite. 
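+
+What this service test validates is that the Endpoints object tracks ready pods matching the Service selector as pods are created and deleted. The same lifecycle can be observed by hand; a sketch with illustrative names (kubectl create service clusterip sets the selector app=<name>):
+
+kubectl create service clusterip endpoint-demo --tcp=80:80
+kubectl get endpoints endpoint-demo            # no addresses yet
+kubectl run pod1 --image=nginx --labels=app=endpoint-demo
+kubectl wait pod/pod1 --for=condition=Ready
+kubectl get endpoints endpoint-demo            # now lists pod1's IP on port 80
+kubectl delete pod pod1
+kubectl get endpoints endpoint-demo            # back to no addresses
+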
01/14/23 04:24:27.422 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:68 +[BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:24:27.428 +Jan 14 04:24:27.428: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename projected 01/14/23 04:24:27.429 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:24:27.443 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:24:27.445 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 +[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:68 +STEP: Creating a pod to test downward API volume plugin 01/14/23 04:24:27.448 +Jan 14 04:24:27.455: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d2e5881e-ac64-45ea-9570-02c4686e72f0" in namespace "projected-3816" to be "Succeeded or Failed" +Jan 14 04:24:27.458: INFO: Pod "downwardapi-volume-d2e5881e-ac64-45ea-9570-02c4686e72f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.767149ms +Jan 14 04:24:29.463: INFO: Pod "downwardapi-volume-d2e5881e-ac64-45ea-9570-02c4686e72f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008024992s +Jan 14 04:24:31.464: INFO: Pod "downwardapi-volume-d2e5881e-ac64-45ea-9570-02c4686e72f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008210898s +STEP: Saw pod success 01/14/23 04:24:31.464 +Jan 14 04:24:31.464: INFO: Pod "downwardapi-volume-d2e5881e-ac64-45ea-9570-02c4686e72f0" satisfied condition "Succeeded or Failed" +Jan 14 04:24:31.467: INFO: Trying to get logs from node 10.0.1.99 pod downwardapi-volume-d2e5881e-ac64-45ea-9570-02c4686e72f0 container client-container: +STEP: delete the pod 01/14/23 04:24:31.473 +Jan 14 04:24:31.486: INFO: Waiting for pod downwardapi-volume-d2e5881e-ac64-45ea-9570-02c4686e72f0 to disappear +Jan 14 04:24:31.489: INFO: Pod downwardapi-volume-d2e5881e-ac64-45ea-9570-02c4686e72f0 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/node/init/init.go:32 +Jan 14 04:24:31.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-3816" for this suite. 
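+
+The projected downwardAPI test asserts that files materialize with the permission bits requested via defaultMode. A minimal sketch of the volume shape, with the pod name and file path illustrative:
+
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: downward-mode-demo             # illustrative
+spec:
+  restartPolicy: Never
+  containers:
+  - name: client-container
+    image: busybox:1.29
+    command: ["/bin/sh", "-c", "ls -l /etc/podinfo"]
+    volumeMounts:
+    - name: podinfo
+      mountPath: /etc/podinfo
+  volumes:
+  - name: podinfo
+    projected:
+      defaultMode: 0400                # files should show up as -r--------
+      sources:
+      - downwardAPI:
+          items:
+          - path: podname
+            fieldRef:
+              fieldPath: metadata.name
+EOF
+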
01/14/23 04:24:31.494 +------------------------------ +• [4.072 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:68 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:24:27.428 + Jan 14 04:24:27.428: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename projected 01/14/23 04:24:27.429 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:24:27.443 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:24:27.445 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 + [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:68 + STEP: Creating a pod to test downward API volume plugin 01/14/23 04:24:27.448 + Jan 14 04:24:27.455: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d2e5881e-ac64-45ea-9570-02c4686e72f0" in namespace "projected-3816" to be "Succeeded or Failed" + Jan 14 04:24:27.458: INFO: Pod "downwardapi-volume-d2e5881e-ac64-45ea-9570-02c4686e72f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.767149ms + Jan 14 04:24:29.463: INFO: Pod "downwardapi-volume-d2e5881e-ac64-45ea-9570-02c4686e72f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008024992s + Jan 14 04:24:31.464: INFO: Pod "downwardapi-volume-d2e5881e-ac64-45ea-9570-02c4686e72f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008210898s + STEP: Saw pod success 01/14/23 04:24:31.464 + Jan 14 04:24:31.464: INFO: Pod "downwardapi-volume-d2e5881e-ac64-45ea-9570-02c4686e72f0" satisfied condition "Succeeded or Failed" + Jan 14 04:24:31.467: INFO: Trying to get logs from node 10.0.1.99 pod downwardapi-volume-d2e5881e-ac64-45ea-9570-02c4686e72f0 container client-container: + STEP: delete the pod 01/14/23 04:24:31.473 + Jan 14 04:24:31.486: INFO: Waiting for pod downwardapi-volume-d2e5881e-ac64-45ea-9570-02c4686e72f0 to disappear + Jan 14 04:24:31.489: INFO: Pod downwardapi-volume-d2e5881e-ac64-45ea-9570-02c4686e72f0 no longer exists + [AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/node/init/init.go:32 + Jan 14 04:24:31.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-3816" for this suite. 
01/14/23 04:24:31.494 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-storage] EmptyDir volumes + volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:87 +[BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:24:31.5 +Jan 14 04:24:31.500: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename emptydir 01/14/23 04:24:31.501 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:24:31.514 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:24:31.516 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 +[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:87 +STEP: Creating a pod to test emptydir volume type on tmpfs 01/14/23 04:24:31.519 +Jan 14 04:24:31.530: INFO: Waiting up to 5m0s for pod "pod-22ae4ab7-4ed5-4e57-8555-d18f594058c2" in namespace "emptydir-1178" to be "Succeeded or Failed" +Jan 14 04:24:31.534: INFO: Pod "pod-22ae4ab7-4ed5-4e57-8555-d18f594058c2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.564994ms +Jan 14 04:24:33.539: INFO: Pod "pod-22ae4ab7-4ed5-4e57-8555-d18f594058c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008804904s +Jan 14 04:24:35.539: INFO: Pod "pod-22ae4ab7-4ed5-4e57-8555-d18f594058c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008404856s +STEP: Saw pod success 01/14/23 04:24:35.539 +Jan 14 04:24:35.539: INFO: Pod "pod-22ae4ab7-4ed5-4e57-8555-d18f594058c2" satisfied condition "Succeeded or Failed" +Jan 14 04:24:35.542: INFO: Trying to get logs from node 10.0.1.99 pod pod-22ae4ab7-4ed5-4e57-8555-d18f594058c2 container test-container: +STEP: delete the pod 01/14/23 04:24:35.548 +Jan 14 04:24:35.565: INFO: Waiting for pod pod-22ae4ab7-4ed5-4e57-8555-d18f594058c2 to disappear +Jan 14 04:24:35.568: INFO: Pod pod-22ae4ab7-4ed5-4e57-8555-d18f594058c2 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 +Jan 14 04:24:35.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 +STEP: Destroying namespace "emptydir-1178" for this suite. 
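+
+Setting medium: Memory backs an emptyDir with tmpfs, and the test checks that the mount comes up with the expected mode. A sketch with illustrative names:
+
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: emptydir-tmpfs-demo            # illustrative
+spec:
+  restartPolicy: Never
+  containers:
+  - name: test-container
+    image: busybox:1.29
+    # Show the filesystem type and the mount's permission bits.
+    command: ["/bin/sh", "-c", "mount | grep /mnt/tmp; ls -ld /mnt/tmp"]
+    volumeMounts:
+    - name: scratch
+      mountPath: /mnt/tmp
+  volumes:
+  - name: scratch
+    emptyDir:
+      medium: Memory
+EOF
+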
01/14/23 04:24:35.573 +------------------------------ +• [4.084 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:87 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:24:31.5 + Jan 14 04:24:31.500: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename emptydir 01/14/23 04:24:31.501 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:24:31.514 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:24:31.516 + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 + [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:87 + STEP: Creating a pod to test emptydir volume type on tmpfs 01/14/23 04:24:31.519 + Jan 14 04:24:31.530: INFO: Waiting up to 5m0s for pod "pod-22ae4ab7-4ed5-4e57-8555-d18f594058c2" in namespace "emptydir-1178" to be "Succeeded or Failed" + Jan 14 04:24:31.534: INFO: Pod "pod-22ae4ab7-4ed5-4e57-8555-d18f594058c2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.564994ms + Jan 14 04:24:33.539: INFO: Pod "pod-22ae4ab7-4ed5-4e57-8555-d18f594058c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008804904s + Jan 14 04:24:35.539: INFO: Pod "pod-22ae4ab7-4ed5-4e57-8555-d18f594058c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008404856s + STEP: Saw pod success 01/14/23 04:24:35.539 + Jan 14 04:24:35.539: INFO: Pod "pod-22ae4ab7-4ed5-4e57-8555-d18f594058c2" satisfied condition "Succeeded or Failed" + Jan 14 04:24:35.542: INFO: Trying to get logs from node 10.0.1.99 pod pod-22ae4ab7-4ed5-4e57-8555-d18f594058c2 container test-container: + STEP: delete the pod 01/14/23 04:24:35.548 + Jan 14 04:24:35.565: INFO: Waiting for pod pod-22ae4ab7-4ed5-4e57-8555-d18f594058c2 to disappear + Jan 14 04:24:35.568: INFO: Pod pod-22ae4ab7-4ed5-4e57-8555-d18f594058c2 no longer exists + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 + Jan 14 04:24:35.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 + STEP: Destroying namespace "emptydir-1178" for this suite. 
01/14/23 04:24:35.573 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] + test/e2e/common/node/expansion.go:225 +[BeforeEach] [sig-node] Variable Expansion + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:24:35.586 +Jan 14 04:24:35.586: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename var-expansion 01/14/23 04:24:35.587 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:24:35.601 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:24:35.603 +[BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:31 +[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] + test/e2e/common/node/expansion.go:225 +STEP: creating the pod with failed condition 01/14/23 04:24:35.605 +Jan 14 04:24:35.618: INFO: Waiting up to 2m0s for pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc" in namespace "var-expansion-4791" to be "running" +Jan 14 04:24:35.626: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.802036ms +Jan 14 04:24:37.631: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013783628s +Jan 14 04:24:39.634: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016154207s +Jan 14 04:24:41.631: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01382068s +Jan 14 04:24:43.633: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.015166651s +Jan 14 04:24:45.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.014053369s +Jan 14 04:24:47.633: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.015088207s +Jan 14 04:24:49.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 14.014376158s +Jan 14 04:24:51.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 16.014710885s +Jan 14 04:24:53.633: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 18.015521662s +Jan 14 04:24:55.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 20.014095644s +Jan 14 04:24:57.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 22.014279016s +Jan 14 04:24:59.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 24.014750048s +Jan 14 04:25:01.633: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 26.014970777s +Jan 14 04:25:03.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 28.014259319s +Jan 14 04:25:05.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 30.014009432s +Jan 14 04:25:07.631: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 32.01369257s +Jan 14 04:25:09.633: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 34.015424358s +Jan 14 04:25:11.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 36.014525224s +Jan 14 04:25:13.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 38.014350356s +Jan 14 04:25:15.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 40.013932916s +Jan 14 04:25:17.631: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 42.013596925s +Jan 14 04:25:19.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 44.01424351s +Jan 14 04:25:21.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 46.014385475s +Jan 14 04:25:23.630: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 48.01283992s +Jan 14 04:25:25.631: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 50.013438621s +Jan 14 04:25:27.633: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 52.015521787s +Jan 14 04:25:29.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 54.014368409s +Jan 14 04:25:31.631: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 56.013647869s +Jan 14 04:25:33.633: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 58.015200167s +Jan 14 04:25:35.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.014056731s +Jan 14 04:25:37.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.014207885s +Jan 14 04:25:39.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.014423004s +Jan 14 04:25:41.633: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.014920747s +Jan 14 04:25:43.633: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.015324974s +Jan 14 04:25:45.631: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.013843926s +Jan 14 04:25:47.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m12.014727631s +Jan 14 04:25:49.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.014548073s +Jan 14 04:25:51.631: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.013871407s +Jan 14 04:25:53.631: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.013530669s +Jan 14 04:25:55.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.014400927s +Jan 14 04:25:57.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.014516379s +Jan 14 04:25:59.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.014047288s +Jan 14 04:26:01.633: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.015042485s +Jan 14 04:26:03.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.014432033s +Jan 14 04:26:05.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.013930409s +Jan 14 04:26:07.631: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.013068614s +Jan 14 04:26:09.631: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.013311313s +Jan 14 04:26:11.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.014551178s +Jan 14 04:26:13.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.01426109s +Jan 14 04:26:15.631: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.01385191s +Jan 14 04:26:17.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.014626413s +Jan 14 04:26:19.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.01464318s +Jan 14 04:26:21.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.014058147s +Jan 14 04:26:23.631: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.013752309s +Jan 14 04:26:25.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.014146052s +Jan 14 04:26:27.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.013930999s +Jan 14 04:26:29.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m54.014583877s +Jan 14 04:26:31.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m56.014603283s +Jan 14 04:26:33.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.014670175s +Jan 14 04:26:35.631: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.013047242s +Jan 14 04:26:35.634: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.016244971s +STEP: updating the pod 01/14/23 04:26:35.634 +Jan 14 04:26:36.148: INFO: Successfully updated pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc" +STEP: waiting for pod running 01/14/23 04:26:36.148 +Jan 14 04:26:36.148: INFO: Waiting up to 2m0s for pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc" in namespace "var-expansion-4791" to be "running" +Jan 14 04:26:36.151: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.197816ms +Jan 14 04:26:38.155: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Running", Reason="", readiness=true. Elapsed: 2.007415203s +Jan 14 04:26:38.155: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc" satisfied condition "running" +STEP: deleting the pod gracefully 01/14/23 04:26:38.155 +Jan 14 04:26:38.155: INFO: Deleting pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc" in namespace "var-expansion-4791" +Jan 14 04:26:38.164: INFO: Wait up to 5m0s for pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + test/e2e/framework/node/init/init.go:32 +Jan 14 04:27:10.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Variable Expansion + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Variable Expansion + tear down framework | framework.go:193 +STEP: Destroying namespace "var-expansion-4791" for this suite. 
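+
+The long Pending loop above is the point of this test: a volumeMount whose subPathExpr cannot be resolved keeps the pod from starting, and updating the pod so the expansion succeeds lets it run. The feature itself looks like the sketch below, with illustrative names and a resolvable expansion:
+
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: subpath-expansion-demo         # illustrative
+spec:
+  restartPolicy: Never
+  containers:
+  - name: main
+    image: busybox:1.29
+    command: ["/bin/sh", "-c", "ls /data"]
+    env:
+    - name: POD_NAME
+      valueFrom:
+        fieldRef:
+          fieldPath: metadata.name
+    volumeMounts:
+    - name: work
+      mountPath: /data
+      subPathExpr: $(POD_NAME)         # expanded per pod from the env var above
+  volumes:
+  - name: work
+    emptyDir: {}
+EOF
+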
01/14/23 04:27:10.176 +------------------------------ +• [SLOW TEST] [154.596 seconds] +[sig-node] Variable Expansion +test/e2e/common/node/framework.go:23 + should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] + test/e2e/common/node/expansion.go:225 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Variable Expansion + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:24:35.586 + Jan 14 04:24:35.586: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename var-expansion 01/14/23 04:24:35.587 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:24:35.601 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:24:35.603 + [BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:31 + [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] + test/e2e/common/node/expansion.go:225 + STEP: creating the pod with failed condition 01/14/23 04:24:35.605 + Jan 14 04:24:35.618: INFO: Waiting up to 2m0s for pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc" in namespace "var-expansion-4791" to be "running" + Jan 14 04:24:35.626: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.802036ms + Jan 14 04:24:37.631: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013783628s + Jan 14 04:24:39.634: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016154207s + Jan 14 04:24:41.631: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01382068s + Jan 14 04:24:43.633: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.015166651s + Jan 14 04:24:45.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.014053369s + Jan 14 04:24:47.633: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.015088207s + Jan 14 04:24:49.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 14.014376158s + Jan 14 04:24:51.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 16.014710885s + Jan 14 04:24:53.633: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 18.015521662s + Jan 14 04:24:55.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 20.014095644s + Jan 14 04:24:57.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 22.014279016s + Jan 14 04:24:59.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 24.014750048s + Jan 14 04:25:01.633: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 26.014970777s + Jan 14 04:25:03.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 28.014259319s + Jan 14 04:25:05.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 30.014009432s + Jan 14 04:25:07.631: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 32.01369257s + Jan 14 04:25:09.633: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 34.015424358s + Jan 14 04:25:11.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 36.014525224s + Jan 14 04:25:13.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 38.014350356s + Jan 14 04:25:15.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 40.013932916s + Jan 14 04:25:17.631: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 42.013596925s + Jan 14 04:25:19.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 44.01424351s + Jan 14 04:25:21.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 46.014385475s + Jan 14 04:25:23.630: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 48.01283992s + Jan 14 04:25:25.631: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 50.013438621s + Jan 14 04:25:27.633: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 52.015521787s + Jan 14 04:25:29.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 54.014368409s + Jan 14 04:25:31.631: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 56.013647869s + Jan 14 04:25:33.633: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 58.015200167s + Jan 14 04:25:35.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.014056731s + Jan 14 04:25:37.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.014207885s + Jan 14 04:25:39.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.014423004s + Jan 14 04:25:41.633: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.014920747s + Jan 14 04:25:43.633: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.015324974s + Jan 14 04:25:45.631: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m10.013843926s + Jan 14 04:25:47.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m12.014727631s + Jan 14 04:25:49.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.014548073s + Jan 14 04:25:51.631: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.013871407s + Jan 14 04:25:53.631: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.013530669s + Jan 14 04:25:55.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.014400927s + Jan 14 04:25:57.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.014516379s + Jan 14 04:25:59.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.014047288s + Jan 14 04:26:01.633: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.015042485s + Jan 14 04:26:03.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.014432033s + Jan 14 04:26:05.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.013930409s + Jan 14 04:26:07.631: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.013068614s + Jan 14 04:26:09.631: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.013311313s + Jan 14 04:26:11.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.014551178s + Jan 14 04:26:13.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.01426109s + Jan 14 04:26:15.631: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m40.01385191s + Jan 14 04:26:17.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.014626413s + Jan 14 04:26:19.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.01464318s + Jan 14 04:26:21.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.014058147s + Jan 14 04:26:23.631: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m48.013752309s + Jan 14 04:26:25.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m50.014146052s + Jan 14 04:26:27.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.013930999s + Jan 14 04:26:29.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1m54.014583877s + Jan 14 04:26:31.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m56.014603283s + Jan 14 04:26:33.632: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 1m58.014670175s + Jan 14 04:26:35.631: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.013047242s + Jan 14 04:26:35.634: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.016244971s + STEP: updating the pod 01/14/23 04:26:35.634 + Jan 14 04:26:36.148: INFO: Successfully updated pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc" + STEP: waiting for pod running 01/14/23 04:26:36.148 + Jan 14 04:26:36.148: INFO: Waiting up to 2m0s for pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc" in namespace "var-expansion-4791" to be "running" + Jan 14 04:26:36.151: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.197816ms + Jan 14 04:26:38.155: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc": Phase="Running", Reason="", readiness=true. Elapsed: 2.007415203s + Jan 14 04:26:38.155: INFO: Pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc" satisfied condition "running" + STEP: deleting the pod gracefully 01/14/23 04:26:38.155 + Jan 14 04:26:38.155: INFO: Deleting pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc" in namespace "var-expansion-4791" + Jan 14 04:26:38.164: INFO: Wait up to 5m0s for pod "var-expansion-40934666-e7e5-4fd5-9294-3ec474066cfc" to be fully deleted + [AfterEach] [sig-node] Variable Expansion + test/e2e/framework/node/init/init.go:32 + Jan 14 04:27:10.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Variable Expansion + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Variable Expansion + tear down framework | framework.go:193 + STEP: Destroying namespace "var-expansion-4791" for this suite. 
01/14/23 04:27:10.176 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-node] Downward API + should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:166 +[BeforeEach] [sig-node] Downward API + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:27:10.182 +Jan 14 04:27:10.183: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename downward-api 01/14/23 04:27:10.183 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:27:10.196 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:27:10.198 +[BeforeEach] [sig-node] Downward API + test/e2e/framework/metrics/init/init.go:31 +[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:166 +STEP: Creating a pod to test downward api env vars 01/14/23 04:27:10.2 +Jan 14 04:27:10.214: INFO: Waiting up to 5m0s for pod "downward-api-8abb8e55-ce35-4179-9462-89d32821510e" in namespace "downward-api-7586" to be "Succeeded or Failed" +Jan 14 04:27:10.217: INFO: Pod "downward-api-8abb8e55-ce35-4179-9462-89d32821510e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.051804ms +Jan 14 04:27:12.223: INFO: Pod "downward-api-8abb8e55-ce35-4179-9462-89d32821510e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008913232s +Jan 14 04:27:14.223: INFO: Pod "downward-api-8abb8e55-ce35-4179-9462-89d32821510e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008374582s +STEP: Saw pod success 01/14/23 04:27:14.223 +Jan 14 04:27:14.223: INFO: Pod "downward-api-8abb8e55-ce35-4179-9462-89d32821510e" satisfied condition "Succeeded or Failed" +Jan 14 04:27:14.226: INFO: Trying to get logs from node 10.0.1.99 pod downward-api-8abb8e55-ce35-4179-9462-89d32821510e container dapi-container: +STEP: delete the pod 01/14/23 04:27:14.239 +Jan 14 04:27:14.254: INFO: Waiting for pod downward-api-8abb8e55-ce35-4179-9462-89d32821510e to disappear +Jan 14 04:27:14.257: INFO: Pod downward-api-8abb8e55-ce35-4179-9462-89d32821510e no longer exists +[AfterEach] [sig-node] Downward API + test/e2e/framework/node/init/init.go:32 +Jan 14 04:27:14.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Downward API + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Downward API + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Downward API + tear down framework | framework.go:193 +STEP: Destroying namespace "downward-api-7586" for this suite. 
01/14/23 04:27:14.262 +------------------------------ +• [4.085 seconds] +[sig-node] Downward API +test/e2e/common/node/framework.go:23 + should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:166 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Downward API + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:27:10.182 + Jan 14 04:27:10.183: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename downward-api 01/14/23 04:27:10.183 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:27:10.196 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:27:10.198 + [BeforeEach] [sig-node] Downward API + test/e2e/framework/metrics/init/init.go:31 + [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:166 + STEP: Creating a pod to test downward api env vars 01/14/23 04:27:10.2 + Jan 14 04:27:10.214: INFO: Waiting up to 5m0s for pod "downward-api-8abb8e55-ce35-4179-9462-89d32821510e" in namespace "downward-api-7586" to be "Succeeded or Failed" + Jan 14 04:27:10.217: INFO: Pod "downward-api-8abb8e55-ce35-4179-9462-89d32821510e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.051804ms + Jan 14 04:27:12.223: INFO: Pod "downward-api-8abb8e55-ce35-4179-9462-89d32821510e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008913232s + Jan 14 04:27:14.223: INFO: Pod "downward-api-8abb8e55-ce35-4179-9462-89d32821510e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008374582s + STEP: Saw pod success 01/14/23 04:27:14.223 + Jan 14 04:27:14.223: INFO: Pod "downward-api-8abb8e55-ce35-4179-9462-89d32821510e" satisfied condition "Succeeded or Failed" + Jan 14 04:27:14.226: INFO: Trying to get logs from node 10.0.1.99 pod downward-api-8abb8e55-ce35-4179-9462-89d32821510e container dapi-container: + STEP: delete the pod 01/14/23 04:27:14.239 + Jan 14 04:27:14.254: INFO: Waiting for pod downward-api-8abb8e55-ce35-4179-9462-89d32821510e to disappear + Jan 14 04:27:14.257: INFO: Pod downward-api-8abb8e55-ce35-4179-9462-89d32821510e no longer exists + [AfterEach] [sig-node] Downward API + test/e2e/framework/node/init/init.go:32 + Jan 14 04:27:14.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Downward API + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Downward API + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Downward API + tear down framework | framework.go:193 + STEP: Destroying namespace "downward-api-7586" for this suite. 
01/14/23 04:27:14.262 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:215 +[BeforeEach] [sig-storage] Projected secret + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:27:14.269 +Jan 14 04:27:14.269: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename projected 01/14/23 04:27:14.27 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:27:14.284 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:27:14.286 +[BeforeEach] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:31 +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:215 +STEP: Creating secret with name s-test-opt-del-f81d3d1e-e406-44f3-bf89-451034166f8e 01/14/23 04:27:14.292 +STEP: Creating secret with name s-test-opt-upd-89dfaf99-6a2d-48b7-b820-67d421c2d9d1 01/14/23 04:27:14.296 +STEP: Creating the pod 01/14/23 04:27:14.3 +Jan 14 04:27:14.310: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d827c232-932b-448b-b97b-ade27c8d24f0" in namespace "projected-6798" to be "running and ready" +Jan 14 04:27:14.317: INFO: Pod "pod-projected-secrets-d827c232-932b-448b-b97b-ade27c8d24f0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.94464ms +Jan 14 04:27:14.317: INFO: The phase of Pod pod-projected-secrets-d827c232-932b-448b-b97b-ade27c8d24f0 is Pending, waiting for it to be Running (with Ready = true) +Jan 14 04:27:16.321: INFO: Pod "pod-projected-secrets-d827c232-932b-448b-b97b-ade27c8d24f0": Phase="Running", Reason="", readiness=true. Elapsed: 2.011072896s +Jan 14 04:27:16.321: INFO: The phase of Pod pod-projected-secrets-d827c232-932b-448b-b97b-ade27c8d24f0 is Running (Ready = true) +Jan 14 04:27:16.321: INFO: Pod "pod-projected-secrets-d827c232-932b-448b-b97b-ade27c8d24f0" satisfied condition "running and ready" +STEP: Deleting secret s-test-opt-del-f81d3d1e-e406-44f3-bf89-451034166f8e 01/14/23 04:27:16.346 +STEP: Updating secret s-test-opt-upd-89dfaf99-6a2d-48b7-b820-67d421c2d9d1 01/14/23 04:27:16.35 +STEP: Creating secret with name s-test-opt-create-2a8db547-c785-4f84-a1d1-1ba01b074a74 01/14/23 04:27:16.354 +STEP: waiting to observe update in volume 01/14/23 04:27:16.357 +[AfterEach] [sig-storage] Projected secret + test/e2e/framework/node/init/init.go:32 +Jan 14 04:27:18.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected secret + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected secret + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-6798" for this suite. 
01/14/23 04:27:18.385 +------------------------------ +• [4.122 seconds] +[sig-storage] Projected secret +test/e2e/common/storage/framework.go:23 + optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:215 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected secret + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:27:14.269 + Jan 14 04:27:14.269: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename projected 01/14/23 04:27:14.27 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:27:14.284 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:27:14.286 + [BeforeEach] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:31 + [It] optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:215 + STEP: Creating secret with name s-test-opt-del-f81d3d1e-e406-44f3-bf89-451034166f8e 01/14/23 04:27:14.292 + STEP: Creating secret with name s-test-opt-upd-89dfaf99-6a2d-48b7-b820-67d421c2d9d1 01/14/23 04:27:14.296 + STEP: Creating the pod 01/14/23 04:27:14.3 + Jan 14 04:27:14.310: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d827c232-932b-448b-b97b-ade27c8d24f0" in namespace "projected-6798" to be "running and ready" + Jan 14 04:27:14.317: INFO: Pod "pod-projected-secrets-d827c232-932b-448b-b97b-ade27c8d24f0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.94464ms + Jan 14 04:27:14.317: INFO: The phase of Pod pod-projected-secrets-d827c232-932b-448b-b97b-ade27c8d24f0 is Pending, waiting for it to be Running (with Ready = true) + Jan 14 04:27:16.321: INFO: Pod "pod-projected-secrets-d827c232-932b-448b-b97b-ade27c8d24f0": Phase="Running", Reason="", readiness=true. Elapsed: 2.011072896s + Jan 14 04:27:16.321: INFO: The phase of Pod pod-projected-secrets-d827c232-932b-448b-b97b-ade27c8d24f0 is Running (Ready = true) + Jan 14 04:27:16.321: INFO: Pod "pod-projected-secrets-d827c232-932b-448b-b97b-ade27c8d24f0" satisfied condition "running and ready" + STEP: Deleting secret s-test-opt-del-f81d3d1e-e406-44f3-bf89-451034166f8e 01/14/23 04:27:16.346 + STEP: Updating secret s-test-opt-upd-89dfaf99-6a2d-48b7-b820-67d421c2d9d1 01/14/23 04:27:16.35 + STEP: Creating secret with name s-test-opt-create-2a8db547-c785-4f84-a1d1-1ba01b074a74 01/14/23 04:27:16.354 + STEP: waiting to observe update in volume 01/14/23 04:27:16.357 + [AfterEach] [sig-storage] Projected secret + test/e2e/framework/node/init/init.go:32 + Jan 14 04:27:18.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected secret + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected secret + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-6798" for this suite. 
01/14/23 04:27:18.385 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:169 +[BeforeEach] [sig-node] Probing container + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:27:18.392 +Jan 14 04:27:18.392: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename container-probe 01/14/23 04:27:18.393 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:27:18.406 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:27:18.408 +[BeforeEach] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 +[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:169 +STEP: Creating pod liveness-6355edcd-fea4-41ef-a299-fba52e83cce5 in namespace container-probe-5104 01/14/23 04:27:18.41 +Jan 14 04:27:18.419: INFO: Waiting up to 5m0s for pod "liveness-6355edcd-fea4-41ef-a299-fba52e83cce5" in namespace "container-probe-5104" to be "not pending" +Jan 14 04:27:18.421: INFO: Pod "liveness-6355edcd-fea4-41ef-a299-fba52e83cce5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.772835ms +Jan 14 04:27:20.427: INFO: Pod "liveness-6355edcd-fea4-41ef-a299-fba52e83cce5": Phase="Running", Reason="", readiness=true. Elapsed: 2.00827748s +Jan 14 04:27:20.427: INFO: Pod "liveness-6355edcd-fea4-41ef-a299-fba52e83cce5" satisfied condition "not pending" +Jan 14 04:27:20.427: INFO: Started pod liveness-6355edcd-fea4-41ef-a299-fba52e83cce5 in namespace container-probe-5104 +STEP: checking the pod's current state and verifying that restartCount is present 01/14/23 04:27:20.427 +Jan 14 04:27:20.430: INFO: Initial restart count of pod liveness-6355edcd-fea4-41ef-a299-fba52e83cce5 is 0 +Jan 14 04:27:40.483: INFO: Restart count of pod container-probe-5104/liveness-6355edcd-fea4-41ef-a299-fba52e83cce5 is now 1 (20.052882531s elapsed) +STEP: deleting the pod 01/14/23 04:27:40.483 +[AfterEach] [sig-node] Probing container + test/e2e/framework/node/init/init.go:32 +Jan 14 04:27:40.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Probing container + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Probing container + tear down framework | framework.go:193 +STEP: Destroying namespace "container-probe-5104" for this suite. 
01/14/23 04:27:40.502 +------------------------------ +• [SLOW TEST] [22.115 seconds] +[sig-node] Probing container +test/e2e/common/node/framework.go:23 + should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:169 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Probing container + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:27:18.392 + Jan 14 04:27:18.392: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename container-probe 01/14/23 04:27:18.393 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:27:18.406 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:27:18.408 + [BeforeEach] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 + [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:169 + STEP: Creating pod liveness-6355edcd-fea4-41ef-a299-fba52e83cce5 in namespace container-probe-5104 01/14/23 04:27:18.41 + Jan 14 04:27:18.419: INFO: Waiting up to 5m0s for pod "liveness-6355edcd-fea4-41ef-a299-fba52e83cce5" in namespace "container-probe-5104" to be "not pending" + Jan 14 04:27:18.421: INFO: Pod "liveness-6355edcd-fea4-41ef-a299-fba52e83cce5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.772835ms + Jan 14 04:27:20.427: INFO: Pod "liveness-6355edcd-fea4-41ef-a299-fba52e83cce5": Phase="Running", Reason="", readiness=true. Elapsed: 2.00827748s + Jan 14 04:27:20.427: INFO: Pod "liveness-6355edcd-fea4-41ef-a299-fba52e83cce5" satisfied condition "not pending" + Jan 14 04:27:20.427: INFO: Started pod liveness-6355edcd-fea4-41ef-a299-fba52e83cce5 in namespace container-probe-5104 + STEP: checking the pod's current state and verifying that restartCount is present 01/14/23 04:27:20.427 + Jan 14 04:27:20.430: INFO: Initial restart count of pod liveness-6355edcd-fea4-41ef-a299-fba52e83cce5 is 0 + Jan 14 04:27:40.483: INFO: Restart count of pod container-probe-5104/liveness-6355edcd-fea4-41ef-a299-fba52e83cce5 is now 1 (20.052882531s elapsed) + STEP: deleting the pod 01/14/23 04:27:40.483 + [AfterEach] [sig-node] Probing container + test/e2e/framework/node/init/init.go:32 + Jan 14 04:27:40.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Probing container + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Probing container + tear down framework | framework.go:193 + STEP: Destroying namespace "container-probe-5104" for this suite. 01/14/23 04:27:40.502 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] LimitRange + should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] + test/e2e/scheduling/limit_range.go:61 +[BeforeEach] [sig-scheduling] LimitRange + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:27:40.508 +Jan 14 04:27:40.508: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename limitrange 01/14/23 04:27:40.509 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:27:40.523 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:27:40.525 +[BeforeEach] [sig-scheduling] LimitRange + test/e2e/framework/metrics/init/init.go:31 +[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] + test/e2e/scheduling/limit_range.go:61 +STEP: Creating a LimitRange 01/14/23 04:27:40.527 +STEP: Setting up watch 01/14/23 04:27:40.528 +STEP: Submitting a LimitRange 01/14/23 04:27:40.63 +STEP: Verifying LimitRange creation was observed 01/14/23 04:27:40.635 +STEP: Fetching the LimitRange to ensure it has proper values 01/14/23 04:27:40.636 +Jan 14 04:27:40.638: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] +Jan 14 04:27:40.638: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] +STEP: Creating a Pod with no resource requirements 01/14/23 04:27:40.638 +STEP: Ensuring Pod has resource requirements applied from LimitRange 01/14/23 04:27:40.645 +Jan 14 04:27:40.648: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] +Jan 14 04:27:40.648: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] +STEP: Creating a Pod with partial resource requirements 01/14/23 04:27:40.648 +STEP: Ensuring Pod has merged resource requirements applied from LimitRange 01/14/23 04:27:40.657 +Jan 14 04:27:40.660: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] +Jan 14 04:27:40.660: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] +STEP: Failing to create a Pod with less than min resources 01/14/23 04:27:40.66 +STEP: Failing to create a Pod with more than max resources 01/14/23 04:27:40.663 +STEP: 
Updating a LimitRange 01/14/23 04:27:40.666 +STEP: Verifying LimitRange updating is effective 01/14/23 04:27:40.671 +STEP: Creating a Pod with less than former min resources 01/14/23 04:27:42.677 +STEP: Failing to create a Pod with more than max resources 01/14/23 04:27:42.685 +STEP: Deleting a LimitRange 01/14/23 04:27:42.688 +STEP: Verifying the LimitRange was deleted 01/14/23 04:27:42.693 +Jan 14 04:27:47.699: INFO: limitRange is already deleted +STEP: Creating a Pod with more than former max resources 01/14/23 04:27:47.699 +[AfterEach] [sig-scheduling] LimitRange + test/e2e/framework/node/init/init.go:32 +Jan 14 04:27:47.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-scheduling] LimitRange + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-scheduling] LimitRange + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-scheduling] LimitRange + tear down framework | framework.go:193 +STEP: Destroying namespace "limitrange-1328" for this suite. 01/14/23 04:27:47.714 +------------------------------ +• [SLOW TEST] [7.212 seconds] +[sig-scheduling] LimitRange +test/e2e/scheduling/framework.go:40 + should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] + test/e2e/scheduling/limit_range.go:61 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-scheduling] LimitRange + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:27:40.508 + Jan 14 04:27:40.508: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename limitrange 01/14/23 04:27:40.509 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:27:40.523 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:27:40.525 + [BeforeEach] [sig-scheduling] LimitRange + test/e2e/framework/metrics/init/init.go:31 + [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] + test/e2e/scheduling/limit_range.go:61 + STEP: Creating a LimitRange 01/14/23 04:27:40.527 + STEP: Setting up watch 01/14/23 04:27:40.528 + STEP: Submitting a LimitRange 01/14/23 04:27:40.63 + STEP: Verifying LimitRange creation was observed 01/14/23 04:27:40.635 + STEP: Fetching the LimitRange to ensure it has proper values 01/14/23 04:27:40.636 + Jan 14 04:27:40.638: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] + Jan 14 04:27:40.638: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] + STEP: Creating a Pod with no resource requirements 01/14/23 04:27:40.638 + STEP: Ensuring Pod has resource requirements applied from LimitRange 01/14/23 04:27:40.645 + Jan 14 04:27:40.648: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] + Jan 14 04:27:40.648: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] + STEP: Creating a Pod with partial resource requirements 01/14/23 04:27:40.648 + STEP: Ensuring Pod has merged resource requirements applied from LimitRange 01/14/23 04:27:40.657 + Jan 14 04:27:40.660: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] + Jan 14 04:27:40.660: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] + STEP: Failing to create a Pod with less than min resources 01/14/23 04:27:40.66 + STEP: Failing to create a Pod with more than max resources 01/14/23 04:27:40.663 + STEP: Updating a LimitRange 01/14/23 04:27:40.666 + STEP: Verifying LimitRange updating is effective 01/14/23 04:27:40.671 + STEP: Creating a Pod with less than former min resources 01/14/23 04:27:42.677 + STEP: Failing to create a Pod with more than max resources 01/14/23 04:27:42.685 + STEP: Deleting a LimitRange 01/14/23 04:27:42.688 + STEP: Verifying the LimitRange was deleted 01/14/23 04:27:42.693 + Jan 14 04:27:47.699: INFO: limitRange is already deleted + STEP: Creating a Pod with more than former max resources 01/14/23 04:27:47.699 + [AfterEach] [sig-scheduling] LimitRange + test/e2e/framework/node/init/init.go:32 + Jan 14 04:27:47.709: INFO: Waiting up to 3m0s for all (but 0) nodes 
to be ready + [DeferCleanup (Each)] [sig-scheduling] LimitRange + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-scheduling] LimitRange + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-scheduling] LimitRange + tear down framework | framework.go:193 + STEP: Destroying namespace "limitrange-1328" for this suite. 01/14/23 04:27:47.714 + << End Captured GinkgoWriter Output +------------------------------ +[sig-node] ConfigMap + should run through a ConfigMap lifecycle [Conformance] + test/e2e/common/node/configmap.go:169 +[BeforeEach] [sig-node] ConfigMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:27:47.72 +Jan 14 04:27:47.720: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename configmap 01/14/23 04:27:47.721 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:27:47.733 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:27:47.735 +[BeforeEach] [sig-node] ConfigMap + test/e2e/framework/metrics/init/init.go:31 +[It] should run through a ConfigMap lifecycle [Conformance] + test/e2e/common/node/configmap.go:169 +STEP: creating a ConfigMap 01/14/23 04:27:47.738 +STEP: fetching the ConfigMap 01/14/23 04:27:47.743 +STEP: patching the ConfigMap 01/14/23 04:27:47.746 +STEP: listing all ConfigMaps in all namespaces with a label selector 01/14/23 04:27:47.751 +STEP: deleting the ConfigMap by collection with a label selector 01/14/23 04:27:47.755 +STEP: listing all ConfigMaps in test namespace 01/14/23 04:27:47.762 +[AfterEach] [sig-node] ConfigMap + test/e2e/framework/node/init/init.go:32 +Jan 14 04:27:47.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] ConfigMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] ConfigMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] ConfigMap + tear down framework | framework.go:193 +STEP: Destroying namespace "configmap-9847" for this suite. 
01/14/23 04:27:47.768 +------------------------------ +• [0.053 seconds] +[sig-node] ConfigMap +test/e2e/common/node/framework.go:23 + should run through a ConfigMap lifecycle [Conformance] + test/e2e/common/node/configmap.go:169 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] ConfigMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:27:47.72 + Jan 14 04:27:47.720: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename configmap 01/14/23 04:27:47.721 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:27:47.733 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:27:47.735 + [BeforeEach] [sig-node] ConfigMap + test/e2e/framework/metrics/init/init.go:31 + [It] should run through a ConfigMap lifecycle [Conformance] + test/e2e/common/node/configmap.go:169 + STEP: creating a ConfigMap 01/14/23 04:27:47.738 + STEP: fetching the ConfigMap 01/14/23 04:27:47.743 + STEP: patching the ConfigMap 01/14/23 04:27:47.746 + STEP: listing all ConfigMaps in all namespaces with a label selector 01/14/23 04:27:47.751 + STEP: deleting the ConfigMap by collection with a label selector 01/14/23 04:27:47.755 + STEP: listing all ConfigMaps in test namespace 01/14/23 04:27:47.762 + [AfterEach] [sig-node] ConfigMap + test/e2e/framework/node/init/init.go:32 + Jan 14 04:27:47.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] ConfigMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] ConfigMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] ConfigMap + tear down framework | framework.go:193 + STEP: Destroying namespace "configmap-9847" for this suite. 01/14/23 04:27:47.768 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that NodeSelector is respected if not matching [Conformance] + test/e2e/scheduling/predicates.go:443 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:27:47.775 +Jan 14 04:27:47.775: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename sched-pred 01/14/23 04:27:47.776 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:27:47.79 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:27:47.792 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:97 +Jan 14 04:27:47.794: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Jan 14 04:27:47.802: INFO: Waiting for terminating namespaces to be deleted... 
+Jan 14 04:27:47.804: INFO: +Logging pods the apiserver thinks is on node 10.0.1.106 before test +Jan 14 04:27:47.814: INFO: kubernetes-proxy-544fb566b4-fh64j from default started at 2023-01-14 03:20:34 +0000 UTC (1 container statuses recorded) +Jan 14 04:27:47.814: INFO: Container kubernetes-proxy ready: false, restart count 18 +Jan 14 04:27:47.814: INFO: csi-cbs-node-5wf2s from kube-system started at 2023-01-13 08:11:20 +0000 UTC (2 container statuses recorded) +Jan 14 04:27:47.814: INFO: Container cbs-csi ready: true, restart count 1 +Jan 14 04:27:47.814: INFO: Container driver-registrar ready: true, restart count 0 +Jan 14 04:27:47.814: INFO: ip-masq-agent-rx9k6 from kube-system started at 2023-01-13 08:11:20 +0000 UTC (1 container statuses recorded) +Jan 14 04:27:47.814: INFO: Container ip-masq-agent ready: true, restart count 0 +Jan 14 04:27:47.814: INFO: kube-proxy-s6xxg from kube-system started at 2023-01-13 08:11:20 +0000 UTC (1 container statuses recorded) +Jan 14 04:27:47.814: INFO: Container kube-proxy ready: true, restart count 0 +Jan 14 04:27:47.814: INFO: tke-bridge-agent-frbcm from kube-system started at 2023-01-13 08:11:20 +0000 UTC (1 container statuses recorded) +Jan 14 04:27:47.814: INFO: Container tke-bridge-agent ready: true, restart count 1 +Jan 14 04:27:47.814: INFO: tke-cni-agent-nv7pn from kube-system started at 2023-01-13 08:11:20 +0000 UTC (1 container statuses recorded) +Jan 14 04:27:47.814: INFO: Container tke-cni-agent ready: true, restart count 0 +Jan 14 04:27:47.814: INFO: tke-monitor-agent-xhdhg from kube-system started at 2023-01-13 08:11:20 +0000 UTC (1 container statuses recorded) +Jan 14 04:27:47.814: INFO: Container tke-monitor-agent ready: true, restart count 0 +Jan 14 04:27:47.814: INFO: pfpod from limitrange-1328 started at 2023-01-14 04:27:42 +0000 UTC (1 container statuses recorded) +Jan 14 04:27:47.814: INFO: Container pause ready: true, restart count 0 +Jan 14 04:27:47.814: INFO: sonobuoy-systemd-logs-daemon-set-0fc25bd2d6604eac-jkjnf from sonobuoy started at 2023-01-14 03:34:54 +0000 UTC (2 container statuses recorded) +Jan 14 04:27:47.814: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jan 14 04:27:47.814: INFO: Container systemd-logs ready: true, restart count 0 +Jan 14 04:27:47.814: INFO: ss2-1 from statefulset-8862 started at 2023-01-14 02:33:11 +0000 UTC (1 container statuses recorded) +Jan 14 04:27:47.814: INFO: Container webserver ready: true, restart count 0 +Jan 14 04:27:47.814: INFO: +Logging pods the apiserver thinks is on node 10.0.1.212 before test +Jan 14 04:27:47.825: INFO: kubernetes-proxy-544fb566b4-zpvz8 from default started at 2023-01-14 03:20:34 +0000 UTC (1 container statuses recorded) +Jan 14 04:27:47.825: INFO: Container kubernetes-proxy ready: false, restart count 18 +Jan 14 04:27:47.825: INFO: csi-cbs-node-ddpcx from kube-system started at 2023-01-13 08:11:16 +0000 UTC (2 container statuses recorded) +Jan 14 04:27:47.825: INFO: Container cbs-csi ready: true, restart count 0 +Jan 14 04:27:47.825: INFO: Container driver-registrar ready: true, restart count 0 +Jan 14 04:27:47.825: INFO: ip-masq-agent-8wxxs from kube-system started at 2023-01-13 08:11:16 +0000 UTC (1 container statuses recorded) +Jan 14 04:27:47.825: INFO: Container ip-masq-agent ready: true, restart count 0 +Jan 14 04:27:47.825: INFO: kube-proxy-npt42 from kube-system started at 2023-01-13 08:11:16 +0000 UTC (1 container statuses recorded) +Jan 14 04:27:47.825: INFO: Container kube-proxy ready: true, restart count 0 +Jan 14 04:27:47.825: 
INFO: tke-bridge-agent-4rffd from kube-system started at 2023-01-13 08:11:16 +0000 UTC (1 container statuses recorded) +Jan 14 04:27:47.825: INFO: Container tke-bridge-agent ready: true, restart count 1 +Jan 14 04:27:47.825: INFO: tke-cni-agent-xmggs from kube-system started at 2023-01-13 08:11:16 +0000 UTC (1 container statuses recorded) +Jan 14 04:27:47.825: INFO: Container tke-cni-agent ready: true, restart count 0 +Jan 14 04:27:47.825: INFO: tke-monitor-agent-6gtt6 from kube-system started at 2023-01-13 08:11:16 +0000 UTC (1 container statuses recorded) +Jan 14 04:27:47.825: INFO: Container tke-monitor-agent ready: true, restart count 0 +Jan 14 04:27:47.825: INFO: sonobuoy-e2e-job-1b1a46fb40e34267 from sonobuoy started at 2023-01-14 03:34:54 +0000 UTC (2 container statuses recorded) +Jan 14 04:27:47.825: INFO: Container e2e ready: true, restart count 0 +Jan 14 04:27:47.825: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jan 14 04:27:47.825: INFO: sonobuoy-systemd-logs-daemon-set-0fc25bd2d6604eac-lgxk7 from sonobuoy started at 2023-01-14 03:34:54 +0000 UTC (2 container statuses recorded) +Jan 14 04:27:47.825: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jan 14 04:27:47.825: INFO: Container systemd-logs ready: true, restart count 0 +Jan 14 04:27:47.825: INFO: ss2-2 from statefulset-8862 started at 2023-01-14 02:33:30 +0000 UTC (1 container statuses recorded) +Jan 14 04:27:47.825: INFO: Container webserver ready: true, restart count 0 +Jan 14 04:27:47.825: INFO: +Logging pods the apiserver thinks is on node 10.0.1.99 before test +Jan 14 04:27:47.835: INFO: csi-cbs-node-q4l9b from kube-system started at 2023-01-13 08:11:43 +0000 UTC (2 container statuses recorded) +Jan 14 04:27:47.835: INFO: Container cbs-csi ready: true, restart count 0 +Jan 14 04:27:47.835: INFO: Container driver-registrar ready: true, restart count 0 +Jan 14 04:27:47.835: INFO: ip-masq-agent-kmrrk from kube-system started at 2023-01-13 08:11:43 +0000 UTC (1 container statuses recorded) +Jan 14 04:27:47.835: INFO: Container ip-masq-agent ready: true, restart count 0 +Jan 14 04:27:47.835: INFO: kube-proxy-g4qjh from kube-system started at 2023-01-13 08:11:43 +0000 UTC (1 container statuses recorded) +Jan 14 04:27:47.835: INFO: Container kube-proxy ready: true, restart count 0 +Jan 14 04:27:47.835: INFO: tke-bridge-agent-hzv6c from kube-system started at 2023-01-13 08:11:43 +0000 UTC (1 container statuses recorded) +Jan 14 04:27:47.835: INFO: Container tke-bridge-agent ready: true, restart count 1 +Jan 14 04:27:47.835: INFO: tke-cni-agent-7mk9b from kube-system started at 2023-01-13 08:11:43 +0000 UTC (1 container statuses recorded) +Jan 14 04:27:47.835: INFO: Container tke-cni-agent ready: true, restart count 0 +Jan 14 04:27:47.835: INFO: tke-monitor-agent-g27mp from kube-system started at 2023-01-13 08:11:43 +0000 UTC (1 container statuses recorded) +Jan 14 04:27:47.835: INFO: Container tke-monitor-agent ready: true, restart count 0 +Jan 14 04:27:47.835: INFO: pod-partial-resources from limitrange-1328 started at 2023-01-14 04:27:40 +0000 UTC (1 container statuses recorded) +Jan 14 04:27:47.835: INFO: Container pause ready: true, restart count 0 +Jan 14 04:27:47.835: INFO: sonobuoy from sonobuoy started at 2023-01-14 03:34:53 +0000 UTC (1 container statuses recorded) +Jan 14 04:27:47.835: INFO: Container kube-sonobuoy ready: true, restart count 0 +Jan 14 04:27:47.835: INFO: sonobuoy-systemd-logs-daemon-set-0fc25bd2d6604eac-nwh62 from sonobuoy started at 2023-01-14 03:34:54 +0000 UTC (2 
container statuses recorded) +Jan 14 04:27:47.835: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jan 14 04:27:47.835: INFO: Container systemd-logs ready: true, restart count 0 +Jan 14 04:27:47.835: INFO: ss2-0 from statefulset-8862 started at 2023-01-14 02:33:10 +0000 UTC (1 container statuses recorded) +Jan 14 04:27:47.835: INFO: Container webserver ready: true, restart count 0 +[It] validates that NodeSelector is respected if not matching [Conformance] + test/e2e/scheduling/predicates.go:443 +STEP: Trying to schedule Pod with nonempty NodeSelector. 01/14/23 04:27:47.835 +STEP: Considering event: +Type = [Warning], Name = [restricted-pod.173a126db4f30eb9], Reason = [FailedScheduling], Message = [0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling..] 01/14/23 04:27:53.923 +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/node/init/init.go:32 +Jan 14 04:27:54.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:88 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "sched-pred-7219" for this suite. 01/14/23 04:27:54.927 +------------------------------ +• [SLOW TEST] [7.157 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +test/e2e/scheduling/framework.go:40 + validates that NodeSelector is respected if not matching [Conformance] + test/e2e/scheduling/predicates.go:443 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:27:47.775 + Jan 14 04:27:47.775: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename sched-pred 01/14/23 04:27:47.776 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:27:47.79 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:27:47.792 + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:97 + Jan 14 04:27:47.794: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready + Jan 14 04:27:47.802: INFO: Waiting for terminating namespaces to be deleted... 
+ Jan 14 04:27:47.804: INFO: 
+ Logging pods the apiserver thinks is on node 10.0.1.106 before test
+ Jan 14 04:27:47.814: INFO: kubernetes-proxy-544fb566b4-fh64j from default started at 2023-01-14 03:20:34 +0000 UTC (1 container statuses recorded)
+ Jan 14 04:27:47.814: INFO: Container kubernetes-proxy ready: false, restart count 18
+ Jan 14 04:27:47.814: INFO: csi-cbs-node-5wf2s from kube-system started at 2023-01-13 08:11:20 +0000 UTC (2 container statuses recorded)
+ Jan 14 04:27:47.814: INFO: Container cbs-csi ready: true, restart count 1
+ Jan 14 04:27:47.814: INFO: Container driver-registrar ready: true, restart count 0
+ Jan 14 04:27:47.814: INFO: ip-masq-agent-rx9k6 from kube-system started at 2023-01-13 08:11:20 +0000 UTC (1 container statuses recorded)
+ Jan 14 04:27:47.814: INFO: Container ip-masq-agent ready: true, restart count 0
+ Jan 14 04:27:47.814: INFO: kube-proxy-s6xxg from kube-system started at 2023-01-13 08:11:20 +0000 UTC (1 container statuses recorded)
+ Jan 14 04:27:47.814: INFO: Container kube-proxy ready: true, restart count 0
+ Jan 14 04:27:47.814: INFO: tke-bridge-agent-frbcm from kube-system started at 2023-01-13 08:11:20 +0000 UTC (1 container statuses recorded)
+ Jan 14 04:27:47.814: INFO: Container tke-bridge-agent ready: true, restart count 1
+ Jan 14 04:27:47.814: INFO: tke-cni-agent-nv7pn from kube-system started at 2023-01-13 08:11:20 +0000 UTC (1 container statuses recorded)
+ Jan 14 04:27:47.814: INFO: Container tke-cni-agent ready: true, restart count 0
+ Jan 14 04:27:47.814: INFO: tke-monitor-agent-xhdhg from kube-system started at 2023-01-13 08:11:20 +0000 UTC (1 container statuses recorded)
+ Jan 14 04:27:47.814: INFO: Container tke-monitor-agent ready: true, restart count 0
+ Jan 14 04:27:47.814: INFO: pfpod from limitrange-1328 started at 2023-01-14 04:27:42 +0000 UTC (1 container statuses recorded)
+ Jan 14 04:27:47.814: INFO: Container pause ready: true, restart count 0
+ Jan 14 04:27:47.814: INFO: sonobuoy-systemd-logs-daemon-set-0fc25bd2d6604eac-jkjnf from sonobuoy started at 2023-01-14 03:34:54 +0000 UTC (2 container statuses recorded)
+ Jan 14 04:27:47.814: INFO: Container sonobuoy-worker ready: true, restart count 0
+ Jan 14 04:27:47.814: INFO: Container systemd-logs ready: true, restart count 0
+ Jan 14 04:27:47.814: INFO: ss2-1 from statefulset-8862 started at 2023-01-14 02:33:11 +0000 UTC (1 container statuses recorded)
+ Jan 14 04:27:47.814: INFO: Container webserver ready: true, restart count 0
+ Jan 14 04:27:47.814: INFO: 
+ Logging pods the apiserver thinks is on node 10.0.1.212 before test
+ Jan 14 04:27:47.825: INFO: kubernetes-proxy-544fb566b4-zpvz8 from default started at 2023-01-14 03:20:34 +0000 UTC (1 container statuses recorded)
+ Jan 14 04:27:47.825: INFO: Container kubernetes-proxy ready: false, restart count 18
+ Jan 14 04:27:47.825: INFO: csi-cbs-node-ddpcx from kube-system started at 2023-01-13 08:11:16 +0000 UTC (2 container statuses recorded)
+ Jan 14 04:27:47.825: INFO: Container cbs-csi ready: true, restart count 0
+ Jan 14 04:27:47.825: INFO: Container driver-registrar ready: true, restart count 0
+ Jan 14 04:27:47.825: INFO: ip-masq-agent-8wxxs from kube-system started at 2023-01-13 08:11:16 +0000 UTC (1 container statuses recorded)
+ Jan 14 04:27:47.825: INFO: Container ip-masq-agent ready: true, restart count 0
+ Jan 14 04:27:47.825: INFO: kube-proxy-npt42 from kube-system started at 2023-01-13 08:11:16 +0000 UTC (1 container statuses recorded)
+ Jan 14 04:27:47.825: INFO: Container kube-proxy ready: true, restart count 0
+ Jan 14 04:27:47.825: INFO: tke-bridge-agent-4rffd from kube-system started at 2023-01-13 08:11:16 +0000 UTC (1 container statuses recorded)
+ Jan 14 04:27:47.825: INFO: Container tke-bridge-agent ready: true, restart count 1
+ Jan 14 04:27:47.825: INFO: tke-cni-agent-xmggs from kube-system started at 2023-01-13 08:11:16 +0000 UTC (1 container statuses recorded)
+ Jan 14 04:27:47.825: INFO: Container tke-cni-agent ready: true, restart count 0
+ Jan 14 04:27:47.825: INFO: tke-monitor-agent-6gtt6 from kube-system started at 2023-01-13 08:11:16 +0000 UTC (1 container statuses recorded)
+ Jan 14 04:27:47.825: INFO: Container tke-monitor-agent ready: true, restart count 0
+ Jan 14 04:27:47.825: INFO: sonobuoy-e2e-job-1b1a46fb40e34267 from sonobuoy started at 2023-01-14 03:34:54 +0000 UTC (2 container statuses recorded)
+ Jan 14 04:27:47.825: INFO: Container e2e ready: true, restart count 0
+ Jan 14 04:27:47.825: INFO: Container sonobuoy-worker ready: true, restart count 0
+ Jan 14 04:27:47.825: INFO: sonobuoy-systemd-logs-daemon-set-0fc25bd2d6604eac-lgxk7 from sonobuoy started at 2023-01-14 03:34:54 +0000 UTC (2 container statuses recorded)
+ Jan 14 04:27:47.825: INFO: Container sonobuoy-worker ready: true, restart count 0
+ Jan 14 04:27:47.825: INFO: Container systemd-logs ready: true, restart count 0
+ Jan 14 04:27:47.825: INFO: ss2-2 from statefulset-8862 started at 2023-01-14 02:33:30 +0000 UTC (1 container statuses recorded)
+ Jan 14 04:27:47.825: INFO: Container webserver ready: true, restart count 0
+ Jan 14 04:27:47.825: INFO: 
+ Logging pods the apiserver thinks is on node 10.0.1.99 before test
+ Jan 14 04:27:47.835: INFO: csi-cbs-node-q4l9b from kube-system started at 2023-01-13 08:11:43 +0000 UTC (2 container statuses recorded)
+ Jan 14 04:27:47.835: INFO: Container cbs-csi ready: true, restart count 0
+ Jan 14 04:27:47.835: INFO: Container driver-registrar ready: true, restart count 0
+ Jan 14 04:27:47.835: INFO: ip-masq-agent-kmrrk from kube-system started at 2023-01-13 08:11:43 +0000 UTC (1 container statuses recorded)
+ Jan 14 04:27:47.835: INFO: Container ip-masq-agent ready: true, restart count 0
+ Jan 14 04:27:47.835: INFO: kube-proxy-g4qjh from kube-system started at 2023-01-13 08:11:43 +0000 UTC (1 container statuses recorded)
+ Jan 14 04:27:47.835: INFO: Container kube-proxy ready: true, restart count 0
+ Jan 14 04:27:47.835: INFO: tke-bridge-agent-hzv6c from kube-system started at 2023-01-13 08:11:43 +0000 UTC (1 container statuses recorded)
+ Jan 14 04:27:47.835: INFO: Container tke-bridge-agent ready: true, restart count 1
+ Jan 14 04:27:47.835: INFO: tke-cni-agent-7mk9b from kube-system started at 2023-01-13 08:11:43 +0000 UTC (1 container statuses recorded)
+ Jan 14 04:27:47.835: INFO: Container tke-cni-agent ready: true, restart count 0
+ Jan 14 04:27:47.835: INFO: tke-monitor-agent-g27mp from kube-system started at 2023-01-13 08:11:43 +0000 UTC (1 container statuses recorded)
+ Jan 14 04:27:47.835: INFO: Container tke-monitor-agent ready: true, restart count 0
+ Jan 14 04:27:47.835: INFO: pod-partial-resources from limitrange-1328 started at 2023-01-14 04:27:40 +0000 UTC (1 container statuses recorded)
+ Jan 14 04:27:47.835: INFO: Container pause ready: true, restart count 0
+ Jan 14 04:27:47.835: INFO: sonobuoy from sonobuoy started at 2023-01-14 03:34:53 +0000 UTC (1 container statuses recorded)
+ Jan 14 04:27:47.835: INFO: Container kube-sonobuoy ready: true, restart count 0
+ Jan 14 04:27:47.835: INFO: sonobuoy-systemd-logs-daemon-set-0fc25bd2d6604eac-nwh62 from sonobuoy started at 2023-01-14 03:34:54 +0000 UTC (2 container statuses recorded)
+ Jan 14 04:27:47.835: INFO: Container sonobuoy-worker ready: true, restart count 0
+ Jan 14 04:27:47.835: INFO: Container systemd-logs ready: true, restart count 0
+ Jan 14 04:27:47.835: INFO: ss2-0 from statefulset-8862 started at 2023-01-14 02:33:10 +0000 UTC (1 container statuses recorded)
+ Jan 14 04:27:47.835: INFO: Container webserver ready: true, restart count 0
+ [It] validates that NodeSelector is respected if not matching [Conformance]
+ test/e2e/scheduling/predicates.go:443
+ STEP: Trying to schedule Pod with nonempty NodeSelector. 01/14/23 04:27:47.835
+ STEP: Considering event: 
+ Type = [Warning], Name = [restricted-pod.173a126db4f30eb9], Reason = [FailedScheduling], Message = [0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling..] 01/14/23 04:27:53.923
+ [AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
+ test/e2e/framework/node/init/init.go:32
+ Jan 14 04:27:54.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+ [AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
+ test/e2e/scheduling/predicates.go:88
+ [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial]
+ test/e2e/framework/metrics/init/init.go:33
+ [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial]
+ dump namespaces | framework.go:196
+ [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial]
+ tear down framework | framework.go:193
+ STEP: Destroying namespace "sched-pred-7219" for this suite. 01/14/23 04:27:54.927
+ << End Captured GinkgoWriter Output
+------------------------------
+SSSSS
+------------------------------
+[sig-storage] ConfigMap
+ optional updates should be reflected in volume [NodeConformance] [Conformance]
+ test/e2e/common/storage/configmap_volume.go:240
+[BeforeEach] [sig-storage] ConfigMap
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 04:27:54.933
+Jan 14 04:27:54.933: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename configmap 01/14/23 04:27:54.934
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:27:54.948
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:27:54.95
+[BeforeEach] [sig-storage] ConfigMap
+ test/e2e/framework/metrics/init/init.go:31
+[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
+ test/e2e/common/storage/configmap_volume.go:240
+STEP: Creating configMap with name cm-test-opt-del-1c3285de-0462-40fd-83e5-a1d27307d11c 01/14/23 04:27:54.957
+STEP: Creating configMap with name cm-test-opt-upd-2005a0d8-0432-4fbe-8f53-13192fe21238 01/14/23 04:27:54.96
+STEP: Creating the pod 01/14/23 04:27:54.965
+Jan 14 04:27:54.974: INFO: Waiting up to 5m0s for pod "pod-configmaps-65dbc6d7-a494-4695-ae3e-437b74d31c3e" in namespace "configmap-7197" to be "running and ready"
+Jan 14 04:27:54.977: INFO: Pod "pod-configmaps-65dbc6d7-a494-4695-ae3e-437b74d31c3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.924388ms
+Jan 14 04:27:54.977: INFO: The phase of Pod pod-configmaps-65dbc6d7-a494-4695-ae3e-437b74d31c3e is Pending, waiting for it to be Running (with Ready = true)
+Jan 14 04:27:56.981: INFO: Pod "pod-configmaps-65dbc6d7-a494-4695-ae3e-437b74d31c3e": Phase="Running", Reason="", readiness=true. Elapsed: 2.007489052s
+Jan 14 04:27:56.981: INFO: The phase of Pod pod-configmaps-65dbc6d7-a494-4695-ae3e-437b74d31c3e is Running (Ready = true)
+Jan 14 04:27:56.981: INFO: Pod "pod-configmaps-65dbc6d7-a494-4695-ae3e-437b74d31c3e" satisfied condition "running and ready"
+STEP: Deleting configmap cm-test-opt-del-1c3285de-0462-40fd-83e5-a1d27307d11c 01/14/23 04:27:57.001
+STEP: Updating configmap cm-test-opt-upd-2005a0d8-0432-4fbe-8f53-13192fe21238 01/14/23 04:27:57.006
+STEP: Creating configMap with name cm-test-opt-create-f041939b-63dd-433c-a246-bf27da88ef5a 01/14/23 04:27:57.01
+STEP: waiting to observe update in volume 01/14/23 04:27:57.015
+[AfterEach] [sig-storage] ConfigMap
+ test/e2e/framework/node/init/init.go:32
+Jan 14 04:29:17.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-storage] ConfigMap
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-storage] ConfigMap
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-storage] ConfigMap
+ tear down framework | framework.go:193
+STEP: Destroying namespace "configmap-7197" for this suite. 01/14/23 04:29:17.373
+------------------------------
+• [SLOW TEST] [82.446 seconds]
+[sig-storage] ConfigMap
+test/e2e/common/storage/framework.go:23
+ optional updates should be reflected in volume [NodeConformance] [Conformance]
+ test/e2e/common/storage/configmap_volume.go:240
+
+ Begin Captured GinkgoWriter Output >>
+ [BeforeEach] [sig-storage] ConfigMap
+ set up framework | framework.go:178
+ STEP: Creating a kubernetes client 01/14/23 04:27:54.933
+ Jan 14 04:27:54.933: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+ STEP: Building a namespace api object, basename configmap 01/14/23 04:27:54.934
+ STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:27:54.948
+ STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:27:54.95
+ [BeforeEach] [sig-storage] ConfigMap
+ test/e2e/framework/metrics/init/init.go:31
+ [It] optional updates should be reflected in volume [NodeConformance] [Conformance]
+ test/e2e/common/storage/configmap_volume.go:240
+ STEP: Creating configMap with name cm-test-opt-del-1c3285de-0462-40fd-83e5-a1d27307d11c 01/14/23 04:27:54.957
+ STEP: Creating configMap with name cm-test-opt-upd-2005a0d8-0432-4fbe-8f53-13192fe21238 01/14/23 04:27:54.96
+ STEP: Creating the pod 01/14/23 04:27:54.965
+ Jan 14 04:27:54.974: INFO: Waiting up to 5m0s for pod "pod-configmaps-65dbc6d7-a494-4695-ae3e-437b74d31c3e" in namespace "configmap-7197" to be "running and ready"
+ Jan 14 04:27:54.977: INFO: Pod "pod-configmaps-65dbc6d7-a494-4695-ae3e-437b74d31c3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.924388ms
+ Jan 14 04:27:54.977: INFO: The phase of Pod pod-configmaps-65dbc6d7-a494-4695-ae3e-437b74d31c3e is Pending, waiting for it to be Running (with Ready = true)
+ Jan 14 04:27:56.981: INFO: Pod "pod-configmaps-65dbc6d7-a494-4695-ae3e-437b74d31c3e": Phase="Running", Reason="", readiness=true. Elapsed: 2.007489052s
+ Jan 14 04:27:56.981: INFO: The phase of Pod pod-configmaps-65dbc6d7-a494-4695-ae3e-437b74d31c3e is Running (Ready = true)
+ Jan 14 04:27:56.981: INFO: Pod "pod-configmaps-65dbc6d7-a494-4695-ae3e-437b74d31c3e" satisfied condition "running and ready"
+ STEP: Deleting configmap cm-test-opt-del-1c3285de-0462-40fd-83e5-a1d27307d11c 01/14/23 04:27:57.001
+ STEP: Updating configmap cm-test-opt-upd-2005a0d8-0432-4fbe-8f53-13192fe21238 01/14/23 04:27:57.006
+ STEP: Creating configMap with name cm-test-opt-create-f041939b-63dd-433c-a246-bf27da88ef5a 01/14/23 04:27:57.01
+ STEP: waiting to observe update in volume 01/14/23 04:27:57.015
+ [AfterEach] [sig-storage] ConfigMap
+ test/e2e/framework/node/init/init.go:32
+ Jan 14 04:29:17.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+ [DeferCleanup (Each)] [sig-storage] ConfigMap
+ test/e2e/framework/metrics/init/init.go:33
+ [DeferCleanup (Each)] [sig-storage] ConfigMap
+ dump namespaces | framework.go:196
+ [DeferCleanup (Each)] [sig-storage] ConfigMap
+ tear down framework | framework.go:193
+ STEP: Destroying namespace "configmap-7197" for this suite. 01/14/23 04:29:17.373
+ << End Captured GinkgoWriter Output
+------------------------------
+SSSSSS
+------------------------------
+[sig-node] InitContainer [NodeConformance]
+ should invoke init containers on a RestartAlways pod [Conformance]
+ test/e2e/common/node/init_container.go:255
+[BeforeEach] [sig-node] InitContainer [NodeConformance]
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 04:29:17.379
+Jan 14 04:29:17.379: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename init-container 01/14/23 04:29:17.38
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:29:17.396
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:29:17.398
+[BeforeEach] [sig-node] InitContainer [NodeConformance]
+ test/e2e/framework/metrics/init/init.go:31
+[BeforeEach] [sig-node] InitContainer [NodeConformance]
+ test/e2e/common/node/init_container.go:165
+[It] should invoke init containers on a RestartAlways pod [Conformance]
+ test/e2e/common/node/init_container.go:255
+STEP: creating the pod 01/14/23 04:29:17.4
+Jan 14 04:29:17.401: INFO: PodSpec: initContainers in spec.initContainers
+[AfterEach] [sig-node] InitContainer [NodeConformance]
+ test/e2e/framework/node/init/init.go:32
+Jan 14 04:29:20.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance]
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance]
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance]
+ tear down framework | framework.go:193
+STEP: Destroying namespace "init-container-3558" for this suite. 01/14/23 04:29:20.183
+------------------------------
+• [2.809 seconds]
+[sig-node] InitContainer [NodeConformance]
+test/e2e/common/node/framework.go:23
+ should invoke init containers on a RestartAlways pod [Conformance]
+ test/e2e/common/node/init_container.go:255
+
+ Begin Captured GinkgoWriter Output >>
+ [BeforeEach] [sig-node] InitContainer [NodeConformance]
+ set up framework | framework.go:178
+ STEP: Creating a kubernetes client 01/14/23 04:29:17.379
+ Jan 14 04:29:17.379: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+ STEP: Building a namespace api object, basename init-container 01/14/23 04:29:17.38
+ STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:29:17.396
+ STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:29:17.398
+ [BeforeEach] [sig-node] InitContainer [NodeConformance]
+ test/e2e/framework/metrics/init/init.go:31
+ [BeforeEach] [sig-node] InitContainer [NodeConformance]
+ test/e2e/common/node/init_container.go:165
+ [It] should invoke init containers on a RestartAlways pod [Conformance]
+ test/e2e/common/node/init_container.go:255
+ STEP: creating the pod 01/14/23 04:29:17.4
+ Jan 14 04:29:17.401: INFO: PodSpec: initContainers in spec.initContainers
+ [AfterEach] [sig-node] InitContainer [NodeConformance]
+ test/e2e/framework/node/init/init.go:32
+ Jan 14 04:29:20.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+ [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance]
+ test/e2e/framework/metrics/init/init.go:33
+ [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance]
+ dump namespaces | framework.go:196
+ [DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance]
+ tear down framework | framework.go:193
+ STEP: Destroying namespace "init-container-3558" for this suite. 01/14/23 04:29:20.183
+ << End Captured GinkgoWriter Output
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected configMap
+ should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
+ test/e2e/common/storage/projected_configmap.go:99
+[BeforeEach] [sig-storage] Projected configMap
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 04:29:20.189
+Jan 14 04:29:20.189: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename projected 01/14/23 04:29:20.19
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:29:20.204
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:29:20.206
+[BeforeEach] [sig-storage] Projected configMap
+ test/e2e/framework/metrics/init/init.go:31
+[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
+ test/e2e/common/storage/projected_configmap.go:99
+STEP: Creating configMap with name projected-configmap-test-volume-map-506c00b6-3e17-4c44-960c-ba06bbf1c11a 01/14/23 04:29:20.208
+STEP: Creating a pod to test consume configMaps 01/14/23 04:29:20.22
+Jan 14 04:29:20.248: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5ca7d003-d8a1-4e0c-9313-ff9e7c24e383" in namespace "projected-3324" to be "Succeeded or Failed"
+Jan 14 04:29:20.257: INFO: Pod "pod-projected-configmaps-5ca7d003-d8a1-4e0c-9313-ff9e7c24e383": Phase="Pending", Reason="", readiness=false. Elapsed: 9.10209ms
+Jan 14 04:29:22.263: INFO: Pod "pod-projected-configmaps-5ca7d003-d8a1-4e0c-9313-ff9e7c24e383": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01449119s
+Jan 14 04:29:24.263: INFO: Pod "pod-projected-configmaps-5ca7d003-d8a1-4e0c-9313-ff9e7c24e383": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014307903s
+STEP: Saw pod success 01/14/23 04:29:24.263
+Jan 14 04:29:24.263: INFO: Pod "pod-projected-configmaps-5ca7d003-d8a1-4e0c-9313-ff9e7c24e383" satisfied condition "Succeeded or Failed"
+Jan 14 04:29:24.266: INFO: Trying to get logs from node 10.0.1.106 pod pod-projected-configmaps-5ca7d003-d8a1-4e0c-9313-ff9e7c24e383 container agnhost-container: 
+STEP: delete the pod 01/14/23 04:29:24.279
+Jan 14 04:29:24.293: INFO: Waiting for pod pod-projected-configmaps-5ca7d003-d8a1-4e0c-9313-ff9e7c24e383 to disappear
+Jan 14 04:29:24.296: INFO: Pod pod-projected-configmaps-5ca7d003-d8a1-4e0c-9313-ff9e7c24e383 no longer exists
+[AfterEach] [sig-storage] Projected configMap
+ test/e2e/framework/node/init/init.go:32
+Jan 14 04:29:24.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-storage] Projected configMap
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-storage] Projected configMap
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-storage] Projected configMap
+ tear down framework | framework.go:193
+STEP: Destroying namespace "projected-3324" for this suite. 01/14/23 04:29:24.3
+------------------------------
+• [4.117 seconds]
+[sig-storage] Projected configMap
+test/e2e/common/storage/framework.go:23
+ should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
+ test/e2e/common/storage/projected_configmap.go:99
+
+ Begin Captured GinkgoWriter Output >>
+ [BeforeEach] [sig-storage] Projected configMap
+ set up framework | framework.go:178
+ STEP: Creating a kubernetes client 01/14/23 04:29:20.189
+ Jan 14 04:29:20.189: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+ STEP: Building a namespace api object, basename projected 01/14/23 04:29:20.19
+ STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:29:20.204
+ STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:29:20.206
+ [BeforeEach] [sig-storage] Projected configMap
+ test/e2e/framework/metrics/init/init.go:31
+ [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
+ test/e2e/common/storage/projected_configmap.go:99
+ STEP: Creating configMap with name projected-configmap-test-volume-map-506c00b6-3e17-4c44-960c-ba06bbf1c11a 01/14/23 04:29:20.208
+ STEP: Creating a pod to test consume configMaps 01/14/23 04:29:20.22
+ Jan 14 04:29:20.248: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5ca7d003-d8a1-4e0c-9313-ff9e7c24e383" in namespace "projected-3324" to be "Succeeded or Failed"
+ Jan 14 04:29:20.257: INFO: Pod "pod-projected-configmaps-5ca7d003-d8a1-4e0c-9313-ff9e7c24e383": Phase="Pending", Reason="", readiness=false. Elapsed: 9.10209ms
+ Jan 14 04:29:22.263: INFO: Pod "pod-projected-configmaps-5ca7d003-d8a1-4e0c-9313-ff9e7c24e383": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01449119s
+ Jan 14 04:29:24.263: INFO: Pod "pod-projected-configmaps-5ca7d003-d8a1-4e0c-9313-ff9e7c24e383": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014307903s
+ STEP: Saw pod success 01/14/23 04:29:24.263
+ Jan 14 04:29:24.263: INFO: Pod "pod-projected-configmaps-5ca7d003-d8a1-4e0c-9313-ff9e7c24e383" satisfied condition "Succeeded or Failed"
+ Jan 14 04:29:24.266: INFO: Trying to get logs from node 10.0.1.106 pod pod-projected-configmaps-5ca7d003-d8a1-4e0c-9313-ff9e7c24e383 container agnhost-container: 
+ STEP: delete the pod 01/14/23 04:29:24.279
+ Jan 14 04:29:24.293: INFO: Waiting for pod pod-projected-configmaps-5ca7d003-d8a1-4e0c-9313-ff9e7c24e383 to disappear
+ Jan 14 04:29:24.296: INFO: Pod pod-projected-configmaps-5ca7d003-d8a1-4e0c-9313-ff9e7c24e383 no longer exists
+ [AfterEach] [sig-storage] Projected configMap
+ test/e2e/framework/node/init/init.go:32
+ Jan 14 04:29:24.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+ [DeferCleanup (Each)] [sig-storage] Projected configMap
+ test/e2e/framework/metrics/init/init.go:33
+ [DeferCleanup (Each)] [sig-storage] Projected configMap
+ dump namespaces | framework.go:196
+ [DeferCleanup (Each)] [sig-storage] Projected configMap
+ tear down framework | framework.go:193
+ STEP: Destroying namespace "projected-3324" for this suite. 01/14/23 04:29:24.3
+ << End Captured GinkgoWriter Output
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client Kubectl label
+ should update the label on a resource [Conformance]
+ test/e2e/kubectl/kubectl.go:1509
+[BeforeEach] [sig-cli] Kubectl client
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 04:29:24.307
+Jan 14 04:29:24.307: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename kubectl 01/14/23 04:29:24.308
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:29:24.32
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:29:24.322
+[BeforeEach] [sig-cli] Kubectl client
+ test/e2e/framework/metrics/init/init.go:31
+[BeforeEach] [sig-cli] Kubectl client
+ test/e2e/kubectl/kubectl.go:274
+[BeforeEach] Kubectl label
+ test/e2e/kubectl/kubectl.go:1494
+STEP: creating the pod 01/14/23 04:29:24.325
+Jan 14 04:29:24.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-8968 create -f -'
+Jan 14 04:29:24.806: INFO: stderr: ""
+Jan 14 04:29:24.806: INFO: stdout: "pod/pause created\n"
+Jan 14 04:29:24.806: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
+Jan 14 04:29:24.806: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-8968" to be "running and ready"
+Jan 14 04:29:24.812: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 5.905358ms
+Jan 14 04:29:24.812: INFO: Error evaluating pod condition running and ready: want pod 'pause' on '10.0.1.99' to be 'Running' but was 'Pending'
+Jan 14 04:29:26.817: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 2.010759458s
+Jan 14 04:29:26.817: INFO: Pod "pause" satisfied condition "running and ready"
+Jan 14 04:29:26.817: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
+[It] should update the label on a resource [Conformance]
+ test/e2e/kubectl/kubectl.go:1509
+STEP: adding the label testing-label with value testing-label-value to a pod 01/14/23 04:29:26.817
+Jan 14 04:29:26.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-8968 label pods pause testing-label=testing-label-value'
+Jan 14 04:29:26.891: INFO: stderr: ""
+Jan 14 04:29:26.891: INFO: stdout: "pod/pause labeled\n"
+STEP: verifying the pod has the label testing-label with the value testing-label-value 01/14/23 04:29:26.891
+Jan 14 04:29:26.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-8968 get pod pause -L testing-label'
+Jan 14 04:29:26.953: INFO: stderr: ""
+Jan 14 04:29:26.954: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s testing-label-value\n"
+STEP: removing the label testing-label of a pod 01/14/23 04:29:26.954
+Jan 14 04:29:26.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-8968 label pods pause testing-label-'
+Jan 14 04:29:27.027: INFO: stderr: ""
+Jan 14 04:29:27.027: INFO: stdout: "pod/pause unlabeled\n"
+STEP: verifying the pod doesn't have the label testing-label 01/14/23 04:29:27.027
+Jan 14 04:29:27.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-8968 get pod pause -L testing-label'
+Jan 14 04:29:27.089: INFO: stderr: ""
+Jan 14 04:29:27.089: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 3s \n"
+[AfterEach] Kubectl label
+ test/e2e/kubectl/kubectl.go:1500
+STEP: using delete to clean up resources 01/14/23 04:29:27.09
+Jan 14 04:29:27.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-8968 delete --grace-period=0 --force -f -'
+Jan 14 04:29:27.163: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Jan 14 04:29:27.163: INFO: stdout: "pod \"pause\" force deleted\n"
+Jan 14 04:29:27.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-8968 get rc,svc -l name=pause --no-headers'
+Jan 14 04:29:27.229: INFO: stderr: "No resources found in kubectl-8968 namespace.\n"
+Jan 14 04:29:27.229: INFO: stdout: ""
+Jan 14 04:29:27.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-8968 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
+Jan 14 04:29:27.291: INFO: stderr: ""
+Jan 14 04:29:27.291: INFO: stdout: ""
+[AfterEach] [sig-cli] Kubectl client
+ test/e2e/framework/node/init/init.go:32
+Jan 14 04:29:27.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-cli] Kubectl client
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-cli] Kubectl client
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-cli] Kubectl client
+ tear down framework | framework.go:193
+STEP: Destroying namespace "kubectl-8968" for this suite. 01/14/23 04:29:27.297
+------------------------------
+• [2.995 seconds]
+[sig-cli] Kubectl client
+test/e2e/kubectl/framework.go:23
+ Kubectl label
+ test/e2e/kubectl/kubectl.go:1492
+ should update the label on a resource [Conformance]
+ test/e2e/kubectl/kubectl.go:1509
+
+ Begin Captured GinkgoWriter Output >>
+ [BeforeEach] [sig-cli] Kubectl client
+ set up framework | framework.go:178
+ STEP: Creating a kubernetes client 01/14/23 04:29:24.307
+ Jan 14 04:29:24.307: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+ STEP: Building a namespace api object, basename kubectl 01/14/23 04:29:24.308
+ STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:29:24.32
+ STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:29:24.322
+ [BeforeEach] [sig-cli] Kubectl client
+ test/e2e/framework/metrics/init/init.go:31
+ [BeforeEach] [sig-cli] Kubectl client
+ test/e2e/kubectl/kubectl.go:274
+ [BeforeEach] Kubectl label
+ test/e2e/kubectl/kubectl.go:1494
+ STEP: creating the pod 01/14/23 04:29:24.325
+ Jan 14 04:29:24.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-8968 create -f -'
+ Jan 14 04:29:24.806: INFO: stderr: ""
+ Jan 14 04:29:24.806: INFO: stdout: "pod/pause created\n"
+ Jan 14 04:29:24.806: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
+ Jan 14 04:29:24.806: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-8968" to be "running and ready"
+ Jan 14 04:29:24.812: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 5.905358ms
+ Jan 14 04:29:24.812: INFO: Error evaluating pod condition running and ready: want pod 'pause' on '10.0.1.99' to be 'Running' but was 'Pending'
+ Jan 14 04:29:26.817: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 2.010759458s
+ Jan 14 04:29:26.817: INFO: Pod "pause" satisfied condition "running and ready"
+ Jan 14 04:29:26.817: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
+ [It] should update the label on a resource [Conformance]
+ test/e2e/kubectl/kubectl.go:1509
+ STEP: adding the label testing-label with value testing-label-value to a pod 01/14/23 04:29:26.817
+ Jan 14 04:29:26.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-8968 label pods pause testing-label=testing-label-value'
+ Jan 14 04:29:26.891: INFO: stderr: ""
+ Jan 14 04:29:26.891: INFO: stdout: "pod/pause labeled\n"
+ STEP: verifying the pod has the label testing-label with the value testing-label-value 01/14/23 04:29:26.891
+ Jan 14 04:29:26.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-8968 get pod pause -L testing-label'
+ Jan 14 04:29:26.953: INFO: stderr: ""
+ Jan 14 04:29:26.954: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s testing-label-value\n"
+ STEP: removing the label testing-label of a pod 01/14/23 04:29:26.954
+ Jan 14 04:29:26.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-8968 label pods pause testing-label-'
+ Jan 14 04:29:27.027: INFO: stderr: ""
+ Jan 14 04:29:27.027: INFO: stdout: "pod/pause unlabeled\n"
+ STEP: verifying the pod doesn't have the label testing-label 01/14/23 04:29:27.027
+ Jan 14 04:29:27.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-8968 get pod pause -L testing-label'
+ Jan 14 04:29:27.089: INFO: stderr: ""
+ Jan 14 04:29:27.089: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 3s \n"
+ [AfterEach] Kubectl label
+ test/e2e/kubectl/kubectl.go:1500
+ STEP: using delete to clean up resources 01/14/23 04:29:27.09
+ Jan 14 04:29:27.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-8968 delete --grace-period=0 --force -f -'
+ Jan 14 04:29:27.163: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+ Jan 14 04:29:27.163: INFO: stdout: "pod \"pause\" force deleted\n"
+ Jan 14 04:29:27.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-8968 get rc,svc -l name=pause --no-headers'
+ Jan 14 04:29:27.229: INFO: stderr: "No resources found in kubectl-8968 namespace.\n"
+ Jan 14 04:29:27.229: INFO: stdout: ""
+ Jan 14 04:29:27.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-8968 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
+ Jan 14 04:29:27.291: INFO: stderr: ""
+ Jan 14 04:29:27.291: INFO: stdout: ""
+ [AfterEach] [sig-cli] Kubectl client
+ test/e2e/framework/node/init/init.go:32
+ Jan 14 04:29:27.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+ [DeferCleanup (Each)] [sig-cli] Kubectl client
+ test/e2e/framework/metrics/init/init.go:33
+ [DeferCleanup (Each)] [sig-cli] Kubectl client
+ dump namespaces | framework.go:196
+ [DeferCleanup (Each)] [sig-cli] Kubectl client
+ tear down framework | framework.go:193
+ STEP: Destroying namespace "kubectl-8968" for this suite. 01/14/23 04:29:27.297
+ << End Captured GinkgoWriter Output
+------------------------------
+[sig-apps] DisruptionController
+ should create a PodDisruptionBudget [Conformance]
+ test/e2e/apps/disruption.go:108
+[BeforeEach] [sig-apps] DisruptionController
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 04:29:27.302
+Jan 14 04:29:27.302: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename disruption 01/14/23 04:29:27.303
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:29:27.317
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:29:27.32
+[BeforeEach] [sig-apps] DisruptionController
+ test/e2e/framework/metrics/init/init.go:31
+[BeforeEach] [sig-apps] DisruptionController
+ test/e2e/apps/disruption.go:72
+[It] should create a PodDisruptionBudget [Conformance]
+ test/e2e/apps/disruption.go:108
+STEP: creating the pdb 01/14/23 04:29:27.322
+STEP: Waiting for the pdb to be processed 01/14/23 04:29:27.326
+STEP: updating the pdb 01/14/23 04:29:29.335
+STEP: Waiting for the pdb to be processed 01/14/23 04:29:29.344
+STEP: patching the pdb 01/14/23 04:29:31.352
+STEP: Waiting for the pdb to be processed 01/14/23 04:29:31.362
+STEP: Waiting for the pdb to be deleted 01/14/23 04:29:31.374
+[AfterEach] [sig-apps] DisruptionController
+ test/e2e/framework/node/init/init.go:32
+Jan 14 04:29:31.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-apps] DisruptionController
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-apps] DisruptionController
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-apps] DisruptionController
+ tear down framework | framework.go:193
+STEP: Destroying namespace "disruption-9902" for this suite. 01/14/23 04:29:31.383
+------------------------------
+• [4.086 seconds]
+[sig-apps] DisruptionController
+test/e2e/apps/framework.go:23
+ should create a PodDisruptionBudget [Conformance]
+ test/e2e/apps/disruption.go:108
+
+ Begin Captured GinkgoWriter Output >>
+ [BeforeEach] [sig-apps] DisruptionController
+ set up framework | framework.go:178
+ STEP: Creating a kubernetes client 01/14/23 04:29:27.302
+ Jan 14 04:29:27.302: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+ STEP: Building a namespace api object, basename disruption 01/14/23 04:29:27.303
+ STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:29:27.317
+ STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:29:27.32
+ [BeforeEach] [sig-apps] DisruptionController
+ test/e2e/framework/metrics/init/init.go:31
+ [BeforeEach] [sig-apps] DisruptionController
+ test/e2e/apps/disruption.go:72
+ [It] should create a PodDisruptionBudget [Conformance]
+ test/e2e/apps/disruption.go:108
+ STEP: creating the pdb 01/14/23 04:29:27.322
+ STEP: Waiting for the pdb to be processed 01/14/23 04:29:27.326
+ STEP: updating the pdb 01/14/23 04:29:29.335
+ STEP: Waiting for the pdb to be processed 01/14/23 04:29:29.344
+ STEP: patching the pdb 01/14/23 04:29:31.352
+ STEP: Waiting for the pdb to be processed 01/14/23 04:29:31.362
+ STEP: Waiting for the pdb to be deleted 01/14/23 04:29:31.374
+ [AfterEach] [sig-apps] DisruptionController
+ test/e2e/framework/node/init/init.go:32
+ Jan 14 04:29:31.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+ [DeferCleanup (Each)] [sig-apps] DisruptionController
+ test/e2e/framework/metrics/init/init.go:33
+ [DeferCleanup (Each)] [sig-apps] DisruptionController
+ dump namespaces | framework.go:196
+ [DeferCleanup (Each)] [sig-apps] DisruptionController
+ tear down framework | framework.go:193
+ STEP: Destroying namespace "disruption-9902" for this suite. 01/14/23 04:29:31.383
+ << End Captured GinkgoWriter Output
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] EmptyDir volumes
+ should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+ test/e2e/common/storage/empty_dir.go:127
+[BeforeEach] [sig-storage] EmptyDir volumes
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 04:29:31.389
+Jan 14 04:29:31.389: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename emptydir 01/14/23 04:29:31.39
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:29:31.404
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:29:31.406
+[BeforeEach] [sig-storage] EmptyDir volumes
+ test/e2e/framework/metrics/init/init.go:31
+[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+ test/e2e/common/storage/empty_dir.go:127
+STEP: Creating a pod to test emptydir 0644 on tmpfs 01/14/23 04:29:31.408
+Jan 14 04:29:31.418: INFO: Waiting up to 5m0s for pod "pod-78733b03-30a0-4a44-86a0-39458f1e6c04" in namespace "emptydir-2130" to be "Succeeded or Failed"
+Jan 14 04:29:31.421: INFO: Pod "pod-78733b03-30a0-4a44-86a0-39458f1e6c04": Phase="Pending", Reason="", readiness=false. Elapsed: 3.331094ms
+Jan 14 04:29:33.427: INFO: Pod "pod-78733b03-30a0-4a44-86a0-39458f1e6c04": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008613873s
+Jan 14 04:29:35.426: INFO: Pod "pod-78733b03-30a0-4a44-86a0-39458f1e6c04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008192843s
+STEP: Saw pod success 01/14/23 04:29:35.426
+Jan 14 04:29:35.426: INFO: Pod "pod-78733b03-30a0-4a44-86a0-39458f1e6c04" satisfied condition "Succeeded or Failed"
+Jan 14 04:29:35.429: INFO: Trying to get logs from node 10.0.1.106 pod pod-78733b03-30a0-4a44-86a0-39458f1e6c04 container test-container: 
+STEP: delete the pod 01/14/23 04:29:35.435
+Jan 14 04:29:35.448: INFO: Waiting for pod pod-78733b03-30a0-4a44-86a0-39458f1e6c04 to disappear
+Jan 14 04:29:35.450: INFO: Pod pod-78733b03-30a0-4a44-86a0-39458f1e6c04 no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+ test/e2e/framework/node/init/init.go:32
+Jan 14 04:29:35.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-storage] EmptyDir volumes
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-storage] EmptyDir volumes
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-storage] EmptyDir volumes
+ tear down framework | framework.go:193
+STEP: Destroying namespace "emptydir-2130" for this suite. 01/14/23 04:29:35.455
+------------------------------
+• [4.072 seconds]
+[sig-storage] EmptyDir volumes
+test/e2e/common/storage/framework.go:23
+ should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+ test/e2e/common/storage/empty_dir.go:127
+
+ Begin Captured GinkgoWriter Output >>
+ [BeforeEach] [sig-storage] EmptyDir volumes
+ set up framework | framework.go:178
+ STEP: Creating a kubernetes client 01/14/23 04:29:31.389
+ Jan 14 04:29:31.389: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+ STEP: Building a namespace api object, basename emptydir 01/14/23 04:29:31.39
+ STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:29:31.404
+ STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:29:31.406
+ [BeforeEach] [sig-storage] EmptyDir volumes
+ test/e2e/framework/metrics/init/init.go:31
+ [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+ test/e2e/common/storage/empty_dir.go:127
+ STEP: Creating a pod to test emptydir 0644 on tmpfs 01/14/23 04:29:31.408
+ Jan 14 04:29:31.418: INFO: Waiting up to 5m0s for pod "pod-78733b03-30a0-4a44-86a0-39458f1e6c04" in namespace "emptydir-2130" to be "Succeeded or Failed"
+ Jan 14 04:29:31.421: INFO: Pod "pod-78733b03-30a0-4a44-86a0-39458f1e6c04": Phase="Pending", Reason="", readiness=false. Elapsed: 3.331094ms
+ Jan 14 04:29:33.427: INFO: Pod "pod-78733b03-30a0-4a44-86a0-39458f1e6c04": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008613873s
+ Jan 14 04:29:35.426: INFO: Pod "pod-78733b03-30a0-4a44-86a0-39458f1e6c04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008192843s
+ STEP: Saw pod success 01/14/23 04:29:35.426
+ Jan 14 04:29:35.426: INFO: Pod "pod-78733b03-30a0-4a44-86a0-39458f1e6c04" satisfied condition "Succeeded or Failed"
+ Jan 14 04:29:35.429: INFO: Trying to get logs from node 10.0.1.106 pod pod-78733b03-30a0-4a44-86a0-39458f1e6c04 container test-container: 
+ STEP: delete the pod 01/14/23 04:29:35.435
+ Jan 14 04:29:35.448: INFO: Waiting for pod pod-78733b03-30a0-4a44-86a0-39458f1e6c04 to disappear
+ Jan 14 04:29:35.450: INFO: Pod pod-78733b03-30a0-4a44-86a0-39458f1e6c04 no longer exists
+ [AfterEach] [sig-storage] EmptyDir volumes
+ test/e2e/framework/node/init/init.go:32
+ Jan 14 04:29:35.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+ [DeferCleanup (Each)] [sig-storage] EmptyDir volumes
+ test/e2e/framework/metrics/init/init.go:33
+ [DeferCleanup (Each)] [sig-storage] EmptyDir volumes
+ dump namespaces | framework.go:196
+ [DeferCleanup (Each)] [sig-storage] EmptyDir volumes
+ tear down framework | framework.go:193
+ STEP: Destroying namespace "emptydir-2130" for this suite. 01/14/23 04:29:35.455
+ << End Captured GinkgoWriter Output
+------------------------------
+SSSSSSSSSSSSSSS
+------------------------------
+[sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints
+ verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
+ test/e2e/scheduling/preemption.go:806
+[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 04:29:35.462
+Jan 14 04:29:35.462: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename sched-preemption 01/14/23 04:29:35.463
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:29:35.477
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:29:35.479
+[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
+ test/e2e/framework/metrics/init/init.go:31
+[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
+ test/e2e/scheduling/preemption.go:96
+Jan 14 04:29:35.495: INFO: Waiting up to 1m0s for all nodes to be ready
+Jan 14 04:30:35.534: INFO: Waiting for terminating namespaces to be deleted...
+[BeforeEach] PriorityClass endpoints
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 04:30:35.537
+Jan 14 04:30:35.537: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename sched-preemption-path 01/14/23 04:30:35.538
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:30:35.554
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:30:35.556
+[BeforeEach] PriorityClass endpoints
+ test/e2e/framework/metrics/init/init.go:31
+[BeforeEach] PriorityClass endpoints
+ test/e2e/scheduling/preemption.go:763
+[It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
+ test/e2e/scheduling/preemption.go:806
+Jan 14 04:30:35.570: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: value: Forbidden: may not be changed in an update.
+Jan 14 04:30:35.573: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: value: Forbidden: may not be changed in an update.
+[AfterEach] PriorityClass endpoints
+ test/e2e/framework/node/init/init.go:32
+Jan 14 04:30:35.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[AfterEach] PriorityClass endpoints
+ test/e2e/scheduling/preemption.go:779
+[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
+ test/e2e/framework/node/init/init.go:32
+Jan 14 04:30:35.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
+ test/e2e/scheduling/preemption.go:84
+[DeferCleanup (Each)] PriorityClass endpoints
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] PriorityClass endpoints
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] PriorityClass endpoints
+ tear down framework | framework.go:193
+STEP: Destroying namespace "sched-preemption-path-3412" for this suite. 01/14/23 04:30:35.646
+[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial]
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial]
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial]
+ tear down framework | framework.go:193
+STEP: Destroying namespace "sched-preemption-4616" for this suite. 01/14/23 04:30:35.652
+------------------------------
+• [SLOW TEST] [60.195 seconds]
+[sig-scheduling] SchedulerPreemption [Serial]
+test/e2e/scheduling/framework.go:40
+ PriorityClass endpoints
+ test/e2e/scheduling/preemption.go:756
+ verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
+ test/e2e/scheduling/preemption.go:806
+
+ Begin Captured GinkgoWriter Output >>
+ [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
+ set up framework | framework.go:178
+ STEP: Creating a kubernetes client 01/14/23 04:29:35.462
+ Jan 14 04:29:35.462: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+ STEP: Building a namespace api object, basename sched-preemption 01/14/23 04:29:35.463
+ STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:29:35.477
+ STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:29:35.479
+ [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
+ test/e2e/framework/metrics/init/init.go:31
+ [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
+ test/e2e/scheduling/preemption.go:96
+ Jan 14 04:29:35.495: INFO: Waiting up to 1m0s for all nodes to be ready
+ Jan 14 04:30:35.534: INFO: Waiting for terminating namespaces to be deleted...
+ [BeforeEach] PriorityClass endpoints
+ set up framework | framework.go:178
+ STEP: Creating a kubernetes client 01/14/23 04:30:35.537
+ Jan 14 04:30:35.537: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+ STEP: Building a namespace api object, basename sched-preemption-path 01/14/23 04:30:35.538
+ STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:30:35.554
+ STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:30:35.556
+ [BeforeEach] PriorityClass endpoints
+ test/e2e/framework/metrics/init/init.go:31
+ [BeforeEach] PriorityClass endpoints
+ test/e2e/scheduling/preemption.go:763
+ [It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
+ test/e2e/scheduling/preemption.go:806
+ Jan 14 04:30:35.570: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: value: Forbidden: may not be changed in an update.
+ Jan 14 04:30:35.573: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: value: Forbidden: may not be changed in an update.
+ [AfterEach] PriorityClass endpoints
+ test/e2e/framework/node/init/init.go:32
+ Jan 14 04:30:35.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+ [AfterEach] PriorityClass endpoints
+ test/e2e/scheduling/preemption.go:779
+ [AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
+ test/e2e/framework/node/init/init.go:32
+ Jan 14 04:30:35.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+ [AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
+ test/e2e/scheduling/preemption.go:84
+ [DeferCleanup (Each)] PriorityClass endpoints
+ test/e2e/framework/metrics/init/init.go:33
+ [DeferCleanup (Each)] PriorityClass endpoints
+ dump namespaces | framework.go:196
+ [DeferCleanup (Each)] PriorityClass endpoints
+ tear down framework | framework.go:193
+ STEP: Destroying namespace "sched-preemption-path-3412" for this suite. 01/14/23 04:30:35.646
+ [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial]
+ test/e2e/framework/metrics/init/init.go:33
+ [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial]
+ dump namespaces | framework.go:196
+ [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial]
+ tear down framework | framework.go:193
+ STEP: Destroying namespace "sched-preemption-4616" for this suite. 01/14/23 04:30:35.652
+ << End Captured GinkgoWriter Output
+------------------------------
+S
+------------------------------
+[sig-storage] Projected downwardAPI
+ should provide container's cpu request [NodeConformance] [Conformance]
+ test/e2e/common/storage/projected_downwardapi.go:221
+[BeforeEach] [sig-storage] Projected downwardAPI
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 04:30:35.658
+Jan 14 04:30:35.658: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename projected 01/14/23 04:30:35.658
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:30:35.674
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:30:35.677
+[BeforeEach] [sig-storage] Projected downwardAPI
+ test/e2e/framework/metrics/init/init.go:31
+[BeforeEach] [sig-storage] Projected downwardAPI
+ test/e2e/common/storage/projected_downwardapi.go:44
+[It] should provide container's cpu request [NodeConformance] [Conformance]
+ test/e2e/common/storage/projected_downwardapi.go:221
+STEP: Creating a pod to test downward API volume plugin 01/14/23 04:30:35.679
+Jan 14 04:30:35.689: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9f5f9f43-1d27-49bd-a302-af3b11b35cbc" in namespace "projected-7997" to be "Succeeded or Failed"
+Jan 14 04:30:35.692: INFO: Pod "downwardapi-volume-9f5f9f43-1d27-49bd-a302-af3b11b35cbc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.120763ms
+Jan 14 04:30:37.698: INFO: Pod "downwardapi-volume-9f5f9f43-1d27-49bd-a302-af3b11b35cbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008220979s
+Jan 14 04:30:39.697: INFO: Pod "downwardapi-volume-9f5f9f43-1d27-49bd-a302-af3b11b35cbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007432216s
+STEP: Saw pod success 01/14/23 04:30:39.697
+Jan 14 04:30:39.697: INFO: Pod "downwardapi-volume-9f5f9f43-1d27-49bd-a302-af3b11b35cbc" satisfied condition "Succeeded or Failed"
+Jan 14 04:30:39.700: INFO: Trying to get logs from node 10.0.1.106 pod downwardapi-volume-9f5f9f43-1d27-49bd-a302-af3b11b35cbc container client-container: 
+STEP: delete the pod 01/14/23 04:30:39.706
+Jan 14 04:30:39.720: INFO: Waiting for pod downwardapi-volume-9f5f9f43-1d27-49bd-a302-af3b11b35cbc to disappear
+Jan 14 04:30:39.723: INFO: Pod downwardapi-volume-9f5f9f43-1d27-49bd-a302-af3b11b35cbc no longer exists
+[AfterEach] [sig-storage] Projected downwardAPI
+ test/e2e/framework/node/init/init.go:32
+Jan 14 04:30:39.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-storage] Projected downwardAPI
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-storage] Projected downwardAPI
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-storage] Projected downwardAPI
+ tear down framework | framework.go:193
+STEP: Destroying namespace "projected-7997" for this suite. 01/14/23 04:30:39.728
+------------------------------
+• [4.075 seconds]
+[sig-storage] Projected downwardAPI
+test/e2e/common/storage/framework.go:23
+ should provide container's cpu request [NodeConformance] [Conformance]
+ test/e2e/common/storage/projected_downwardapi.go:221
+
+ Begin Captured GinkgoWriter Output >>
+ [BeforeEach] [sig-storage] Projected downwardAPI
+ set up framework | framework.go:178
+ STEP: Creating a kubernetes client 01/14/23 04:30:35.658
+ Jan 14 04:30:35.658: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+ STEP: Building a namespace api object, basename projected 01/14/23 04:30:35.658
+ STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:30:35.674
+ STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:30:35.677
+ [BeforeEach] [sig-storage] Projected downwardAPI
+ test/e2e/framework/metrics/init/init.go:31
+ [BeforeEach] [sig-storage] Projected downwardAPI
+ test/e2e/common/storage/projected_downwardapi.go:44
+ [It] should provide container's cpu request [NodeConformance] [Conformance]
+ test/e2e/common/storage/projected_downwardapi.go:221
+ STEP: Creating a pod to test downward API volume plugin 01/14/23 04:30:35.679
+ Jan 14 04:30:35.689: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9f5f9f43-1d27-49bd-a302-af3b11b35cbc" in namespace "projected-7997" to be "Succeeded or Failed"
+ Jan 14 04:30:35.692: INFO: Pod "downwardapi-volume-9f5f9f43-1d27-49bd-a302-af3b11b35cbc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.120763ms
+ Jan 14 04:30:37.698: INFO: Pod "downwardapi-volume-9f5f9f43-1d27-49bd-a302-af3b11b35cbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008220979s
+ Jan 14 04:30:39.697: INFO: Pod "downwardapi-volume-9f5f9f43-1d27-49bd-a302-af3b11b35cbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007432216s
+ STEP: Saw pod success 01/14/23 04:30:39.697
+ Jan 14 04:30:39.697: INFO: Pod "downwardapi-volume-9f5f9f43-1d27-49bd-a302-af3b11b35cbc" satisfied condition "Succeeded or Failed"
+ Jan 14 04:30:39.700: INFO: Trying to get logs from node 10.0.1.106 pod downwardapi-volume-9f5f9f43-1d27-49bd-a302-af3b11b35cbc container client-container: 
+ STEP: delete the pod 01/14/23 04:30:39.706
+ Jan 14 04:30:39.720: INFO: Waiting for pod downwardapi-volume-9f5f9f43-1d27-49bd-a302-af3b11b35cbc to disappear
+ Jan 14 04:30:39.723: INFO: Pod downwardapi-volume-9f5f9f43-1d27-49bd-a302-af3b11b35cbc no longer exists
+ [AfterEach] [sig-storage] Projected downwardAPI
+ test/e2e/framework/node/init/init.go:32
+ Jan 14 04:30:39.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+ [DeferCleanup (Each)] [sig-storage] Projected downwardAPI
+ test/e2e/framework/metrics/init/init.go:33
+ [DeferCleanup (Each)] [sig-storage] Projected downwardAPI
+ dump namespaces | framework.go:196
+ [DeferCleanup (Each)] [sig-storage] Projected downwardAPI
+ tear down framework | framework.go:193
+ STEP: Destroying namespace "projected-7997" for this suite. 01/14/23 04:30:39.728
+ << End Captured GinkgoWriter Output
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+ works for CRD preserving unknown fields in an embedded object [Conformance]
+ test/e2e/apimachinery/crd_publish_openapi.go:236
+[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 04:30:39.734
+Jan 14 04:30:39.734: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename crd-publish-openapi 01/14/23 04:30:39.734
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:30:39.749
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:30:39.752
+[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+ test/e2e/framework/metrics/init/init.go:31
+[It] works for CRD preserving unknown fields in an embedded object [Conformance]
+ test/e2e/apimachinery/crd_publish_openapi.go:236
+Jan 14 04:30:39.754: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: kubectl validation (kubectl create and apply) allows request with any unknown properties 01/14/23 04:30:41.541
+Jan 14 04:30:41.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-3019 --namespace=crd-publish-openapi-3019 create -f -'
+Jan 14 04:30:42.065: INFO: stderr: ""
+Jan 14 04:30:42.065: INFO: stdout: "e2e-test-crd-publish-openapi-9819-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
+Jan 14 04:30:42.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-3019 --namespace=crd-publish-openapi-3019 delete e2e-test-crd-publish-openapi-9819-crds test-cr'
+Jan 14 04:30:42.159: INFO: stderr: ""
+Jan 14 04:30:42.159: INFO: stdout: "e2e-test-crd-publish-openapi-9819-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
+Jan 14 04:30:42.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-3019 --namespace=crd-publish-openapi-3019 apply -f -'
+Jan 14 04:30:42.637: INFO: stderr: ""
+Jan 14 04:30:42.637: INFO: stdout: "e2e-test-crd-publish-openapi-9819-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
+Jan 14 04:30:42.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-3019 --namespace=crd-publish-openapi-3019 delete e2e-test-crd-publish-openapi-9819-crds test-cr'
+Jan 14 04:30:42.731: INFO: stderr: ""
+Jan 14 04:30:42.731: INFO: stdout: "e2e-test-crd-publish-openapi-9819-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
+STEP: kubectl explain works to explain CR 01/14/23 04:30:42.731
+Jan 14 04:30:42.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-3019 explain e2e-test-crd-publish-openapi-9819-crds'
+Jan 14 04:30:42.900: INFO: stderr: ""
+Jan 14 04:30:42.900: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-9819-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n"
+[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+ test/e2e/framework/node/init/init.go:32
+Jan 14 04:30:44.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+ tear down framework | framework.go:193
+STEP: Destroying namespace "crd-publish-openapi-3019" for this suite. 01/14/23 04:30:44.711
+------------------------------
+• [4.983 seconds]
+[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+test/e2e/apimachinery/framework.go:23
+ works for CRD preserving unknown fields in an embedded object [Conformance]
+ test/e2e/apimachinery/crd_publish_openapi.go:236
+
+ Begin Captured GinkgoWriter Output >>
+ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+ set up framework | framework.go:178
+ STEP: Creating a kubernetes client 01/14/23 04:30:39.734
+ Jan 14 04:30:39.734: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+ STEP: Building a namespace api object, basename crd-publish-openapi 01/14/23 04:30:39.734
+ STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:30:39.749
+ STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:30:39.752
+ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
+ test/e2e/framework/metrics/init/init.go:31
+ [It] works for CRD preserving unknown fields in an embedded object [Conformance]
+ test/e2e/apimachinery/crd_publish_openapi.go:236
+ Jan 14 04:30:39.754: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+ STEP: kubectl validation (kubectl create and apply) allows request with any unknown properties 01/14/23 04:30:41.541
+ Jan 14 04:30:41.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-3019 --namespace=crd-publish-openapi-3019 create -f -'
+ Jan 14 04:30:42.065: INFO: stderr: ""
+ Jan 14 04:30:42.065: INFO: stdout: "e2e-test-crd-publish-openapi-9819-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
+ Jan 14 04:30:42.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-3019 --namespace=crd-publish-openapi-3019 delete e2e-test-crd-publish-openapi-9819-crds test-cr'
+ Jan 14 04:30:42.159: INFO: stderr: ""
+ Jan 14 04:30:42.159: INFO: stdout: "e2e-test-crd-publish-openapi-9819-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
+ Jan 14 04:30:42.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-3019 --namespace=crd-publish-openapi-3019 apply -f -'
+ Jan 14 04:30:42.637: INFO: stderr: ""
+ Jan 14 04:30:42.637: INFO: stdout: "e2e-test-crd-publish-openapi-9819-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
+ Jan 14 04:30:42.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-3019 --namespace=crd-publish-openapi-3019 delete e2e-test-crd-publish-openapi-9819-crds test-cr'
+ Jan 14 04:30:42.731: INFO: stderr: ""
+ Jan 14 04:30:42.731: INFO: stdout: "e2e-test-crd-publish-openapi-9819-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
+ STEP: kubectl explain works to explain CR 01/14/23 04:30:42.731
+ Jan 14 04:30:42.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-3019 explain e2e-test-crd-publish-openapi-9819-crds'
+ Jan 14 04:30:42.900: INFO: stderr: ""
+ Jan 14 04:30:42.900: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-9819-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n"
representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" + [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Jan 14 04:30:44.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "crd-publish-openapi-3019" for this suite. 01/14/23 04:30:44.711 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Kubectl expose + should create services for rc [Conformance] + test/e2e/kubectl/kubectl.go:1415 +[BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:30:44.718 +Jan 14 04:30:44.719: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename kubectl 01/14/23 04:30:44.719 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:30:44.813 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:30:44.815 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[It] should create services for rc [Conformance] + test/e2e/kubectl/kubectl.go:1415 +STEP: creating Agnhost RC 01/14/23 04:30:44.817 +Jan 14 04:30:44.818: INFO: namespace kubectl-7274 +Jan 14 04:30:44.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-7274 create -f -' +Jan 14 04:30:45.333: INFO: stderr: "" +Jan 14 04:30:45.333: INFO: stdout: "replicationcontroller/agnhost-primary created\n" +STEP: Waiting for Agnhost primary to start. 01/14/23 04:30:45.333 +Jan 14 04:30:46.337: INFO: Selector matched 1 pods for map[app:agnhost] +Jan 14 04:30:46.337: INFO: Found 1 / 1 +Jan 14 04:30:46.337: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +Jan 14 04:30:46.340: INFO: Selector matched 1 pods for map[app:agnhost] +Jan 14 04:30:46.340: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
+Jan 14 04:30:46.340: INFO: wait on agnhost-primary startup in kubectl-7274 +Jan 14 04:30:46.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-7274 logs agnhost-primary-jdgh4 agnhost-primary' +Jan 14 04:30:46.412: INFO: stderr: "" +Jan 14 04:30:46.412: INFO: stdout: "Paused\n" +STEP: exposing RC 01/14/23 04:30:46.412 +Jan 14 04:30:46.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-7274 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' +Jan 14 04:30:46.490: INFO: stderr: "" +Jan 14 04:30:46.490: INFO: stdout: "service/rm2 exposed\n" +Jan 14 04:30:46.493: INFO: Service rm2 in namespace kubectl-7274 found. +STEP: exposing service 01/14/23 04:30:48.501 +Jan 14 04:30:48.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-7274 expose service rm2 --name=rm3 --port=2345 --target-port=6379' +Jan 14 04:30:48.577: INFO: stderr: "" +Jan 14 04:30:48.577: INFO: stdout: "service/rm3 exposed\n" +Jan 14 04:30:48.581: INFO: Service rm3 in namespace kubectl-7274 found. +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 +Jan 14 04:30:50.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 +STEP: Destroying namespace "kubectl-7274" for this suite. 01/14/23 04:30:50.593 +------------------------------ +• [SLOW TEST] [5.881 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Kubectl expose + test/e2e/kubectl/kubectl.go:1409 + should create services for rc [Conformance] + test/e2e/kubectl/kubectl.go:1415 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:30:44.718 + Jan 14 04:30:44.719: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename kubectl 01/14/23 04:30:44.719 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:30:44.813 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:30:44.815 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [It] should create services for rc [Conformance] + test/e2e/kubectl/kubectl.go:1415 + STEP: creating Agnhost RC 01/14/23 04:30:44.817 + Jan 14 04:30:44.818: INFO: namespace kubectl-7274 + Jan 14 04:30:44.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-7274 create -f -' + Jan 14 04:30:45.333: INFO: stderr: "" + Jan 14 04:30:45.333: INFO: stdout: "replicationcontroller/agnhost-primary created\n" + STEP: Waiting for Agnhost primary to start. 01/14/23 04:30:45.333 + Jan 14 04:30:46.337: INFO: Selector matched 1 pods for map[app:agnhost] + Jan 14 04:30:46.337: INFO: Found 1 / 1 + Jan 14 04:30:46.337: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 + Jan 14 04:30:46.340: INFO: Selector matched 1 pods for map[app:agnhost] + Jan 14 04:30:46.340: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
+ Jan 14 04:30:46.340: INFO: wait on agnhost-primary startup in kubectl-7274 + Jan 14 04:30:46.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-7274 logs agnhost-primary-jdgh4 agnhost-primary' + Jan 14 04:30:46.412: INFO: stderr: "" + Jan 14 04:30:46.412: INFO: stdout: "Paused\n" + STEP: exposing RC 01/14/23 04:30:46.412 + Jan 14 04:30:46.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-7274 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' + Jan 14 04:30:46.490: INFO: stderr: "" + Jan 14 04:30:46.490: INFO: stdout: "service/rm2 exposed\n" + Jan 14 04:30:46.493: INFO: Service rm2 in namespace kubectl-7274 found. + STEP: exposing service 01/14/23 04:30:48.501 + Jan 14 04:30:48.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-7274 expose service rm2 --name=rm3 --port=2345 --target-port=6379' + Jan 14 04:30:48.577: INFO: stderr: "" + Jan 14 04:30:48.577: INFO: stdout: "service/rm3 exposed\n" + Jan 14 04:30:48.581: INFO: Service rm3 in namespace kubectl-7274 found. + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 + Jan 14 04:30:50.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 + STEP: Destroying namespace "kubectl-7274" for this suite. 01/14/23 04:30:50.593 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + works for CRD preserving unknown fields at the schema root [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:194 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:30:50.6 +Jan 14 04:30:50.600: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename crd-publish-openapi 01/14/23 04:30:50.6 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:30:50.616 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:30:50.618 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[It] works for CRD preserving unknown fields at the schema root [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:194 +Jan 14 04:30:50.620: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: kubectl validation (kubectl create and apply) allows request with any unknown properties 01/14/23 04:30:52.387 +Jan 14 04:30:52.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-9775 --namespace=crd-publish-openapi-9775 create -f -' +Jan 14 04:30:52.931: INFO: stderr: "" +Jan 14 04:30:52.931: INFO: stdout: "e2e-test-crd-publish-openapi-3580-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" +Jan 14 04:30:52.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-9775 --namespace=crd-publish-openapi-9775 delete 
e2e-test-crd-publish-openapi-3580-crds test-cr' +Jan 14 04:30:53.022: INFO: stderr: "" +Jan 14 04:30:53.022: INFO: stdout: "e2e-test-crd-publish-openapi-3580-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" +Jan 14 04:30:53.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-9775 --namespace=crd-publish-openapi-9775 apply -f -' +Jan 14 04:30:53.519: INFO: stderr: "" +Jan 14 04:30:53.519: INFO: stdout: "e2e-test-crd-publish-openapi-3580-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" +Jan 14 04:30:53.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-9775 --namespace=crd-publish-openapi-9775 delete e2e-test-crd-publish-openapi-3580-crds test-cr' +Jan 14 04:30:53.592: INFO: stderr: "" +Jan 14 04:30:53.592: INFO: stdout: "e2e-test-crd-publish-openapi-3580-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" +STEP: kubectl explain works to explain CR 01/14/23 04:30:53.592 +Jan 14 04:30:53.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-9775 explain e2e-test-crd-publish-openapi-3580-crds' +Jan 14 04:30:54.094: INFO: stderr: "" +Jan 14 04:30:54.094: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-3580-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Jan 14 04:30:55.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "crd-publish-openapi-9775" for this suite. 
01/14/23 04:30:55.854 +------------------------------ +• [SLOW TEST] [5.261 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + works for CRD preserving unknown fields at the schema root [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:194 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:30:50.6 + Jan 14 04:30:50.600: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename crd-publish-openapi 01/14/23 04:30:50.6 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:30:50.616 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:30:50.618 + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [It] works for CRD preserving unknown fields at the schema root [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:194 + Jan 14 04:30:50.620: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: kubectl validation (kubectl create and apply) allows request with any unknown properties 01/14/23 04:30:52.387 + Jan 14 04:30:52.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-9775 --namespace=crd-publish-openapi-9775 create -f -' + Jan 14 04:30:52.931: INFO: stderr: "" + Jan 14 04:30:52.931: INFO: stdout: "e2e-test-crd-publish-openapi-3580-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" + Jan 14 04:30:52.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-9775 --namespace=crd-publish-openapi-9775 delete e2e-test-crd-publish-openapi-3580-crds test-cr' + Jan 14 04:30:53.022: INFO: stderr: "" + Jan 14 04:30:53.022: INFO: stdout: "e2e-test-crd-publish-openapi-3580-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" + Jan 14 04:30:53.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-9775 --namespace=crd-publish-openapi-9775 apply -f -' + Jan 14 04:30:53.519: INFO: stderr: "" + Jan 14 04:30:53.519: INFO: stdout: "e2e-test-crd-publish-openapi-3580-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" + Jan 14 04:30:53.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-9775 --namespace=crd-publish-openapi-9775 delete e2e-test-crd-publish-openapi-3580-crds test-cr' + Jan 14 04:30:53.592: INFO: stderr: "" + Jan 14 04:30:53.592: INFO: stdout: "e2e-test-crd-publish-openapi-3580-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" + STEP: kubectl explain works to explain CR 01/14/23 04:30:53.592 + Jan 14 04:30:53.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=crd-publish-openapi-9775 explain e2e-test-crd-publish-openapi-3580-crds' + Jan 14 04:30:54.094: INFO: stderr: "" + Jan 14 04:30:54.094: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-3580-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" + [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Jan 
14 04:30:55.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "crd-publish-openapi-9775" for this suite. 01/14/23 04:30:55.854 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSS +------------------------------ +[sig-node] NoExecuteTaintManager Single Pod [Serial] + removing taint cancels eviction [Disruptive] [Conformance] + test/e2e/node/taints.go:293 +[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:30:55.861 +Jan 14 04:30:55.861: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename taint-single-pod 01/14/23 04:30:55.861 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:30:55.875 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:30:55.877 +[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + test/e2e/node/taints.go:170 +Jan 14 04:30:55.879: INFO: Waiting up to 1m0s for all nodes to be ready +Jan 14 04:31:55.917: INFO: Waiting for terminating namespaces to be deleted... +[It] removing taint cancels eviction [Disruptive] [Conformance] + test/e2e/node/taints.go:293 +Jan 14 04:31:55.920: INFO: Starting informer... +STEP: Starting pod... 01/14/23 04:31:55.92 +Jan 14 04:31:56.135: INFO: Pod is running on 10.0.1.106. Tainting Node +STEP: Trying to apply a taint on the Node 01/14/23 04:31:56.135 +STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute 01/14/23 04:31:56.15 +STEP: Waiting short time to make sure Pod is queued for deletion 01/14/23 04:31:56.153 +Jan 14 04:31:56.153: INFO: Pod wasn't evicted. Proceeding +Jan 14 04:31:56.153: INFO: Removing taint from Node +STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute 01/14/23 04:31:56.165 +STEP: Waiting some time to make sure that toleration time passed. 01/14/23 04:31:56.168 +Jan 14 04:33:11.168: INFO: Pod wasn't evicted. Test successful +[AfterEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + test/e2e/framework/node/init/init.go:32 +Jan 14 04:33:11.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Single Pod [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Single Pod [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Single Pod [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "taint-single-pod-9597" for this suite. 
01/14/23 04:33:11.175 +------------------------------ +• [SLOW TEST] [135.321 seconds] +[sig-node] NoExecuteTaintManager Single Pod [Serial] +test/e2e/node/framework.go:23 + removing taint cancels eviction [Disruptive] [Conformance] + test/e2e/node/taints.go:293 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:30:55.861 + Jan 14 04:30:55.861: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename taint-single-pod 01/14/23 04:30:55.861 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:30:55.875 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:30:55.877 + [BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + test/e2e/node/taints.go:170 + Jan 14 04:30:55.879: INFO: Waiting up to 1m0s for all nodes to be ready + Jan 14 04:31:55.917: INFO: Waiting for terminating namespaces to be deleted... + [It] removing taint cancels eviction [Disruptive] [Conformance] + test/e2e/node/taints.go:293 + Jan 14 04:31:55.920: INFO: Starting informer... + STEP: Starting pod... 01/14/23 04:31:55.92 + Jan 14 04:31:56.135: INFO: Pod is running on 10.0.1.106. Tainting Node + STEP: Trying to apply a taint on the Node 01/14/23 04:31:56.135 + STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute 01/14/23 04:31:56.15 + STEP: Waiting short time to make sure Pod is queued for deletion 01/14/23 04:31:56.153 + Jan 14 04:31:56.153: INFO: Pod wasn't evicted. Proceeding + Jan 14 04:31:56.153: INFO: Removing taint from Node + STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute 01/14/23 04:31:56.165 + STEP: Waiting some time to make sure that toleration time passed. 01/14/23 04:31:56.168 + Jan 14 04:33:11.168: INFO: Pod wasn't evicted. Test successful + [AfterEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] + test/e2e/framework/node/init/init.go:32 + Jan 14 04:33:11.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Single Pod [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Single Pod [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Single Pod [Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "taint-single-pod-9597" for this suite. 
01/14/23 04:33:11.175 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] Certificates API [Privileged:ClusterAdmin] + should support CSR API operations [Conformance] + test/e2e/auth/certificates.go:200 +[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:33:11.183 +Jan 14 04:33:11.183: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename certificates 01/14/23 04:33:11.183 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:33:11.198 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:33:11.201 +[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[It] should support CSR API operations [Conformance] + test/e2e/auth/certificates.go:200 +STEP: getting /apis 01/14/23 04:33:11.988 +STEP: getting /apis/certificates.k8s.io 01/14/23 04:33:11.991 +STEP: getting /apis/certificates.k8s.io/v1 01/14/23 04:33:11.992 +STEP: creating 01/14/23 04:33:11.993 +STEP: getting 01/14/23 04:33:12.048 +STEP: listing 01/14/23 04:33:12.051 +STEP: watching 01/14/23 04:33:12.054 +Jan 14 04:33:12.054: INFO: starting watch +STEP: patching 01/14/23 04:33:12.055 +STEP: updating 01/14/23 04:33:12.09 +Jan 14 04:33:12.096: INFO: waiting for watch events with expected annotations +Jan 14 04:33:12.096: INFO: saw patched and updated annotations +STEP: getting /approval 01/14/23 04:33:12.096 +STEP: patching /approval 01/14/23 04:33:12.099 +STEP: updating /approval 01/14/23 04:33:12.105 +STEP: getting /status 01/14/23 04:33:12.112 +STEP: patching /status 01/14/23 04:33:12.115 +STEP: updating /status 01/14/23 04:33:12.122 +STEP: deleting 01/14/23 04:33:12.129 +STEP: deleting a collection 01/14/23 04:33:12.139 +[AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Jan 14 04:33:12.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-auth] Certificates API [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-auth] Certificates API [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-auth] Certificates API [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "certificates-3843" for this suite. 
01/14/23 04:33:12.158 +------------------------------ +• [0.980 seconds] +[sig-auth] Certificates API [Privileged:ClusterAdmin] +test/e2e/auth/framework.go:23 + should support CSR API operations [Conformance] + test/e2e/auth/certificates.go:200 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:33:11.183 + Jan 14 04:33:11.183: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename certificates 01/14/23 04:33:11.183 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:33:11.198 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:33:11.201 + [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [It] should support CSR API operations [Conformance] + test/e2e/auth/certificates.go:200 + STEP: getting /apis 01/14/23 04:33:11.988 + STEP: getting /apis/certificates.k8s.io 01/14/23 04:33:11.991 + STEP: getting /apis/certificates.k8s.io/v1 01/14/23 04:33:11.992 + STEP: creating 01/14/23 04:33:11.993 + STEP: getting 01/14/23 04:33:12.048 + STEP: listing 01/14/23 04:33:12.051 + STEP: watching 01/14/23 04:33:12.054 + Jan 14 04:33:12.054: INFO: starting watch + STEP: patching 01/14/23 04:33:12.055 + STEP: updating 01/14/23 04:33:12.09 + Jan 14 04:33:12.096: INFO: waiting for watch events with expected annotations + Jan 14 04:33:12.096: INFO: saw patched and updated annotations + STEP: getting /approval 01/14/23 04:33:12.096 + STEP: patching /approval 01/14/23 04:33:12.099 + STEP: updating /approval 01/14/23 04:33:12.105 + STEP: getting /status 01/14/23 04:33:12.112 + STEP: patching /status 01/14/23 04:33:12.115 + STEP: updating /status 01/14/23 04:33:12.122 + STEP: deleting 01/14/23 04:33:12.129 + STEP: deleting a collection 01/14/23 04:33:12.139 + [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Jan 14 04:33:12.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-auth] Certificates API [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-auth] Certificates API [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-auth] Certificates API [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "certificates-3843" for this suite. 
01/14/23 04:33:12.158 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-storage] Downward API volume + should update annotations on modification [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:162 +[BeforeEach] [sig-storage] Downward API volume + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:33:12.163 +Jan 14 04:33:12.163: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename downward-api 01/14/23 04:33:12.164 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:33:12.178 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:33:12.18 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 +[It] should update annotations on modification [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:162 +STEP: Creating the pod 01/14/23 04:33:12.182 +Jan 14 04:33:12.192: INFO: Waiting up to 5m0s for pod "annotationupdate408515c4-3fce-46f1-8f3b-f20ee55d15f4" in namespace "downward-api-9088" to be "running and ready" +Jan 14 04:33:12.197: INFO: Pod "annotationupdate408515c4-3fce-46f1-8f3b-f20ee55d15f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.752513ms +Jan 14 04:33:12.197: INFO: The phase of Pod annotationupdate408515c4-3fce-46f1-8f3b-f20ee55d15f4 is Pending, waiting for it to be Running (with Ready = true) +Jan 14 04:33:14.202: INFO: Pod "annotationupdate408515c4-3fce-46f1-8f3b-f20ee55d15f4": Phase="Running", Reason="", readiness=true. Elapsed: 2.010132067s +Jan 14 04:33:14.202: INFO: The phase of Pod annotationupdate408515c4-3fce-46f1-8f3b-f20ee55d15f4 is Running (Ready = true) +Jan 14 04:33:14.202: INFO: Pod "annotationupdate408515c4-3fce-46f1-8f3b-f20ee55d15f4" satisfied condition "running and ready" +Jan 14 04:33:14.733: INFO: Successfully updated pod "annotationupdate408515c4-3fce-46f1-8f3b-f20ee55d15f4" +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 +Jan 14 04:33:16.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 +STEP: Destroying namespace "downward-api-9088" for this suite. 
01/14/23 04:33:16.751 +------------------------------ +• [4.593 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should update annotations on modification [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:162 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Downward API volume + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:33:12.163 + Jan 14 04:33:12.163: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename downward-api 01/14/23 04:33:12.164 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:33:12.178 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:33:12.18 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 + [It] should update annotations on modification [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:162 + STEP: Creating the pod 01/14/23 04:33:12.182 + Jan 14 04:33:12.192: INFO: Waiting up to 5m0s for pod "annotationupdate408515c4-3fce-46f1-8f3b-f20ee55d15f4" in namespace "downward-api-9088" to be "running and ready" + Jan 14 04:33:12.197: INFO: Pod "annotationupdate408515c4-3fce-46f1-8f3b-f20ee55d15f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.752513ms + Jan 14 04:33:12.197: INFO: The phase of Pod annotationupdate408515c4-3fce-46f1-8f3b-f20ee55d15f4 is Pending, waiting for it to be Running (with Ready = true) + Jan 14 04:33:14.202: INFO: Pod "annotationupdate408515c4-3fce-46f1-8f3b-f20ee55d15f4": Phase="Running", Reason="", readiness=true. Elapsed: 2.010132067s + Jan 14 04:33:14.202: INFO: The phase of Pod annotationupdate408515c4-3fce-46f1-8f3b-f20ee55d15f4 is Running (Ready = true) + Jan 14 04:33:14.202: INFO: Pod "annotationupdate408515c4-3fce-46f1-8f3b-f20ee55d15f4" satisfied condition "running and ready" + Jan 14 04:33:14.733: INFO: Successfully updated pod "annotationupdate408515c4-3fce-46f1-8f3b-f20ee55d15f4" + [AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 + Jan 14 04:33:16.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 + STEP: Destroying namespace "downward-api-9088" for this suite. 
01/14/23 04:33:16.751 + << End Captured GinkgoWriter Output +------------------------------ +[sig-node] Secrets + should be consumable via the environment [NodeConformance] [Conformance] + test/e2e/common/node/secrets.go:95 +[BeforeEach] [sig-node] Secrets + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:33:16.757 +Jan 14 04:33:16.757: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename secrets 01/14/23 04:33:16.757 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:33:16.771 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:33:16.773 +[BeforeEach] [sig-node] Secrets + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable via the environment [NodeConformance] [Conformance] + test/e2e/common/node/secrets.go:95 +STEP: creating secret secrets-1792/secret-test-40c67696-4563-4b09-8026-091beb3f201a 01/14/23 04:33:16.775 +STEP: Creating a pod to test consume secrets 01/14/23 04:33:16.779 +Jan 14 04:33:16.786: INFO: Waiting up to 5m0s for pod "pod-configmaps-55653800-1158-4da8-98eb-7b3bea503088" in namespace "secrets-1792" to be "Succeeded or Failed" +Jan 14 04:33:16.788: INFO: Pod "pod-configmaps-55653800-1158-4da8-98eb-7b3bea503088": Phase="Pending", Reason="", readiness=false. Elapsed: 2.657588ms +Jan 14 04:33:18.793: INFO: Pod "pod-configmaps-55653800-1158-4da8-98eb-7b3bea503088": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007036102s +Jan 14 04:33:20.793: INFO: Pod "pod-configmaps-55653800-1158-4da8-98eb-7b3bea503088": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007101749s +STEP: Saw pod success 01/14/23 04:33:20.793 +Jan 14 04:33:20.793: INFO: Pod "pod-configmaps-55653800-1158-4da8-98eb-7b3bea503088" satisfied condition "Succeeded or Failed" +Jan 14 04:33:20.796: INFO: Trying to get logs from node 10.0.1.99 pod pod-configmaps-55653800-1158-4da8-98eb-7b3bea503088 container env-test: +STEP: delete the pod 01/14/23 04:33:20.811 +Jan 14 04:33:20.825: INFO: Waiting for pod pod-configmaps-55653800-1158-4da8-98eb-7b3bea503088 to disappear +Jan 14 04:33:20.828: INFO: Pod pod-configmaps-55653800-1158-4da8-98eb-7b3bea503088 no longer exists +[AfterEach] [sig-node] Secrets + test/e2e/framework/node/init/init.go:32 +Jan 14 04:33:20.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Secrets + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Secrets + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Secrets + tear down framework | framework.go:193 +STEP: Destroying namespace "secrets-1792" for this suite. 
01/14/23 04:33:20.832 +------------------------------ +• [4.083 seconds] +[sig-node] Secrets +test/e2e/common/node/framework.go:23 + should be consumable via the environment [NodeConformance] [Conformance] + test/e2e/common/node/secrets.go:95 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Secrets + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:33:16.757 + Jan 14 04:33:16.757: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename secrets 01/14/23 04:33:16.757 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:33:16.771 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:33:16.773 + [BeforeEach] [sig-node] Secrets + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable via the environment [NodeConformance] [Conformance] + test/e2e/common/node/secrets.go:95 + STEP: creating secret secrets-1792/secret-test-40c67696-4563-4b09-8026-091beb3f201a 01/14/23 04:33:16.775 + STEP: Creating a pod to test consume secrets 01/14/23 04:33:16.779 + Jan 14 04:33:16.786: INFO: Waiting up to 5m0s for pod "pod-configmaps-55653800-1158-4da8-98eb-7b3bea503088" in namespace "secrets-1792" to be "Succeeded or Failed" + Jan 14 04:33:16.788: INFO: Pod "pod-configmaps-55653800-1158-4da8-98eb-7b3bea503088": Phase="Pending", Reason="", readiness=false. Elapsed: 2.657588ms + Jan 14 04:33:18.793: INFO: Pod "pod-configmaps-55653800-1158-4da8-98eb-7b3bea503088": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007036102s + Jan 14 04:33:20.793: INFO: Pod "pod-configmaps-55653800-1158-4da8-98eb-7b3bea503088": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007101749s + STEP: Saw pod success 01/14/23 04:33:20.793 + Jan 14 04:33:20.793: INFO: Pod "pod-configmaps-55653800-1158-4da8-98eb-7b3bea503088" satisfied condition "Succeeded or Failed" + Jan 14 04:33:20.796: INFO: Trying to get logs from node 10.0.1.99 pod pod-configmaps-55653800-1158-4da8-98eb-7b3bea503088 container env-test: + STEP: delete the pod 01/14/23 04:33:20.811 + Jan 14 04:33:20.825: INFO: Waiting for pod pod-configmaps-55653800-1158-4da8-98eb-7b3bea503088 to disappear + Jan 14 04:33:20.828: INFO: Pod pod-configmaps-55653800-1158-4da8-98eb-7b3bea503088 no longer exists + [AfterEach] [sig-node] Secrets + test/e2e/framework/node/init/init.go:32 + Jan 14 04:33:20.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Secrets + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Secrets + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Secrets + tear down framework | framework.go:193 + STEP: Destroying namespace "secrets-1792" for this suite. 
01/14/23 04:33:20.832 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + should validate Replicaset Status endpoints [Conformance] + test/e2e/apps/replica_set.go:176 +[BeforeEach] [sig-apps] ReplicaSet + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:33:20.842 +Jan 14 04:33:20.842: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename replicaset 01/14/23 04:33:20.843 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:33:20.856 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:33:20.858 +[BeforeEach] [sig-apps] ReplicaSet + test/e2e/framework/metrics/init/init.go:31 +[It] should validate Replicaset Status endpoints [Conformance] + test/e2e/apps/replica_set.go:176 +STEP: Create a Replicaset 01/14/23 04:33:20.863 +STEP: Verify that the required pods have come up. 01/14/23 04:33:20.869 +Jan 14 04:33:20.871: INFO: Pod name sample-pod: Found 0 pods out of 1 +Jan 14 04:33:25.876: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running 01/14/23 04:33:25.876 +STEP: Getting /status 01/14/23 04:33:25.876 +Jan 14 04:33:25.879: INFO: Replicaset test-rs has Conditions: [] +STEP: updating the Replicaset Status 01/14/23 04:33:25.879 +Jan 14 04:33:25.888: INFO: updatedStatus.Conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} +STEP: watching for the ReplicaSet status to be updated 01/14/23 04:33:25.888 +Jan 14 04:33:25.890: INFO: Observed &ReplicaSet event: ADDED +Jan 14 04:33:25.890: INFO: Observed &ReplicaSet event: MODIFIED +Jan 14 04:33:25.890: INFO: Observed &ReplicaSet event: MODIFIED +Jan 14 04:33:25.890: INFO: Observed &ReplicaSet event: MODIFIED +Jan 14 04:33:25.890: INFO: Found replicaset test-rs in namespace replicaset-2628 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] +Jan 14 04:33:25.890: INFO: Replicaset test-rs has an updated status +STEP: patching the Replicaset Status 01/14/23 04:33:25.89 +Jan 14 04:33:25.890: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} +Jan 14 04:33:25.896: INFO: Patched status conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} +STEP: watching for the Replicaset status to be patched 01/14/23 04:33:25.896 +Jan 14 04:33:25.897: INFO: Observed &ReplicaSet event: ADDED +Jan 14 04:33:25.897: INFO: Observed &ReplicaSet event: MODIFIED +Jan 14 04:33:25.897: INFO: Observed &ReplicaSet event: MODIFIED +Jan 14 04:33:25.897: INFO: Observed &ReplicaSet event: MODIFIED +Jan 14 04:33:25.897: INFO: Observed replicaset test-rs in namespace replicaset-2628 with annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} +Jan 14 04:33:25.897: INFO: Observed &ReplicaSet event: MODIFIED +Jan 14 04:33:25.897: INFO: Found replicaset test-rs in namespace replicaset-2628 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 
UTC } +Jan 14 04:33:25.897: INFO: Replicaset test-rs has a patched status +[AfterEach] [sig-apps] ReplicaSet + test/e2e/framework/node/init/init.go:32 +Jan 14 04:33:25.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] ReplicaSet + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] ReplicaSet + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] ReplicaSet + tear down framework | framework.go:193 +STEP: Destroying namespace "replicaset-2628" for this suite. 01/14/23 04:33:25.902 +------------------------------ +• [SLOW TEST] [5.066 seconds] +[sig-apps] ReplicaSet +test/e2e/apps/framework.go:23 + should validate Replicaset Status endpoints [Conformance] + test/e2e/apps/replica_set.go:176 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] ReplicaSet + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:33:20.842 + Jan 14 04:33:20.842: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename replicaset 01/14/23 04:33:20.843 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:33:20.856 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:33:20.858 + [BeforeEach] [sig-apps] ReplicaSet + test/e2e/framework/metrics/init/init.go:31 + [It] should validate Replicaset Status endpoints [Conformance] + test/e2e/apps/replica_set.go:176 + STEP: Create a Replicaset 01/14/23 04:33:20.863 + STEP: Verify that the required pods have come up. 01/14/23 04:33:20.869 + Jan 14 04:33:20.871: INFO: Pod name sample-pod: Found 0 pods out of 1 + Jan 14 04:33:25.876: INFO: Pod name sample-pod: Found 1 pods out of 1 + STEP: ensuring each pod is running 01/14/23 04:33:25.876 + STEP: Getting /status 01/14/23 04:33:25.876 + Jan 14 04:33:25.879: INFO: Replicaset test-rs has Conditions: [] + STEP: updating the Replicaset Status 01/14/23 04:33:25.879 + Jan 14 04:33:25.888: INFO: updatedStatus.Conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} + STEP: watching for the ReplicaSet status to be updated 01/14/23 04:33:25.888 + Jan 14 04:33:25.890: INFO: Observed &ReplicaSet event: ADDED + Jan 14 04:33:25.890: INFO: Observed &ReplicaSet event: MODIFIED + Jan 14 04:33:25.890: INFO: Observed &ReplicaSet event: MODIFIED + Jan 14 04:33:25.890: INFO: Observed &ReplicaSet event: MODIFIED + Jan 14 04:33:25.890: INFO: Found replicaset test-rs in namespace replicaset-2628 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] + Jan 14 04:33:25.890: INFO: Replicaset test-rs has an updated status + STEP: patching the Replicaset Status 01/14/23 04:33:25.89 + Jan 14 04:33:25.890: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} + Jan 14 04:33:25.896: INFO: Patched status conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} + STEP: watching for the Replicaset status to be patched 01/14/23 04:33:25.896 + Jan 14 04:33:25.897: INFO: Observed &ReplicaSet event: ADDED + Jan 14 04:33:25.897: INFO: Observed &ReplicaSet event: MODIFIED + Jan 14 04:33:25.897: INFO: Observed 
&ReplicaSet event: MODIFIED + Jan 14 04:33:25.897: INFO: Observed &ReplicaSet event: MODIFIED + Jan 14 04:33:25.897: INFO: Observed replicaset test-rs in namespace replicaset-2628 with annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} + Jan 14 04:33:25.897: INFO: Observed &ReplicaSet event: MODIFIED + Jan 14 04:33:25.897: INFO: Found replicaset test-rs in namespace replicaset-2628 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC } + Jan 14 04:33:25.897: INFO: Replicaset test-rs has a patched status + [AfterEach] [sig-apps] ReplicaSet + test/e2e/framework/node/init/init.go:32 + Jan 14 04:33:25.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] ReplicaSet + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] ReplicaSet + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] ReplicaSet + tear down framework | framework.go:193 + STEP: Destroying namespace "replicaset-2628" for this suite. 01/14/23 04:33:25.902 + << End Captured GinkgoWriter Output +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:56 +[BeforeEach] [sig-storage] Projected secret + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:33:25.908 +Jan 14 04:33:25.908: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename projected 01/14/23 04:33:25.909 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:33:25.921 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:33:25.923 +[BeforeEach] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:56 +STEP: Creating projection with secret that has name projected-secret-test-4cae7a97-387a-4892-bec4-f968e8e39602 01/14/23 04:33:25.926 +STEP: Creating a pod to test consume secrets 01/14/23 04:33:25.93 +Jan 14 04:33:25.939: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ef66a4a7-962f-4731-bce8-08f71e0b2896" in namespace "projected-6067" to be "Succeeded or Failed" +Jan 14 04:33:25.941: INFO: Pod "pod-projected-secrets-ef66a4a7-962f-4731-bce8-08f71e0b2896": Phase="Pending", Reason="", readiness=false. Elapsed: 2.802699ms +Jan 14 04:33:27.946: INFO: Pod "pod-projected-secrets-ef66a4a7-962f-4731-bce8-08f71e0b2896": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006887342s +Jan 14 04:33:29.949: INFO: Pod "pod-projected-secrets-ef66a4a7-962f-4731-bce8-08f71e0b2896": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009876131s +STEP: Saw pod success 01/14/23 04:33:29.949 +Jan 14 04:33:29.949: INFO: Pod "pod-projected-secrets-ef66a4a7-962f-4731-bce8-08f71e0b2896" satisfied condition "Succeeded or Failed" +Jan 14 04:33:29.952: INFO: Trying to get logs from node 10.0.1.106 pod pod-projected-secrets-ef66a4a7-962f-4731-bce8-08f71e0b2896 container projected-secret-volume-test: +STEP: delete the pod 01/14/23 04:33:29.957 +Jan 14 04:33:29.970: INFO: Waiting for pod pod-projected-secrets-ef66a4a7-962f-4731-bce8-08f71e0b2896 to disappear +Jan 14 04:33:29.973: INFO: Pod pod-projected-secrets-ef66a4a7-962f-4731-bce8-08f71e0b2896 no longer exists +[AfterEach] [sig-storage] Projected secret + test/e2e/framework/node/init/init.go:32 +Jan 14 04:33:29.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected secret + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected secret + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-6067" for this suite. 01/14/23 04:33:29.978 +------------------------------ +• [4.075 seconds] +[sig-storage] Projected secret +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:56 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected secret + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:33:25.908 + Jan 14 04:33:25.908: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename projected 01/14/23 04:33:25.909 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:33:25.921 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:33:25.923 + [BeforeEach] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:56 + STEP: Creating projection with secret that has name projected-secret-test-4cae7a97-387a-4892-bec4-f968e8e39602 01/14/23 04:33:25.926 + STEP: Creating a pod to test consume secrets 01/14/23 04:33:25.93 + Jan 14 04:33:25.939: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ef66a4a7-962f-4731-bce8-08f71e0b2896" in namespace "projected-6067" to be "Succeeded or Failed" + Jan 14 04:33:25.941: INFO: Pod "pod-projected-secrets-ef66a4a7-962f-4731-bce8-08f71e0b2896": Phase="Pending", Reason="", readiness=false. Elapsed: 2.802699ms + Jan 14 04:33:27.946: INFO: Pod "pod-projected-secrets-ef66a4a7-962f-4731-bce8-08f71e0b2896": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006887342s + Jan 14 04:33:29.949: INFO: Pod "pod-projected-secrets-ef66a4a7-962f-4731-bce8-08f71e0b2896": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009876131s + STEP: Saw pod success 01/14/23 04:33:29.949 + Jan 14 04:33:29.949: INFO: Pod "pod-projected-secrets-ef66a4a7-962f-4731-bce8-08f71e0b2896" satisfied condition "Succeeded or Failed" + Jan 14 04:33:29.952: INFO: Trying to get logs from node 10.0.1.106 pod pod-projected-secrets-ef66a4a7-962f-4731-bce8-08f71e0b2896 container projected-secret-volume-test: + STEP: delete the pod 01/14/23 04:33:29.957 + Jan 14 04:33:29.970: INFO: Waiting for pod pod-projected-secrets-ef66a4a7-962f-4731-bce8-08f71e0b2896 to disappear + Jan 14 04:33:29.973: INFO: Pod pod-projected-secrets-ef66a4a7-962f-4731-bce8-08f71e0b2896 no longer exists + [AfterEach] [sig-storage] Projected secret + test/e2e/framework/node/init/init.go:32 + Jan 14 04:33:29.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected secret + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected secret + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-6067" for this suite. 01/14/23 04:33:29.978 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:74 +[BeforeEach] [sig-storage] ConfigMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:33:29.983 +Jan 14 04:33:29.983: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename configmap 01/14/23 04:33:29.984 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:33:29.998 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:33:30 +[BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:74 +STEP: Creating configMap with name configmap-test-volume-5807f263-afe1-46fb-970e-3f651668c80e 01/14/23 04:33:30.002 +STEP: Creating a pod to test consume configMaps 01/14/23 04:33:30.007 +Jan 14 04:33:30.014: INFO: Waiting up to 5m0s for pod "pod-configmaps-cdb1ee22-1652-4cce-8876-c44f33c4e87f" in namespace "configmap-4309" to be "Succeeded or Failed" +Jan 14 04:33:30.017: INFO: Pod "pod-configmaps-cdb1ee22-1652-4cce-8876-c44f33c4e87f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.810036ms +Jan 14 04:33:32.022: INFO: Pod "pod-configmaps-cdb1ee22-1652-4cce-8876-c44f33c4e87f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008193652s +Jan 14 04:33:34.023: INFO: Pod "pod-configmaps-cdb1ee22-1652-4cce-8876-c44f33c4e87f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008407581s +STEP: Saw pod success 01/14/23 04:33:34.023 +Jan 14 04:33:34.023: INFO: Pod "pod-configmaps-cdb1ee22-1652-4cce-8876-c44f33c4e87f" satisfied condition "Succeeded or Failed" +Jan 14 04:33:34.026: INFO: Trying to get logs from node 10.0.1.106 pod pod-configmaps-cdb1ee22-1652-4cce-8876-c44f33c4e87f container agnhost-container: +STEP: delete the pod 01/14/23 04:33:34.031 +Jan 14 04:33:34.042: INFO: Waiting for pod pod-configmaps-cdb1ee22-1652-4cce-8876-c44f33c4e87f to disappear +Jan 14 04:33:34.044: INFO: Pod pod-configmaps-cdb1ee22-1652-4cce-8876-c44f33c4e87f no longer exists +[AfterEach] [sig-storage] ConfigMap + test/e2e/framework/node/init/init.go:32 +Jan 14 04:33:34.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] ConfigMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] ConfigMap + tear down framework | framework.go:193 +STEP: Destroying namespace "configmap-4309" for this suite. 01/14/23 04:33:34.049 +------------------------------ +• [4.071 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:74 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] ConfigMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:33:29.983 + Jan 14 04:33:29.983: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename configmap 01/14/23 04:33:29.984 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:33:29.998 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:33:30 + [BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:74 + STEP: Creating configMap with name configmap-test-volume-5807f263-afe1-46fb-970e-3f651668c80e 01/14/23 04:33:30.002 + STEP: Creating a pod to test consume configMaps 01/14/23 04:33:30.007 + Jan 14 04:33:30.014: INFO: Waiting up to 5m0s for pod "pod-configmaps-cdb1ee22-1652-4cce-8876-c44f33c4e87f" in namespace "configmap-4309" to be "Succeeded or Failed" + Jan 14 04:33:30.017: INFO: Pod "pod-configmaps-cdb1ee22-1652-4cce-8876-c44f33c4e87f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.810036ms + Jan 14 04:33:32.022: INFO: Pod "pod-configmaps-cdb1ee22-1652-4cce-8876-c44f33c4e87f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008193652s + Jan 14 04:33:34.023: INFO: Pod "pod-configmaps-cdb1ee22-1652-4cce-8876-c44f33c4e87f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008407581s + STEP: Saw pod success 01/14/23 04:33:34.023 + Jan 14 04:33:34.023: INFO: Pod "pod-configmaps-cdb1ee22-1652-4cce-8876-c44f33c4e87f" satisfied condition "Succeeded or Failed" + Jan 14 04:33:34.026: INFO: Trying to get logs from node 10.0.1.106 pod pod-configmaps-cdb1ee22-1652-4cce-8876-c44f33c4e87f container agnhost-container: + STEP: delete the pod 01/14/23 04:33:34.031 + Jan 14 04:33:34.042: INFO: Waiting for pod pod-configmaps-cdb1ee22-1652-4cce-8876-c44f33c4e87f to disappear + Jan 14 04:33:34.044: INFO: Pod pod-configmaps-cdb1ee22-1652-4cce-8876-c44f33c4e87f no longer exists + [AfterEach] [sig-storage] ConfigMap + test/e2e/framework/node/init/init.go:32 + Jan 14 04:33:34.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] ConfigMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] ConfigMap + tear down framework | framework.go:193 + STEP: Destroying namespace "configmap-4309" for this suite. 01/14/23 04:33:34.049 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's cpu request [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:221 +[BeforeEach] [sig-storage] Downward API volume + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:33:34.055 +Jan 14 04:33:34.055: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename downward-api 01/14/23 04:33:34.056 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:33:34.069 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:33:34.071 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 +[It] should provide container's cpu request [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:221 +STEP: Creating a pod to test downward API volume plugin 01/14/23 04:33:34.075 +Jan 14 04:33:34.085: INFO: Waiting up to 5m0s for pod "downwardapi-volume-04597ee5-350b-4394-84bb-01f5827ba398" in namespace "downward-api-9961" to be "Succeeded or Failed" +Jan 14 04:33:34.089: INFO: Pod "downwardapi-volume-04597ee5-350b-4394-84bb-01f5827ba398": Phase="Pending", Reason="", readiness=false. Elapsed: 3.88303ms +Jan 14 04:33:36.094: INFO: Pod "downwardapi-volume-04597ee5-350b-4394-84bb-01f5827ba398": Phase="Running", Reason="", readiness=false. Elapsed: 2.008873856s +Jan 14 04:33:38.095: INFO: Pod "downwardapi-volume-04597ee5-350b-4394-84bb-01f5827ba398": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009700084s +STEP: Saw pod success 01/14/23 04:33:38.095 +Jan 14 04:33:38.095: INFO: Pod "downwardapi-volume-04597ee5-350b-4394-84bb-01f5827ba398" satisfied condition "Succeeded or Failed" +Jan 14 04:33:38.098: INFO: Trying to get logs from node 10.0.1.106 pod downwardapi-volume-04597ee5-350b-4394-84bb-01f5827ba398 container client-container: +STEP: delete the pod 01/14/23 04:33:38.103 +Jan 14 04:33:38.115: INFO: Waiting for pod downwardapi-volume-04597ee5-350b-4394-84bb-01f5827ba398 to disappear +Jan 14 04:33:38.118: INFO: Pod downwardapi-volume-04597ee5-350b-4394-84bb-01f5827ba398 no longer exists +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 +Jan 14 04:33:38.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 +STEP: Destroying namespace "downward-api-9961" for this suite. 01/14/23 04:33:38.122 +------------------------------ +• [4.072 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should provide container's cpu request [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:221 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Downward API volume + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:33:34.055 + Jan 14 04:33:34.055: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename downward-api 01/14/23 04:33:34.056 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:33:34.069 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:33:34.071 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 + [It] should provide container's cpu request [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:221 + STEP: Creating a pod to test downward API volume plugin 01/14/23 04:33:34.075 + Jan 14 04:33:34.085: INFO: Waiting up to 5m0s for pod "downwardapi-volume-04597ee5-350b-4394-84bb-01f5827ba398" in namespace "downward-api-9961" to be "Succeeded or Failed" + Jan 14 04:33:34.089: INFO: Pod "downwardapi-volume-04597ee5-350b-4394-84bb-01f5827ba398": Phase="Pending", Reason="", readiness=false. Elapsed: 3.88303ms + Jan 14 04:33:36.094: INFO: Pod "downwardapi-volume-04597ee5-350b-4394-84bb-01f5827ba398": Phase="Running", Reason="", readiness=false. Elapsed: 2.008873856s + Jan 14 04:33:38.095: INFO: Pod "downwardapi-volume-04597ee5-350b-4394-84bb-01f5827ba398": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009700084s + STEP: Saw pod success 01/14/23 04:33:38.095 + Jan 14 04:33:38.095: INFO: Pod "downwardapi-volume-04597ee5-350b-4394-84bb-01f5827ba398" satisfied condition "Succeeded or Failed" + Jan 14 04:33:38.098: INFO: Trying to get logs from node 10.0.1.106 pod downwardapi-volume-04597ee5-350b-4394-84bb-01f5827ba398 container client-container: + STEP: delete the pod 01/14/23 04:33:38.103 + Jan 14 04:33:38.115: INFO: Waiting for pod downwardapi-volume-04597ee5-350b-4394-84bb-01f5827ba398 to disappear + Jan 14 04:33:38.118: INFO: Pod downwardapi-volume-04597ee5-350b-4394-84bb-01f5827ba398 no longer exists + [AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 + Jan 14 04:33:38.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 + STEP: Destroying namespace "downward-api-9961" for this suite. 01/14/23 04:33:38.122 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSS +------------------------------ +[sig-node] PreStop + should call prestop when killing a pod [Conformance] + test/e2e/node/pre_stop.go:168 +[BeforeEach] [sig-node] PreStop + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:33:38.131 +Jan 14 04:33:38.131: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename prestop 01/14/23 04:33:38.132 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:33:38.146 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:33:38.149 +[BeforeEach] [sig-node] PreStop + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] PreStop + test/e2e/node/pre_stop.go:159 +[It] should call prestop when killing a pod [Conformance] + test/e2e/node/pre_stop.go:168 +STEP: Creating server pod server in namespace prestop-9451 01/14/23 04:33:38.151 +STEP: Waiting for pods to come up. 01/14/23 04:33:38.159 +Jan 14 04:33:38.159: INFO: Waiting up to 5m0s for pod "server" in namespace "prestop-9451" to be "running" +Jan 14 04:33:38.162: INFO: Pod "server": Phase="Pending", Reason="", readiness=false. Elapsed: 2.844877ms +Jan 14 04:33:40.167: INFO: Pod "server": Phase="Running", Reason="", readiness=true. Elapsed: 2.007309217s +Jan 14 04:33:40.167: INFO: Pod "server" satisfied condition "running" +STEP: Creating tester pod tester in namespace prestop-9451 01/14/23 04:33:40.17 +Jan 14 04:33:40.176: INFO: Waiting up to 5m0s for pod "tester" in namespace "prestop-9451" to be "running" +Jan 14 04:33:40.179: INFO: Pod "tester": Phase="Pending", Reason="", readiness=false. Elapsed: 3.180355ms +Jan 14 04:33:42.185: INFO: Pod "tester": Phase="Running", Reason="", readiness=true. Elapsed: 2.008475124s +Jan 14 04:33:42.185: INFO: Pod "tester" satisfied condition "running" +STEP: Deleting pre-stop pod 01/14/23 04:33:42.185 +Jan 14 04:33:47.199: INFO: Saw: { + "Hostname": "server", + "Sent": null, + "Received": { + "prestop": 1 + }, + "Errors": null, + "Log": [ + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." 
+ ], + "StillContactingPeers": true +} +STEP: Deleting the server pod 01/14/23 04:33:47.199 +[AfterEach] [sig-node] PreStop + test/e2e/framework/node/init/init.go:32 +Jan 14 04:33:47.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] PreStop + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] PreStop + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] PreStop + tear down framework | framework.go:193 +STEP: Destroying namespace "prestop-9451" for this suite. 01/14/23 04:33:47.217 +------------------------------ +• [SLOW TEST] [9.091 seconds] +[sig-node] PreStop +test/e2e/node/framework.go:23 + should call prestop when killing a pod [Conformance] + test/e2e/node/pre_stop.go:168 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] PreStop + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:33:38.131 + Jan 14 04:33:38.131: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename prestop 01/14/23 04:33:38.132 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:33:38.146 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:33:38.149 + [BeforeEach] [sig-node] PreStop + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] PreStop + test/e2e/node/pre_stop.go:159 + [It] should call prestop when killing a pod [Conformance] + test/e2e/node/pre_stop.go:168 + STEP: Creating server pod server in namespace prestop-9451 01/14/23 04:33:38.151 + STEP: Waiting for pods to come up. 01/14/23 04:33:38.159 + Jan 14 04:33:38.159: INFO: Waiting up to 5m0s for pod "server" in namespace "prestop-9451" to be "running" + Jan 14 04:33:38.162: INFO: Pod "server": Phase="Pending", Reason="", readiness=false. Elapsed: 2.844877ms + Jan 14 04:33:40.167: INFO: Pod "server": Phase="Running", Reason="", readiness=true. Elapsed: 2.007309217s + Jan 14 04:33:40.167: INFO: Pod "server" satisfied condition "running" + STEP: Creating tester pod tester in namespace prestop-9451 01/14/23 04:33:40.17 + Jan 14 04:33:40.176: INFO: Waiting up to 5m0s for pod "tester" in namespace "prestop-9451" to be "running" + Jan 14 04:33:40.179: INFO: Pod "tester": Phase="Pending", Reason="", readiness=false. Elapsed: 3.180355ms + Jan 14 04:33:42.185: INFO: Pod "tester": Phase="Running", Reason="", readiness=true. Elapsed: 2.008475124s + Jan 14 04:33:42.185: INFO: Pod "tester" satisfied condition "running" + STEP: Deleting pre-stop pod 01/14/23 04:33:42.185 + Jan 14 04:33:47.199: INFO: Saw: { + "Hostname": "server", + "Sent": null, + "Received": { + "prestop": 1 + }, + "Errors": null, + "Log": [ + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." + ], + "StillContactingPeers": true + } + STEP: Deleting the server pod 01/14/23 04:33:47.199 + [AfterEach] [sig-node] PreStop + test/e2e/framework/node/init/init.go:32 + Jan 14 04:33:47.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] PreStop + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] PreStop + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] PreStop + tear down framework | framework.go:193 + STEP: Destroying namespace "prestop-9451" for this suite. 
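+ [Annotation] The "Saw:" blob above is the status JSON reported by the server pod; the test passes when Received records exactly one prestop call. A minimal Go sketch of decoding and checking such a blob (field names mirror the log; the struct is an illustrative stand-in, not the suite's own type):
+
+ package main
+
+ import (
+ 	"encoding/json"
+ 	"fmt"
+ )
+
+ // probeStatus mirrors the JSON the server pod reports in the log above.
+ type probeStatus struct {
+ 	Hostname             string
+ 	Sent                 map[string]int
+ 	Received             map[string]int
+ 	Errors               []string
+ 	Log                  []string
+ 	StillContactingPeers bool
+ }
+
+ func main() {
+ 	raw := []byte(`{"Hostname":"server","Received":{"prestop":1}}`)
+ 	var s probeStatus
+ 	if err := json.Unmarshal(raw, &s); err != nil {
+ 		panic(err)
+ 	}
+ 	// The conformance check: the tester's preStop hook must have hit the
+ 	// server exactly once before the pod was torn down.
+ 	fmt.Println("prestop calls:", s.Received["prestop"])
+ }
+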
01/14/23 04:33:47.217 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:249 +[BeforeEach] [sig-storage] Downward API volume + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:33:47.222 +Jan 14 04:33:47.222: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename downward-api 01/14/23 04:33:47.223 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:33:47.236 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:33:47.239 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 +[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:249 +STEP: Creating a pod to test downward API volume plugin 01/14/23 04:33:47.241 +Jan 14 04:33:47.250: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7d84be27-3168-4623-b492-d475c0fb729b" in namespace "downward-api-5907" to be "Succeeded or Failed" +Jan 14 04:33:47.253: INFO: Pod "downwardapi-volume-7d84be27-3168-4623-b492-d475c0fb729b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.894154ms +Jan 14 04:33:49.257: INFO: Pod "downwardapi-volume-7d84be27-3168-4623-b492-d475c0fb729b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007198604s +Jan 14 04:33:51.258: INFO: Pod "downwardapi-volume-7d84be27-3168-4623-b492-d475c0fb729b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008407377s +STEP: Saw pod success 01/14/23 04:33:51.258 +Jan 14 04:33:51.258: INFO: Pod "downwardapi-volume-7d84be27-3168-4623-b492-d475c0fb729b" satisfied condition "Succeeded or Failed" +Jan 14 04:33:51.261: INFO: Trying to get logs from node 10.0.1.106 pod downwardapi-volume-7d84be27-3168-4623-b492-d475c0fb729b container client-container: +STEP: delete the pod 01/14/23 04:33:51.267 +Jan 14 04:33:51.281: INFO: Waiting for pod downwardapi-volume-7d84be27-3168-4623-b492-d475c0fb729b to disappear +Jan 14 04:33:51.284: INFO: Pod downwardapi-volume-7d84be27-3168-4623-b492-d475c0fb729b no longer exists +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 +Jan 14 04:33:51.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 +STEP: Destroying namespace "downward-api-5907" for this suite. 
01/14/23 04:33:51.288 +------------------------------ +• [4.071 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:249 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Downward API volume + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:33:47.222 + Jan 14 04:33:47.222: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename downward-api 01/14/23 04:33:47.223 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:33:47.236 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:33:47.239 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 + [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:249 + STEP: Creating a pod to test downward API volume plugin 01/14/23 04:33:47.241 + Jan 14 04:33:47.250: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7d84be27-3168-4623-b492-d475c0fb729b" in namespace "downward-api-5907" to be "Succeeded or Failed" + Jan 14 04:33:47.253: INFO: Pod "downwardapi-volume-7d84be27-3168-4623-b492-d475c0fb729b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.894154ms + Jan 14 04:33:49.257: INFO: Pod "downwardapi-volume-7d84be27-3168-4623-b492-d475c0fb729b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007198604s + Jan 14 04:33:51.258: INFO: Pod "downwardapi-volume-7d84be27-3168-4623-b492-d475c0fb729b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008407377s + STEP: Saw pod success 01/14/23 04:33:51.258 + Jan 14 04:33:51.258: INFO: Pod "downwardapi-volume-7d84be27-3168-4623-b492-d475c0fb729b" satisfied condition "Succeeded or Failed" + Jan 14 04:33:51.261: INFO: Trying to get logs from node 10.0.1.106 pod downwardapi-volume-7d84be27-3168-4623-b492-d475c0fb729b container client-container: + STEP: delete the pod 01/14/23 04:33:51.267 + Jan 14 04:33:51.281: INFO: Waiting for pod downwardapi-volume-7d84be27-3168-4623-b492-d475c0fb729b to disappear + Jan 14 04:33:51.284: INFO: Pod downwardapi-volume-7d84be27-3168-4623-b492-d475c0fb729b no longer exists + [AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 + Jan 14 04:33:51.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 + STEP: Destroying namespace "downward-api-5907" for this suite. 
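+ [Annotation] When a container sets no CPU limit, a downward-API resourceFieldRef for limits.cpu resolves to the node's allocatable CPU, which is what this test verifies. A minimal manifest-building sketch (pod name, image tag, and mounttest args are illustrative assumptions, not the suite's exact spec):
+
+ package main
+
+ import (
+ 	"encoding/json"
+ 	"os"
+
+ 	corev1 "k8s.io/api/core/v1"
+ 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ )
+
+ func main() {
+ 	pod := &corev1.Pod{
+ 		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
+ 		Spec: corev1.PodSpec{
+ 			RestartPolicy: corev1.RestartPolicyNever,
+ 			Containers: []corev1.Container{{
+ 				Name:  "client-container",
+ 				Image: "registry.k8s.io/e2e-test-images/agnhost:2.43", // tag is an assumption
+ 				Args:  []string{"mounttest", "--file_content=/etc/podinfo/cpu_limit"},
+ 				// Deliberately no resources.limits.cpu on this container.
+ 				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
+ 			}},
+ 			Volumes: []corev1.Volume{{
+ 				Name: "podinfo",
+ 				VolumeSource: corev1.VolumeSource{
+ 					DownwardAPI: &corev1.DownwardAPIVolumeSource{
+ 						Items: []corev1.DownwardAPIVolumeFile{{
+ 							Path: "cpu_limit",
+ 							// With no limit set, this falls back to node allocatable CPU.
+ 							ResourceFieldRef: &corev1.ResourceFieldSelector{
+ 								ContainerName: "client-container",
+ 								Resource:      "limits.cpu",
+ 							},
+ 						}},
+ 					},
+ 				},
+ 			}},
+ 		},
+ 	}
+ 	json.NewEncoder(os.Stdout).Encode(pod) // print the manifest as JSON
+ }
+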
01/14/23 04:33:51.288 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:157 +[BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:33:51.294 +Jan 14 04:33:51.294: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename emptydir 01/14/23 04:33:51.295 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:33:51.311 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:33:51.313 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 +[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:157 +STEP: Creating a pod to test emptydir volume type on node default medium 01/14/23 04:33:51.315 +Jan 14 04:33:51.326: INFO: Waiting up to 5m0s for pod "pod-19299744-3d6a-44fc-99cd-ddc55522d94b" in namespace "emptydir-6811" to be "Succeeded or Failed" +Jan 14 04:33:51.330: INFO: Pod "pod-19299744-3d6a-44fc-99cd-ddc55522d94b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.802818ms +Jan 14 04:33:53.335: INFO: Pod "pod-19299744-3d6a-44fc-99cd-ddc55522d94b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009622597s +Jan 14 04:33:55.335: INFO: Pod "pod-19299744-3d6a-44fc-99cd-ddc55522d94b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009147869s +STEP: Saw pod success 01/14/23 04:33:55.335 +Jan 14 04:33:55.335: INFO: Pod "pod-19299744-3d6a-44fc-99cd-ddc55522d94b" satisfied condition "Succeeded or Failed" +Jan 14 04:33:55.338: INFO: Trying to get logs from node 10.0.1.106 pod pod-19299744-3d6a-44fc-99cd-ddc55522d94b container test-container: +STEP: delete the pod 01/14/23 04:33:55.344 +Jan 14 04:33:55.355: INFO: Waiting for pod pod-19299744-3d6a-44fc-99cd-ddc55522d94b to disappear +Jan 14 04:33:55.358: INFO: Pod pod-19299744-3d6a-44fc-99cd-ddc55522d94b no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 +Jan 14 04:33:55.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 +STEP: Destroying namespace "emptydir-6811" for this suite. 
01/14/23 04:33:55.362 +------------------------------ +• [4.076 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:157 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:33:51.294 + Jan 14 04:33:51.294: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename emptydir 01/14/23 04:33:51.295 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:33:51.311 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:33:51.313 + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 + [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:157 + STEP: Creating a pod to test emptydir volume type on node default medium 01/14/23 04:33:51.315 + Jan 14 04:33:51.326: INFO: Waiting up to 5m0s for pod "pod-19299744-3d6a-44fc-99cd-ddc55522d94b" in namespace "emptydir-6811" to be "Succeeded or Failed" + Jan 14 04:33:51.330: INFO: Pod "pod-19299744-3d6a-44fc-99cd-ddc55522d94b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.802818ms + Jan 14 04:33:53.335: INFO: Pod "pod-19299744-3d6a-44fc-99cd-ddc55522d94b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009622597s + Jan 14 04:33:55.335: INFO: Pod "pod-19299744-3d6a-44fc-99cd-ddc55522d94b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009147869s + STEP: Saw pod success 01/14/23 04:33:55.335 + Jan 14 04:33:55.335: INFO: Pod "pod-19299744-3d6a-44fc-99cd-ddc55522d94b" satisfied condition "Succeeded or Failed" + Jan 14 04:33:55.338: INFO: Trying to get logs from node 10.0.1.106 pod pod-19299744-3d6a-44fc-99cd-ddc55522d94b container test-container: + STEP: delete the pod 01/14/23 04:33:55.344 + Jan 14 04:33:55.355: INFO: Waiting for pod pod-19299744-3d6a-44fc-99cd-ddc55522d94b to disappear + Jan 14 04:33:55.358: INFO: Pod pod-19299744-3d6a-44fc-99cd-ddc55522d94b no longer exists + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 + Jan 14 04:33:55.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 + STEP: Destroying namespace "emptydir-6811" for this suite. 
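+ [Annotation] "Default medium" here means the EmptyDirVolumeSource leaves Medium unset, so the volume is backed by node storage rather than tmpfs; the test then asserts the mount's fs type and file mode. A sketch of such a pod (image and mounttest args are assumptions):
+
+ package main
+
+ import (
+ 	"encoding/json"
+ 	"os"
+
+ 	corev1 "k8s.io/api/core/v1"
+ 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ )
+
+ func main() {
+ 	pod := &corev1.Pod{
+ 		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-demo"},
+ 		Spec: corev1.PodSpec{
+ 			RestartPolicy: corev1.RestartPolicyNever,
+ 			Containers: []corev1.Container{{
+ 				Name:         "test-container",
+ 				Image:        "registry.k8s.io/e2e-test-images/agnhost:2.43",
+ 				Args:         []string{"mounttest", "--fs_type=/test-volume", "--file_perm=/test-volume"},
+ 				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
+ 			}},
+ 			Volumes: []corev1.Volume{{
+ 				Name: "test-volume",
+ 				// Medium left empty => "default medium" (node disk, not tmpfs).
+ 				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
+ 			}},
+ 		},
+ 	}
+ 	json.NewEncoder(os.Stdout).Encode(pod)
+ }
+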
01/14/23 04:33:55.362 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should adopt matching pods on creation [Conformance] + test/e2e/apps/rc.go:92 +[BeforeEach] [sig-apps] ReplicationController + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:33:55.371 +Jan 14 04:33:55.371: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename replication-controller 01/14/23 04:33:55.372 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:33:55.385 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:33:55.388 +[BeforeEach] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:57 +[It] should adopt matching pods on creation [Conformance] + test/e2e/apps/rc.go:92 +STEP: Given a Pod with a 'name' label pod-adoption is created 01/14/23 04:33:55.39 +Jan 14 04:33:55.399: INFO: Waiting up to 5m0s for pod "pod-adoption" in namespace "replication-controller-6815" to be "running and ready" +Jan 14 04:33:55.402: INFO: Pod "pod-adoption": Phase="Pending", Reason="", readiness=false. Elapsed: 3.012483ms +Jan 14 04:33:55.402: INFO: The phase of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) +Jan 14 04:33:57.407: INFO: Pod "pod-adoption": Phase="Running", Reason="", readiness=true. Elapsed: 2.007764357s +Jan 14 04:33:57.407: INFO: The phase of Pod pod-adoption is Running (Ready = true) +Jan 14 04:33:57.407: INFO: Pod "pod-adoption" satisfied condition "running and ready" +STEP: When a replication controller with a matching selector is created 01/14/23 04:33:57.41 +STEP: Then the orphan pod is adopted 01/14/23 04:33:57.415 +[AfterEach] [sig-apps] ReplicationController + test/e2e/framework/node/init/init.go:32 +Jan 14 04:33:58.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] ReplicationController + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] ReplicationController + tear down framework | framework.go:193 +STEP: Destroying namespace "replication-controller-6815" for this suite. 
01/14/23 04:33:58.428 +------------------------------ +• [3.066 seconds] +[sig-apps] ReplicationController +test/e2e/apps/framework.go:23 + should adopt matching pods on creation [Conformance] + test/e2e/apps/rc.go:92 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] ReplicationController + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:33:55.371 + Jan 14 04:33:55.371: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename replication-controller 01/14/23 04:33:55.372 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:33:55.385 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:33:55.388 + [BeforeEach] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:57 + [It] should adopt matching pods on creation [Conformance] + test/e2e/apps/rc.go:92 + STEP: Given a Pod with a 'name' label pod-adoption is created 01/14/23 04:33:55.39 + Jan 14 04:33:55.399: INFO: Waiting up to 5m0s for pod "pod-adoption" in namespace "replication-controller-6815" to be "running and ready" + Jan 14 04:33:55.402: INFO: Pod "pod-adoption": Phase="Pending", Reason="", readiness=false. Elapsed: 3.012483ms + Jan 14 04:33:55.402: INFO: The phase of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) + Jan 14 04:33:57.407: INFO: Pod "pod-adoption": Phase="Running", Reason="", readiness=true. Elapsed: 2.007764357s + Jan 14 04:33:57.407: INFO: The phase of Pod pod-adoption is Running (Ready = true) + Jan 14 04:33:57.407: INFO: Pod "pod-adoption" satisfied condition "running and ready" + STEP: When a replication controller with a matching selector is created 01/14/23 04:33:57.41 + STEP: Then the orphan pod is adopted 01/14/23 04:33:57.415 + [AfterEach] [sig-apps] ReplicationController + test/e2e/framework/node/init/init.go:32 + Jan 14 04:33:58.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] ReplicationController + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] ReplicationController + tear down framework | framework.go:193 + STEP: Destroying namespace "replication-controller-6815" for this suite. 
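+ [Annotation] Adoption works because the ReplicationController's selector matches the pre-existing pod's name label, so the controller takes ownership of the orphan instead of creating a replacement. A sketch of such a controller (container image is an assumption):
+
+ package main
+
+ import (
+ 	"encoding/json"
+ 	"os"
+
+ 	corev1 "k8s.io/api/core/v1"
+ 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ )
+
+ func main() {
+ 	replicas := int32(1)
+ 	rc := &corev1.ReplicationController{
+ 		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
+ 		Spec: corev1.ReplicationControllerSpec{
+ 			Replicas: &replicas,
+ 			// Matches the label on the orphan pod created first, so the RC
+ 			// adopts it (becomes its controller owner) rather than scaling up.
+ 			Selector: map[string]string{"name": "pod-adoption"},
+ 			Template: &corev1.PodTemplateSpec{
+ 				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "pod-adoption"}},
+ 				Spec: corev1.PodSpec{
+ 					Containers: []corev1.Container{{Name: "pod-adoption", Image: "registry.k8s.io/pause:3.8"}},
+ 				},
+ 			},
+ 		},
+ 	}
+ 	json.NewEncoder(os.Stdout).Encode(rc)
+ }
+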
01/14/23 04:33:58.428 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for pods for Hostname [Conformance] + test/e2e/network/dns.go:248 +[BeforeEach] [sig-network] DNS + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:33:58.438 +Jan 14 04:33:58.438: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename dns 01/14/23 04:33:58.439 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:33:58.453 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:33:58.456 +[BeforeEach] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:31 +[It] should provide DNS for pods for Hostname [Conformance] + test/e2e/network/dns.go:248 +STEP: Creating a test headless service 01/14/23 04:33:58.458 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5709.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-5709.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;sleep 1; done + 01/14/23 04:33:58.463 +STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5709.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-5709.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;sleep 1; done + 01/14/23 04:33:58.463 +STEP: creating a pod to probe DNS 01/14/23 04:33:58.463 +STEP: submitting the pod to kubernetes 01/14/23 04:33:58.463 +Jan 14 04:33:58.474: INFO: Waiting up to 15m0s for pod "dns-test-541d286e-95d0-4320-a6d3-f3eebb7f2366" in namespace "dns-5709" to be "running" +Jan 14 04:33:58.477: INFO: Pod "dns-test-541d286e-95d0-4320-a6d3-f3eebb7f2366": Phase="Pending", Reason="", readiness=false. Elapsed: 3.06616ms +Jan 14 04:34:00.481: INFO: Pod "dns-test-541d286e-95d0-4320-a6d3-f3eebb7f2366": Phase="Running", Reason="", readiness=true. Elapsed: 2.006861525s +Jan 14 04:34:00.481: INFO: Pod "dns-test-541d286e-95d0-4320-a6d3-f3eebb7f2366" satisfied condition "running" +STEP: retrieving the pod 01/14/23 04:34:00.481 +STEP: looking for the results for each expected name from probers 01/14/23 04:34:00.521 +Jan 14 04:34:00.536: INFO: DNS probes using dns-5709/dns-test-541d286e-95d0-4320-a6d3-f3eebb7f2366 succeeded + +STEP: deleting the pod 01/14/23 04:34:00.536 +STEP: deleting the test headless service 01/14/23 04:34:00.654 +[AfterEach] [sig-network] DNS + test/e2e/framework/node/init/init.go:32 +Jan 14 04:34:00.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] DNS + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] DNS + tear down framework | framework.go:193 +STEP: Destroying namespace "dns-5709" for this suite. 
01/14/23 04:34:00.693 +------------------------------ +• [2.262 seconds] +[sig-network] DNS +test/e2e/network/common/framework.go:23 + should provide DNS for pods for Hostname [Conformance] + test/e2e/network/dns.go:248 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] DNS + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:33:58.438 + Jan 14 04:33:58.438: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename dns 01/14/23 04:33:58.439 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:33:58.453 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:33:58.456 + [BeforeEach] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:31 + [It] should provide DNS for pods for Hostname [Conformance] + test/e2e/network/dns.go:248 + STEP: Creating a test headless service 01/14/23 04:33:58.458 + STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5709.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-5709.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;sleep 1; done + 01/14/23 04:33:58.463 + STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5709.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-5709.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;sleep 1; done + 01/14/23 04:33:58.463 + STEP: creating a pod to probe DNS 01/14/23 04:33:58.463 + STEP: submitting the pod to kubernetes 01/14/23 04:33:58.463 + Jan 14 04:33:58.474: INFO: Waiting up to 15m0s for pod "dns-test-541d286e-95d0-4320-a6d3-f3eebb7f2366" in namespace "dns-5709" to be "running" + Jan 14 04:33:58.477: INFO: Pod "dns-test-541d286e-95d0-4320-a6d3-f3eebb7f2366": Phase="Pending", Reason="", readiness=false. Elapsed: 3.06616ms + Jan 14 04:34:00.481: INFO: Pod "dns-test-541d286e-95d0-4320-a6d3-f3eebb7f2366": Phase="Running", Reason="", readiness=true. Elapsed: 2.006861525s + Jan 14 04:34:00.481: INFO: Pod "dns-test-541d286e-95d0-4320-a6d3-f3eebb7f2366" satisfied condition "running" + STEP: retrieving the pod 01/14/23 04:34:00.481 + STEP: looking for the results for each expected name from probers 01/14/23 04:34:00.521 + Jan 14 04:34:00.536: INFO: DNS probes using dns-5709/dns-test-541d286e-95d0-4320-a6d3-f3eebb7f2366 succeeded + + STEP: deleting the pod 01/14/23 04:34:00.536 + STEP: deleting the test headless service 01/14/23 04:34:00.654 + [AfterEach] [sig-network] DNS + test/e2e/framework/node/init/init.go:32 + Jan 14 04:34:00.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] DNS + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] DNS + tear down framework | framework.go:193 + STEP: Destroying namespace "dns-5709" for this suite. 
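+ [Annotation] The wheezy/jessie probe loops above simply run getent against two names: the pod's FQDN under the headless service and its bare hostname. An equivalent in-cluster check in Go, stdlib only (the names are taken from the log; run it from a pod in the same namespace):
+
+ package main
+
+ import (
+ 	"fmt"
+ 	"net"
+ 	"os"
+ )
+
+ func main() {
+ 	// Probe the two names the test checks from inside the cluster.
+ 	for _, name := range []string{
+ 		"dns-querier-2.dns-test-service-2.dns-5709.svc.cluster.local",
+ 		"dns-querier-2",
+ 	} {
+ 		addrs, err := net.LookupHost(name)
+ 		if err != nil {
+ 			fmt.Fprintf(os.Stderr, "lookup %s failed: %v\n", name, err)
+ 			continue
+ 		}
+ 		fmt.Printf("%s -> %v\n", name, addrs) // same signal as the getent loop writing OK files
+ 	}
+ }
+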
01/14/23 04:34:00.693 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:89 +[BeforeEach] [sig-storage] Secrets + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:34:00.702 +Jan 14 04:34:00.702: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename secrets 01/14/23 04:34:00.702 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:34:00.736 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:34:00.738 +[BeforeEach] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:89 +STEP: Creating secret with name secret-test-map-9e0b4b92-da76-40e4-8a64-74cbc04ad475 01/14/23 04:34:00.752 +STEP: Creating a pod to test consume secrets 01/14/23 04:34:00.757 +Jan 14 04:34:00.831: INFO: Waiting up to 5m0s for pod "pod-secrets-12bfe9d8-9c46-4744-9743-38251b983978" in namespace "secrets-4555" to be "Succeeded or Failed" +Jan 14 04:34:00.873: INFO: Pod "pod-secrets-12bfe9d8-9c46-4744-9743-38251b983978": Phase="Pending", Reason="", readiness=false. Elapsed: 42.376556ms +Jan 14 04:34:02.879: INFO: Pod "pod-secrets-12bfe9d8-9c46-4744-9743-38251b983978": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047884227s +Jan 14 04:34:04.878: INFO: Pod "pod-secrets-12bfe9d8-9c46-4744-9743-38251b983978": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047345779s +STEP: Saw pod success 01/14/23 04:34:04.878 +Jan 14 04:34:04.879: INFO: Pod "pod-secrets-12bfe9d8-9c46-4744-9743-38251b983978" satisfied condition "Succeeded or Failed" +Jan 14 04:34:04.882: INFO: Trying to get logs from node 10.0.1.106 pod pod-secrets-12bfe9d8-9c46-4744-9743-38251b983978 container secret-volume-test: +STEP: delete the pod 01/14/23 04:34:04.887 +Jan 14 04:34:04.902: INFO: Waiting for pod pod-secrets-12bfe9d8-9c46-4744-9743-38251b983978 to disappear +Jan 14 04:34:04.905: INFO: Pod pod-secrets-12bfe9d8-9c46-4744-9743-38251b983978 no longer exists +[AfterEach] [sig-storage] Secrets + test/e2e/framework/node/init/init.go:32 +Jan 14 04:34:04.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Secrets + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Secrets + tear down framework | framework.go:193 +STEP: Destroying namespace "secrets-4555" for this suite. 
01/14/23 04:34:04.91 +------------------------------ +• [4.213 seconds] +[sig-storage] Secrets +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:89 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Secrets + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:34:00.702 + Jan 14 04:34:00.702: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename secrets 01/14/23 04:34:00.702 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:34:00.736 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:34:00.738 + [BeforeEach] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:89 + STEP: Creating secret with name secret-test-map-9e0b4b92-da76-40e4-8a64-74cbc04ad475 01/14/23 04:34:00.752 + STEP: Creating a pod to test consume secrets 01/14/23 04:34:00.757 + Jan 14 04:34:00.831: INFO: Waiting up to 5m0s for pod "pod-secrets-12bfe9d8-9c46-4744-9743-38251b983978" in namespace "secrets-4555" to be "Succeeded or Failed" + Jan 14 04:34:00.873: INFO: Pod "pod-secrets-12bfe9d8-9c46-4744-9743-38251b983978": Phase="Pending", Reason="", readiness=false. Elapsed: 42.376556ms + Jan 14 04:34:02.879: INFO: Pod "pod-secrets-12bfe9d8-9c46-4744-9743-38251b983978": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047884227s + Jan 14 04:34:04.878: INFO: Pod "pod-secrets-12bfe9d8-9c46-4744-9743-38251b983978": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047345779s + STEP: Saw pod success 01/14/23 04:34:04.878 + Jan 14 04:34:04.879: INFO: Pod "pod-secrets-12bfe9d8-9c46-4744-9743-38251b983978" satisfied condition "Succeeded or Failed" + Jan 14 04:34:04.882: INFO: Trying to get logs from node 10.0.1.106 pod pod-secrets-12bfe9d8-9c46-4744-9743-38251b983978 container secret-volume-test: + STEP: delete the pod 01/14/23 04:34:04.887 + Jan 14 04:34:04.902: INFO: Waiting for pod pod-secrets-12bfe9d8-9c46-4744-9743-38251b983978 to disappear + Jan 14 04:34:04.905: INFO: Pod pod-secrets-12bfe9d8-9c46-4744-9743-38251b983978 no longer exists + [AfterEach] [sig-storage] Secrets + test/e2e/framework/node/init/init.go:32 + Jan 14 04:34:04.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Secrets + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Secrets + tear down framework | framework.go:193 + STEP: Destroying namespace "secrets-4555" for this suite. 
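+ [Annotation] "Mappings and Item Mode" means the secret's keys are remapped to custom paths and given an explicit per-file mode, which the test reads back from the mount. A volume sketch (the secret key, paths, and mounttest args are illustrative assumptions):
+
+ package main
+
+ import (
+ 	"encoding/json"
+ 	"os"
+
+ 	corev1 "k8s.io/api/core/v1"
+ 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ )
+
+ func main() {
+ 	mode := int32(0400) // per-item file mode the test verifies
+ 	pod := &corev1.Pod{
+ 		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
+ 		Spec: corev1.PodSpec{
+ 			RestartPolicy: corev1.RestartPolicyNever,
+ 			Containers: []corev1.Container{{
+ 				Name:         "secret-volume-test",
+ 				Image:        "registry.k8s.io/e2e-test-images/agnhost:2.43",
+ 				Args:         []string{"mounttest", "--file_mode=/etc/secret-volume/new-path-data-1"},
+ 				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
+ 			}},
+ 			Volumes: []corev1.Volume{{
+ 				Name: "secret-volume",
+ 				VolumeSource: corev1.VolumeSource{
+ 					Secret: &corev1.SecretVolumeSource{
+ 						SecretName: "secret-test-map-9e0b4b92-da76-40e4-8a64-74cbc04ad475",
+ 						Items: []corev1.KeyToPath{{
+ 							Key:  "data-1",          // key inside the secret (assumption)
+ 							Path: "new-path-data-1", // remapped file name in the mount
+ 							Mode: &mode,
+ 						}},
+ 					},
+ 				},
+ 			}},
+ 		},
+ 	}
+ 	json.NewEncoder(os.Stdout).Encode(pod)
+ }
+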
01/14/23 04:34:04.91 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should include webhook resources in discovery documents [Conformance] + test/e2e/apimachinery/webhook.go:117 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:34:04.915 +Jan 14 04:34:04.915: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename webhook 01/14/23 04:34:04.916 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:34:04.929 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:34:04.931 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 01/14/23 04:34:04.946 +STEP: Create role binding to let webhook read extension-apiserver-authentication 01/14/23 04:34:05.581 +STEP: Deploying the webhook pod 01/14/23 04:34:05.588 +STEP: Wait for the deployment to be ready 01/14/23 04:34:05.599 +Jan 14 04:34:05.607: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 01/14/23 04:34:07.619 +STEP: Verifying the service has paired with the endpoint 01/14/23 04:34:07.628 +Jan 14 04:34:08.629: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should include webhook resources in discovery documents [Conformance] + test/e2e/apimachinery/webhook.go:117 +STEP: fetching the /apis discovery document 01/14/23 04:34:08.633 +STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document 01/14/23 04:34:08.635 +STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document 01/14/23 04:34:08.635 +STEP: fetching the /apis/admissionregistration.k8s.io discovery document 01/14/23 04:34:08.635 +STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document 01/14/23 04:34:08.636 +STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document 01/14/23 04:34:08.636 +STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document 01/14/23 04:34:08.637 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Jan 14 04:34:08.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "webhook-3397" for this suite. 01/14/23 04:34:08.681 +STEP: Destroying namespace "webhook-3397-markers" for this suite. 
01/14/23 04:34:08.689 +------------------------------ +• [3.779 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should include webhook resources in discovery documents [Conformance] + test/e2e/apimachinery/webhook.go:117 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:34:04.915 + Jan 14 04:34:04.915: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename webhook 01/14/23 04:34:04.916 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:34:04.929 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:34:04.931 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 01/14/23 04:34:04.946 + STEP: Create role binding to let webhook read extension-apiserver-authentication 01/14/23 04:34:05.581 + STEP: Deploying the webhook pod 01/14/23 04:34:05.588 + STEP: Wait for the deployment to be ready 01/14/23 04:34:05.599 + Jan 14 04:34:05.607: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 01/14/23 04:34:07.619 + STEP: Verifying the service has paired with the endpoint 01/14/23 04:34:07.628 + Jan 14 04:34:08.629: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should include webhook resources in discovery documents [Conformance] + test/e2e/apimachinery/webhook.go:117 + STEP: fetching the /apis discovery document 01/14/23 04:34:08.633 + STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document 01/14/23 04:34:08.635 + STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document 01/14/23 04:34:08.635 + STEP: fetching the /apis/admissionregistration.k8s.io discovery document 01/14/23 04:34:08.635 + STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document 01/14/23 04:34:08.636 + STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document 01/14/23 04:34:08.636 + STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document 01/14/23 04:34:08.637 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Jan 14 04:34:08.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "webhook-3397" for this suite. 01/14/23 04:34:08.681 + STEP: Destroying namespace "webhook-3397-markers" for this suite. 
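+ [Annotation] The discovery walk above can be reproduced against any cluster: fetch the admissionregistration.k8s.io/v1 resource list and look for the two webhook-configuration kinds. A sketch using client-go's discovery client (reading the kubeconfig path from KUBECONFIG is an assumption):
+
+ package main
+
+ import (
+ 	"fmt"
+ 	"os"
+
+ 	"k8s.io/client-go/discovery"
+ 	"k8s.io/client-go/tools/clientcmd"
+ )
+
+ func main() {
+ 	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
+ 	if err != nil {
+ 		panic(err)
+ 	}
+ 	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
+ 	if err != nil {
+ 		panic(err)
+ 	}
+ 	// Same group/version the test fetches from the /apis discovery document.
+ 	rl, err := dc.ServerResourcesForGroupVersion("admissionregistration.k8s.io/v1")
+ 	if err != nil {
+ 		panic(err)
+ 	}
+ 	for _, r := range rl.APIResources {
+ 		// Expect mutatingwebhookconfigurations and validatingwebhookconfigurations.
+ 		fmt.Println(r.Name)
+ 	}
+ }
+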
01/14/23 04:34:08.689 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute prestop exec hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:151 +[BeforeEach] [sig-node] Container Lifecycle Hook + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:34:08.696 +Jan 14 04:34:08.696: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename container-lifecycle-hook 01/14/23 04:34:08.697 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:34:08.711 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:34:08.714 +[BeforeEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:77 +STEP: create the container to handle the HTTPGet hook request. 01/14/23 04:34:08.72 +Jan 14 04:34:08.731: INFO: Waiting up to 5m0s for pod "pod-handle-http-request" in namespace "container-lifecycle-hook-4577" to be "running and ready" +Jan 14 04:34:08.736: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 4.324399ms +Jan 14 04:34:08.736: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Jan 14 04:34:10.740: INFO: Pod "pod-handle-http-request": Phase="Running", Reason="", readiness=true. Elapsed: 2.009065257s +Jan 14 04:34:10.740: INFO: The phase of Pod pod-handle-http-request is Running (Ready = true) +Jan 14 04:34:10.740: INFO: Pod "pod-handle-http-request" satisfied condition "running and ready" +[It] should execute prestop exec hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:151 +STEP: create the pod with lifecycle hook 01/14/23 04:34:10.743 +Jan 14 04:34:10.751: INFO: Waiting up to 5m0s for pod "pod-with-prestop-exec-hook" in namespace "container-lifecycle-hook-4577" to be "running and ready" +Jan 14 04:34:10.754: INFO: Pod "pod-with-prestop-exec-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 3.130252ms +Jan 14 04:34:10.754: INFO: The phase of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) +Jan 14 04:34:12.759: INFO: Pod "pod-with-prestop-exec-hook": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.007689129s +Jan 14 04:34:12.759: INFO: The phase of Pod pod-with-prestop-exec-hook is Running (Ready = true) +Jan 14 04:34:12.759: INFO: Pod "pod-with-prestop-exec-hook" satisfied condition "running and ready" +STEP: delete the pod with lifecycle hook 01/14/23 04:34:12.761 +Jan 14 04:34:12.769: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jan 14 04:34:12.772: INFO: Pod pod-with-prestop-exec-hook still exists +Jan 14 04:34:14.772: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jan 14 04:34:14.777: INFO: Pod pod-with-prestop-exec-hook still exists +Jan 14 04:34:16.773: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear +Jan 14 04:34:16.777: INFO: Pod pod-with-prestop-exec-hook no longer exists +STEP: check prestop hook 01/14/23 04:34:16.777 +[AfterEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/node/init/init.go:32 +Jan 14 04:34:16.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + tear down framework | framework.go:193 +STEP: Destroying namespace "container-lifecycle-hook-4577" for this suite. 01/14/23 04:34:16.788 +------------------------------ +• [SLOW TEST] [8.097 seconds] +[sig-node] Container Lifecycle Hook +test/e2e/common/node/framework.go:23 + when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:46 + should execute prestop exec hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:151 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Container Lifecycle Hook + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:34:08.696 + Jan 14 04:34:08.696: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename container-lifecycle-hook 01/14/23 04:34:08.697 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:34:08.711 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:34:08.714 + [BeforeEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:77 + STEP: create the container to handle the HTTPGet hook request. 01/14/23 04:34:08.72 + Jan 14 04:34:08.731: INFO: Waiting up to 5m0s for pod "pod-handle-http-request" in namespace "container-lifecycle-hook-4577" to be "running and ready" + Jan 14 04:34:08.736: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 4.324399ms + Jan 14 04:34:08.736: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) + Jan 14 04:34:10.740: INFO: Pod "pod-handle-http-request": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.009065257s + Jan 14 04:34:10.740: INFO: The phase of Pod pod-handle-http-request is Running (Ready = true) + Jan 14 04:34:10.740: INFO: Pod "pod-handle-http-request" satisfied condition "running and ready" + [It] should execute prestop exec hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:151 + STEP: create the pod with lifecycle hook 01/14/23 04:34:10.743 + Jan 14 04:34:10.751: INFO: Waiting up to 5m0s for pod "pod-with-prestop-exec-hook" in namespace "container-lifecycle-hook-4577" to be "running and ready" + Jan 14 04:34:10.754: INFO: Pod "pod-with-prestop-exec-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 3.130252ms + Jan 14 04:34:10.754: INFO: The phase of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) + Jan 14 04:34:12.759: INFO: Pod "pod-with-prestop-exec-hook": Phase="Running", Reason="", readiness=true. Elapsed: 2.007689129s + Jan 14 04:34:12.759: INFO: The phase of Pod pod-with-prestop-exec-hook is Running (Ready = true) + Jan 14 04:34:12.759: INFO: Pod "pod-with-prestop-exec-hook" satisfied condition "running and ready" + STEP: delete the pod with lifecycle hook 01/14/23 04:34:12.761 + Jan 14 04:34:12.769: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear + Jan 14 04:34:12.772: INFO: Pod pod-with-prestop-exec-hook still exists + Jan 14 04:34:14.772: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear + Jan 14 04:34:14.777: INFO: Pod pod-with-prestop-exec-hook still exists + Jan 14 04:34:16.773: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear + Jan 14 04:34:16.777: INFO: Pod pod-with-prestop-exec-hook no longer exists + STEP: check prestop hook 01/14/23 04:34:16.777 + [AfterEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/node/init/init.go:32 + Jan 14 04:34:16.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + tear down framework | framework.go:193 + STEP: Destroying namespace "container-lifecycle-hook-4577" for this suite. 
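+ [Annotation] The deleted pod pairs a preStop exec hook with the pod-handle-http-request pod shown above, which records the call; "check prestop hook" then inspects that handler. A sketch of the hook wiring (the handler address placeholder, command, and image are assumptions, not the suite's exact values):
+
+ package main
+
+ import (
+ 	"encoding/json"
+ 	"os"
+
+ 	corev1 "k8s.io/api/core/v1"
+ 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ )
+
+ func main() {
+ 	pod := &corev1.Pod{
+ 		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
+ 		Spec: corev1.PodSpec{
+ 			Containers: []corev1.Container{{
+ 				Name:  "pod-with-prestop-exec-hook",
+ 				Image: "registry.k8s.io/e2e-test-images/agnhost:2.43",
+ 				Lifecycle: &corev1.Lifecycle{
+ 					PreStop: &corev1.LifecycleHandler{
+ 						// Runs during deletion, before the container is stopped; the
+ 						// handler pod's record is what the test checks afterwards.
+ 						Exec: &corev1.ExecAction{
+ 							// HANDLER_POD_IP is a hypothetical placeholder.
+ 							Command: []string{"sh", "-c", "curl http://HANDLER_POD_IP:8080/echo?msg=prestop"},
+ 						},
+ 					},
+ 				},
+ 			}},
+ 		},
+ 	}
+ 	json.NewEncoder(os.Stdout).Encode(pod)
+ }
+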
01/14/23 04:34:16.788 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:84 +[BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:34:16.794 +Jan 14 04:34:16.794: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename projected 01/14/23 04:34:16.795 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:34:16.808 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:34:16.81 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 +[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:84 +STEP: Creating a pod to test downward API volume plugin 01/14/23 04:34:16.813 +Jan 14 04:34:16.822: INFO: Waiting up to 5m0s for pod "downwardapi-volume-66455353-3deb-484c-a92f-5bca4993e1f3" in namespace "projected-4722" to be "Succeeded or Failed" +Jan 14 04:34:16.825: INFO: Pod "downwardapi-volume-66455353-3deb-484c-a92f-5bca4993e1f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.875077ms +Jan 14 04:34:18.830: INFO: Pod "downwardapi-volume-66455353-3deb-484c-a92f-5bca4993e1f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007513712s +Jan 14 04:34:20.830: INFO: Pod "downwardapi-volume-66455353-3deb-484c-a92f-5bca4993e1f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007535538s +STEP: Saw pod success 01/14/23 04:34:20.83 +Jan 14 04:34:20.830: INFO: Pod "downwardapi-volume-66455353-3deb-484c-a92f-5bca4993e1f3" satisfied condition "Succeeded or Failed" +Jan 14 04:34:20.833: INFO: Trying to get logs from node 10.0.1.106 pod downwardapi-volume-66455353-3deb-484c-a92f-5bca4993e1f3 container client-container: +STEP: delete the pod 01/14/23 04:34:20.838 +Jan 14 04:34:20.851: INFO: Waiting for pod downwardapi-volume-66455353-3deb-484c-a92f-5bca4993e1f3 to disappear +Jan 14 04:34:20.854: INFO: Pod downwardapi-volume-66455353-3deb-484c-a92f-5bca4993e1f3 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/node/init/init.go:32 +Jan 14 04:34:20.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-4722" for this suite. 
01/14/23 04:34:20.86 +------------------------------ +• [4.071 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:84 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:34:16.794 + Jan 14 04:34:16.794: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename projected 01/14/23 04:34:16.795 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:34:16.808 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:34:16.81 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 + [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:84 + STEP: Creating a pod to test downward API volume plugin 01/14/23 04:34:16.813 + Jan 14 04:34:16.822: INFO: Waiting up to 5m0s for pod "downwardapi-volume-66455353-3deb-484c-a92f-5bca4993e1f3" in namespace "projected-4722" to be "Succeeded or Failed" + Jan 14 04:34:16.825: INFO: Pod "downwardapi-volume-66455353-3deb-484c-a92f-5bca4993e1f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.875077ms + Jan 14 04:34:18.830: INFO: Pod "downwardapi-volume-66455353-3deb-484c-a92f-5bca4993e1f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007513712s + Jan 14 04:34:20.830: INFO: Pod "downwardapi-volume-66455353-3deb-484c-a92f-5bca4993e1f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007535538s + STEP: Saw pod success 01/14/23 04:34:20.83 + Jan 14 04:34:20.830: INFO: Pod "downwardapi-volume-66455353-3deb-484c-a92f-5bca4993e1f3" satisfied condition "Succeeded or Failed" + Jan 14 04:34:20.833: INFO: Trying to get logs from node 10.0.1.106 pod downwardapi-volume-66455353-3deb-484c-a92f-5bca4993e1f3 container client-container: + STEP: delete the pod 01/14/23 04:34:20.838 + Jan 14 04:34:20.851: INFO: Waiting for pod downwardapi-volume-66455353-3deb-484c-a92f-5bca4993e1f3 to disappear + Jan 14 04:34:20.854: INFO: Pod downwardapi-volume-66455353-3deb-484c-a92f-5bca4993e1f3 no longer exists + [AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/node/init/init.go:32 + Jan 14 04:34:20.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-4722" for this suite. 
01/14/23 04:34:20.86 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-storage] Downward API volume + should provide podname only [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:53 +[BeforeEach] [sig-storage] Downward API volume + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:34:20.865 +Jan 14 04:34:20.865: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename downward-api 01/14/23 04:34:20.866 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:34:20.88 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:34:20.882 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 +[It] should provide podname only [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:53 +STEP: Creating a pod to test downward API volume plugin 01/14/23 04:34:20.885 +Jan 14 04:34:20.893: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bc27d0c1-d593-4486-bb6a-5ad4051ee249" in namespace "downward-api-2529" to be "Succeeded or Failed" +Jan 14 04:34:20.896: INFO: Pod "downwardapi-volume-bc27d0c1-d593-4486-bb6a-5ad4051ee249": Phase="Pending", Reason="", readiness=false. Elapsed: 2.752344ms +Jan 14 04:34:22.899: INFO: Pod "downwardapi-volume-bc27d0c1-d593-4486-bb6a-5ad4051ee249": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006431016s +Jan 14 04:34:24.900: INFO: Pod "downwardapi-volume-bc27d0c1-d593-4486-bb6a-5ad4051ee249": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007599439s +STEP: Saw pod success 01/14/23 04:34:24.901 +Jan 14 04:34:24.901: INFO: Pod "downwardapi-volume-bc27d0c1-d593-4486-bb6a-5ad4051ee249" satisfied condition "Succeeded or Failed" +Jan 14 04:34:24.904: INFO: Trying to get logs from node 10.0.1.106 pod downwardapi-volume-bc27d0c1-d593-4486-bb6a-5ad4051ee249 container client-container: +STEP: delete the pod 01/14/23 04:34:24.911 +Jan 14 04:34:24.927: INFO: Waiting for pod downwardapi-volume-bc27d0c1-d593-4486-bb6a-5ad4051ee249 to disappear +Jan 14 04:34:24.930: INFO: Pod downwardapi-volume-bc27d0c1-d593-4486-bb6a-5ad4051ee249 no longer exists +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 +Jan 14 04:34:24.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 +STEP: Destroying namespace "downward-api-2529" for this suite. 
01/14/23 04:34:24.934 +------------------------------ +• [4.074 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should provide podname only [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:53 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Downward API volume + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:34:20.865 + Jan 14 04:34:20.865: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename downward-api 01/14/23 04:34:20.866 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:34:20.88 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:34:20.882 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 + [It] should provide podname only [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:53 + STEP: Creating a pod to test downward API volume plugin 01/14/23 04:34:20.885 + Jan 14 04:34:20.893: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bc27d0c1-d593-4486-bb6a-5ad4051ee249" in namespace "downward-api-2529" to be "Succeeded or Failed" + Jan 14 04:34:20.896: INFO: Pod "downwardapi-volume-bc27d0c1-d593-4486-bb6a-5ad4051ee249": Phase="Pending", Reason="", readiness=false. Elapsed: 2.752344ms + Jan 14 04:34:22.899: INFO: Pod "downwardapi-volume-bc27d0c1-d593-4486-bb6a-5ad4051ee249": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006431016s + Jan 14 04:34:24.900: INFO: Pod "downwardapi-volume-bc27d0c1-d593-4486-bb6a-5ad4051ee249": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007599439s + STEP: Saw pod success 01/14/23 04:34:24.901 + Jan 14 04:34:24.901: INFO: Pod "downwardapi-volume-bc27d0c1-d593-4486-bb6a-5ad4051ee249" satisfied condition "Succeeded or Failed" + Jan 14 04:34:24.904: INFO: Trying to get logs from node 10.0.1.106 pod downwardapi-volume-bc27d0c1-d593-4486-bb6a-5ad4051ee249 container client-container: + STEP: delete the pod 01/14/23 04:34:24.911 + Jan 14 04:34:24.927: INFO: Waiting for pod downwardapi-volume-bc27d0c1-d593-4486-bb6a-5ad4051ee249 to disappear + Jan 14 04:34:24.930: INFO: Pod downwardapi-volume-bc27d0c1-d593-4486-bb6a-5ad4051ee249 no longer exists + [AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 + Jan 14 04:34:24.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 + STEP: Destroying namespace "downward-api-2529" for this suite. 
01/14/23 04:34:24.934 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-node] Pods + should delete a collection of pods [Conformance] + test/e2e/common/node/pods.go:845 +[BeforeEach] [sig-node] Pods + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:34:24.94 +Jan 14 04:34:24.940: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename pods 01/14/23 04:34:24.94 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:34:24.955 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:34:24.957 +[BeforeEach] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 +[It] should delete a collection of pods [Conformance] + test/e2e/common/node/pods.go:845 +STEP: Create set of pods 01/14/23 04:34:24.959 +Jan 14 04:34:24.967: INFO: created test-pod-1 +Jan 14 04:34:24.974: INFO: created test-pod-2 +Jan 14 04:34:24.980: INFO: created test-pod-3 +STEP: waiting for all 3 pods to be running 01/14/23 04:34:24.98 +Jan 14 04:34:24.980: INFO: Waiting up to 5m0s for all pods (need at least 3) in namespace 'pods-1859' to be running and ready +Jan 14 04:34:24.997: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed +Jan 14 04:34:24.997: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed +Jan 14 04:34:24.997: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed +Jan 14 04:34:24.997: INFO: 0 / 3 pods in namespace 'pods-1859' are running and ready (0 seconds elapsed) +Jan 14 04:34:24.997: INFO: expected 0 pod replicas in namespace 'pods-1859', 0 are Running and Ready. +Jan 14 04:34:24.997: INFO: POD NODE PHASE GRACE CONDITIONS +Jan 14 04:34:24.997: INFO: test-pod-1 10.0.1.106 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:34:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:34:24 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:34:24 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:34:24 +0000 UTC }] +Jan 14 04:34:24.997: INFO: test-pod-2 10.0.1.106 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:34:24 +0000 UTC }] +Jan 14 04:34:24.997: INFO: test-pod-3 10.0.1.106 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:34:24 +0000 UTC }] +Jan 14 04:34:24.997: INFO: +Jan 14 04:34:27.008: INFO: 3 / 3 pods in namespace 'pods-1859' are running and ready (2 seconds elapsed) +Jan 14 04:34:27.008: INFO: expected 0 pod replicas in namespace 'pods-1859', 0 are Running and Ready. 
+STEP: waiting for all pods to be deleted 01/14/23 04:34:27.029 +Jan 14 04:34:27.032: INFO: Pod quantity 3 is different from expected quantity 0 +Jan 14 04:34:28.036: INFO: Pod quantity 3 is different from expected quantity 0 +[AfterEach] [sig-node] Pods + test/e2e/framework/node/init/init.go:32 +Jan 14 04:34:29.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Pods + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Pods + tear down framework | framework.go:193 +STEP: Destroying namespace "pods-1859" for this suite. 01/14/23 04:34:29.042 +------------------------------ +• [4.108 seconds] +[sig-node] Pods +test/e2e/common/node/framework.go:23 + should delete a collection of pods [Conformance] + test/e2e/common/node/pods.go:845 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Pods + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:34:24.94 + Jan 14 04:34:24.940: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename pods 01/14/23 04:34:24.94 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:34:24.955 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:34:24.957 + [BeforeEach] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Pods + test/e2e/common/node/pods.go:194 + [It] should delete a collection of pods [Conformance] + test/e2e/common/node/pods.go:845 + STEP: Create set of pods 01/14/23 04:34:24.959 + Jan 14 04:34:24.967: INFO: created test-pod-1 + Jan 14 04:34:24.974: INFO: created test-pod-2 + Jan 14 04:34:24.980: INFO: created test-pod-3 + STEP: waiting for all 3 pods to be running 01/14/23 04:34:24.98 + Jan 14 04:34:24.980: INFO: Waiting up to 5m0s for all pods (need at least 3) in namespace 'pods-1859' to be running and ready + Jan 14 04:34:24.997: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed + Jan 14 04:34:24.997: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed + Jan 14 04:34:24.997: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed + Jan 14 04:34:24.997: INFO: 0 / 3 pods in namespace 'pods-1859' are running and ready (0 seconds elapsed) + Jan 14 04:34:24.997: INFO: expected 0 pod replicas in namespace 'pods-1859', 0 are Running and Ready. 
+ Jan 14 04:34:24.997: INFO: POD NODE PHASE GRACE CONDITIONS + Jan 14 04:34:24.997: INFO: test-pod-1 10.0.1.106 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:34:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:34:24 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:34:24 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:34:24 +0000 UTC }] + Jan 14 04:34:24.997: INFO: test-pod-2 10.0.1.106 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:34:24 +0000 UTC }] + Jan 14 04:34:24.997: INFO: test-pod-3 10.0.1.106 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:34:24 +0000 UTC }] + Jan 14 04:34:24.997: INFO: + Jan 14 04:34:27.008: INFO: 3 / 3 pods in namespace 'pods-1859' are running and ready (2 seconds elapsed) + Jan 14 04:34:27.008: INFO: expected 0 pod replicas in namespace 'pods-1859', 0 are Running and Ready. + STEP: waiting for all pods to be deleted 01/14/23 04:34:27.029 + Jan 14 04:34:27.032: INFO: Pod quantity 3 is different from expected quantity 0 + Jan 14 04:34:28.036: INFO: Pod quantity 3 is different from expected quantity 0 + [AfterEach] [sig-node] Pods + test/e2e/framework/node/init/init.go:32 + Jan 14 04:34:29.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Pods + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Pods + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Pods + tear down framework | framework.go:193 + STEP: Destroying namespace "pods-1859" for this suite. 01/14/23 04:34:29.042 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:137 +[BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:34:29.048 +Jan 14 04:34:29.048: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename emptydir 01/14/23 04:34:29.049 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:34:29.064 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:34:29.066 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 +[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:137 +STEP: Creating a pod to test emptydir 0666 on tmpfs 01/14/23 04:34:29.069 +Jan 14 04:34:29.077: INFO: Waiting up to 5m0s for pod "pod-dc3a8523-2b77-4428-81bd-ec0e01622708" in namespace "emptydir-9462" to be "Succeeded or Failed" +Jan 14 04:34:29.080: INFO: Pod "pod-dc3a8523-2b77-4428-81bd-ec0e01622708": Phase="Pending", Reason="", readiness=false. Elapsed: 2.918321ms +Jan 14 04:34:31.085: INFO: Pod "pod-dc3a8523-2b77-4428-81bd-ec0e01622708": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007926569s +Jan 14 04:34:33.086: INFO: Pod "pod-dc3a8523-2b77-4428-81bd-ec0e01622708": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008948542s +STEP: Saw pod success 01/14/23 04:34:33.086 +Jan 14 04:34:33.087: INFO: Pod "pod-dc3a8523-2b77-4428-81bd-ec0e01622708" satisfied condition "Succeeded or Failed" +Jan 14 04:34:33.090: INFO: Trying to get logs from node 10.0.1.106 pod pod-dc3a8523-2b77-4428-81bd-ec0e01622708 container test-container: +STEP: delete the pod 01/14/23 04:34:33.095 +Jan 14 04:34:33.107: INFO: Waiting for pod pod-dc3a8523-2b77-4428-81bd-ec0e01622708 to disappear +Jan 14 04:34:33.110: INFO: Pod pod-dc3a8523-2b77-4428-81bd-ec0e01622708 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 +Jan 14 04:34:33.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 +STEP: Destroying namespace "emptydir-9462" for this suite. 01/14/23 04:34:33.115 +------------------------------ +• [4.071 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:137 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:34:29.048 + Jan 14 04:34:29.048: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename emptydir 01/14/23 04:34:29.049 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:34:29.064 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:34:29.066 + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 + [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:137 + STEP: Creating a pod to test emptydir 0666 on tmpfs 01/14/23 04:34:29.069 + Jan 14 04:34:29.077: INFO: Waiting up to 5m0s for pod "pod-dc3a8523-2b77-4428-81bd-ec0e01622708" in namespace "emptydir-9462" to be "Succeeded or Failed" + Jan 14 04:34:29.080: INFO: Pod "pod-dc3a8523-2b77-4428-81bd-ec0e01622708": Phase="Pending", Reason="", readiness=false. Elapsed: 2.918321ms + Jan 14 04:34:31.085: INFO: Pod "pod-dc3a8523-2b77-4428-81bd-ec0e01622708": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007926569s + Jan 14 04:34:33.086: INFO: Pod "pod-dc3a8523-2b77-4428-81bd-ec0e01622708": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008948542s + STEP: Saw pod success 01/14/23 04:34:33.086 + Jan 14 04:34:33.087: INFO: Pod "pod-dc3a8523-2b77-4428-81bd-ec0e01622708" satisfied condition "Succeeded or Failed" + Jan 14 04:34:33.090: INFO: Trying to get logs from node 10.0.1.106 pod pod-dc3a8523-2b77-4428-81bd-ec0e01622708 container test-container: + STEP: delete the pod 01/14/23 04:34:33.095 + Jan 14 04:34:33.107: INFO: Waiting for pod pod-dc3a8523-2b77-4428-81bd-ec0e01622708 to disappear + Jan 14 04:34:33.110: INFO: Pod pod-dc3a8523-2b77-4428-81bd-ec0e01622708 no longer exists + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 + Jan 14 04:34:33.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 + STEP: Destroying namespace "emptydir-9462" for this suite. 01/14/23 04:34:33.115 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath + runs ReplicaSets to verify preemption running path [Conformance] + test/e2e/scheduling/preemption.go:616 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:34:33.12 +Jan 14 04:34:33.120: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename sched-preemption 01/14/23 04:34:33.121 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:34:33.135 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:34:33.137 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:96 +Jan 14 04:34:33.153: INFO: Waiting up to 1m0s for all nodes to be ready +Jan 14 04:35:33.190: INFO: Waiting for terminating namespaces to be deleted... +[BeforeEach] PreemptionExecutionPath + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:35:33.192 +Jan 14 04:35:33.192: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename sched-preemption-path 01/14/23 04:35:33.193 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:35:33.208 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:35:33.211 +[BeforeEach] PreemptionExecutionPath + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] PreemptionExecutionPath + test/e2e/scheduling/preemption.go:569 +STEP: Finding an available node 01/14/23 04:35:33.213 +STEP: Trying to launch a pod without a label to get a node which can launch it. 01/14/23 04:35:33.213 +Jan 14 04:35:33.222: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-preemption-path-3800" to be "running" +Jan 14 04:35:33.225: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 3.111344ms +Jan 14 04:35:35.230: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.007569669s +Jan 14 04:35:35.230: INFO: Pod "without-label" satisfied condition "running" +STEP: Explicitly delete pod here to free the resource it takes. 01/14/23 04:35:35.233 +Jan 14 04:35:35.246: INFO: found a healthy node: 10.0.1.106 +[It] runs ReplicaSets to verify preemption running path [Conformance] + test/e2e/scheduling/preemption.go:616 +Jan 14 04:35:41.310: INFO: pods created so far: [1 1 1] +Jan 14 04:35:41.310: INFO: length of pods created so far: 3 +Jan 14 04:35:43.323: INFO: pods created so far: [2 2 1] +[AfterEach] PreemptionExecutionPath + test/e2e/framework/node/init/init.go:32 +Jan 14 04:35:50.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] PreemptionExecutionPath + test/e2e/scheduling/preemption.go:543 +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/node/init/init.go:32 +Jan 14 04:35:50.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:84 +[DeferCleanup (Each)] PreemptionExecutionPath + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] PreemptionExecutionPath + dump namespaces | framework.go:196 +[DeferCleanup (Each)] PreemptionExecutionPath + tear down framework | framework.go:193 +STEP: Destroying namespace "sched-preemption-path-3800" for this suite. 01/14/23 04:35:50.406 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "sched-preemption-9936" for this suite. 01/14/23 04:35:50.411 +------------------------------ +• [SLOW TEST] [77.295 seconds] +[sig-scheduling] SchedulerPreemption [Serial] +test/e2e/scheduling/framework.go:40 + PreemptionExecutionPath + test/e2e/scheduling/preemption.go:531 + runs ReplicaSets to verify preemption running path [Conformance] + test/e2e/scheduling/preemption.go:616 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:34:33.12 + Jan 14 04:34:33.120: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename sched-preemption 01/14/23 04:34:33.121 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:34:33.135 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:34:33.137 + [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:96 + Jan 14 04:34:33.153: INFO: Waiting up to 1m0s for all nodes to be ready + Jan 14 04:35:33.190: INFO: Waiting for terminating namespaces to be deleted... 
+ [BeforeEach] PreemptionExecutionPath + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:35:33.192 + Jan 14 04:35:33.192: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename sched-preemption-path 01/14/23 04:35:33.193 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:35:33.208 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:35:33.211 + [BeforeEach] PreemptionExecutionPath + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] PreemptionExecutionPath + test/e2e/scheduling/preemption.go:569 + STEP: Finding an available node 01/14/23 04:35:33.213 + STEP: Trying to launch a pod without a label to get a node which can launch it. 01/14/23 04:35:33.213 + Jan 14 04:35:33.222: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-preemption-path-3800" to be "running" + Jan 14 04:35:33.225: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 3.111344ms + Jan 14 04:35:35.230: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.007569669s + Jan 14 04:35:35.230: INFO: Pod "without-label" satisfied condition "running" + STEP: Explicitly delete pod here to free the resource it takes. 01/14/23 04:35:35.233 + Jan 14 04:35:35.246: INFO: found a healthy node: 10.0.1.106 + [It] runs ReplicaSets to verify preemption running path [Conformance] + test/e2e/scheduling/preemption.go:616 + Jan 14 04:35:41.310: INFO: pods created so far: [1 1 1] + Jan 14 04:35:41.310: INFO: length of pods created so far: 3 + Jan 14 04:35:43.323: INFO: pods created so far: [2 2 1] + [AfterEach] PreemptionExecutionPath + test/e2e/framework/node/init/init.go:32 + Jan 14 04:35:50.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] PreemptionExecutionPath + test/e2e/scheduling/preemption.go:543 + [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/node/init/init.go:32 + Jan 14 04:35:50.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:84 + [DeferCleanup (Each)] PreemptionExecutionPath + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] PreemptionExecutionPath + dump namespaces | framework.go:196 + [DeferCleanup (Each)] PreemptionExecutionPath + tear down framework | framework.go:193 + STEP: Destroying namespace "sched-preemption-path-3800" for this suite. 01/14/23 04:35:50.406 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "sched-preemption-9936" for this suite. 
01/14/23 04:35:50.411 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-apps] CronJob + should replace jobs when ReplaceConcurrent [Conformance] + test/e2e/apps/cronjob.go:160 +[BeforeEach] [sig-apps] CronJob + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:35:50.416 +Jan 14 04:35:50.416: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename cronjob 01/14/23 04:35:50.416 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:35:50.431 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:35:50.433 +[BeforeEach] [sig-apps] CronJob + test/e2e/framework/metrics/init/init.go:31 +[It] should replace jobs when ReplaceConcurrent [Conformance] + test/e2e/apps/cronjob.go:160 +STEP: Creating a ReplaceConcurrent cronjob 01/14/23 04:35:50.435 +STEP: Ensuring a job is scheduled 01/14/23 04:35:50.441 +STEP: Ensuring exactly one is scheduled 01/14/23 04:36:00.445 +STEP: Ensuring exactly one running job exists by listing jobs explicitly 01/14/23 04:36:00.447 +STEP: Ensuring the job is replaced with a new one 01/14/23 04:36:00.45 +STEP: Removing cronjob 01/14/23 04:37:00.455 +[AfterEach] [sig-apps] CronJob + test/e2e/framework/node/init/init.go:32 +Jan 14 04:37:00.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] CronJob + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] CronJob + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] CronJob + tear down framework | framework.go:193 +STEP: Destroying namespace "cronjob-1224" for this suite. 01/14/23 04:37:00.465 +------------------------------ +• [SLOW TEST] [70.056 seconds] +[sig-apps] CronJob +test/e2e/apps/framework.go:23 + should replace jobs when ReplaceConcurrent [Conformance] + test/e2e/apps/cronjob.go:160 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] CronJob + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:35:50.416 + Jan 14 04:35:50.416: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename cronjob 01/14/23 04:35:50.416 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:35:50.431 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:35:50.433 + [BeforeEach] [sig-apps] CronJob + test/e2e/framework/metrics/init/init.go:31 + [It] should replace jobs when ReplaceConcurrent [Conformance] + test/e2e/apps/cronjob.go:160 + STEP: Creating a ReplaceConcurrent cronjob 01/14/23 04:35:50.435 + STEP: Ensuring a job is scheduled 01/14/23 04:35:50.441 + STEP: Ensuring exactly one is scheduled 01/14/23 04:36:00.445 + STEP: Ensuring exactly one running job exists by listing jobs explicitly 01/14/23 04:36:00.447 + STEP: Ensuring the job is replaced with a new one 01/14/23 04:36:00.45 + STEP: Removing cronjob 01/14/23 04:37:00.455 + [AfterEach] [sig-apps] CronJob + test/e2e/framework/node/init/init.go:32 + Jan 14 04:37:00.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] CronJob + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] CronJob + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] CronJob + tear down framework | framework.go:193 + STEP: Destroying namespace "cronjob-1224" for this suite. 
01/14/23 04:37:00.465 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + deployment should delete old replica sets [Conformance] + test/e2e/apps/deployment.go:122 +[BeforeEach] [sig-apps] Deployment + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:37:00.473 +Jan 14 04:37:00.473: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename deployment 01/14/23 04:37:00.474 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:37:00.493 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:37:00.495 +[BeforeEach] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 +[It] deployment should delete old replica sets [Conformance] + test/e2e/apps/deployment.go:122 +Jan 14 04:37:00.505: INFO: Pod name cleanup-pod: Found 0 pods out of 1 +Jan 14 04:37:05.511: INFO: Pod name cleanup-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running 01/14/23 04:37:05.511 +Jan 14 04:37:05.511: INFO: Creating deployment test-cleanup-deployment +STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up 01/14/23 04:37:05.528 +[AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 +Jan 14 04:37:05.538: INFO: Deployment "test-cleanup-deployment": +&Deployment{ObjectMeta:{test-cleanup-deployment deployment-4363 30b8908d-28c5-4c92-99fb-854470a7cb2a 447121 1 2023-01-14 04:37:05 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2023-01-14 04:37:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00544dfc8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} + +Jan 14 04:37:05.541: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. +Jan 14 04:37:05.541: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": +Jan 14 04:37:05.541: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-4363 e415a815-90ac-4d27-8c90-66032823af4b 447124 1 2023-01-14 04:37:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 30b8908d-28c5-4c92-99fb-854470a7cb2a 0xc000f64ae7 0xc000f64ae8}] [] [{e2e.test Update apps/v1 2023-01-14 04:37:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 04:37:02 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status} {kube-controller-manager Update apps/v1 2023-01-14 04:37:05 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"30b8908d-28c5-4c92-99fb-854470a7cb2a\"}":{}}}} }]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc000f64c88 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Jan 14 04:37:05.545: INFO: Pod "test-cleanup-controller-vjkjz" is available: +&Pod{ObjectMeta:{test-cleanup-controller-vjkjz test-cleanup-controller- deployment-4363 b2869ff7-ea9a-4d08-99cc-d8d97a4b583e 447098 0 2023-01-14 04:37:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.1.40" + ], + "mac": "12:5e:35:dd:2e:b5", + "default": true, + "dns": {} +}]] [{apps/v1 ReplicaSet test-cleanup-controller e415a815-90ac-4d27-8c90-66032823af4b 0xc000f655f7 0xc000f655f8}] [] [{kube-controller-manager Update v1 2023-01-14 04:37:00 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e415a815-90ac-4d27-8c90-66032823af4b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {multus Update v1 2023-01-14 04:37:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}} status} {kubelet Update v1 2023-01-14 04:37:02 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.52.1.40\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-m9dqk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m9dqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.106,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptio
ns:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:37:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:37:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:37:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:37:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.106,PodIP:10.52.1.40,StartTime:2023-01-14 04:37:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 04:37:01 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:containerd://27ae9d3b6affe6e0b8d9024d93b8f78f066bfbb21848109f6bfa067b2cfed41a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.52.1.40,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + test/e2e/framework/node/init/init.go:32 +Jan 14 04:37:05.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] Deployment + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] Deployment + tear down framework | framework.go:193 +STEP: Destroying namespace "deployment-4363" for this suite. 
01/14/23 04:37:05.554 +------------------------------ +• [SLOW TEST] [5.089 seconds] +[sig-apps] Deployment +test/e2e/apps/framework.go:23 + deployment should delete old replica sets [Conformance] + test/e2e/apps/deployment.go:122 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Deployment + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:37:00.473 + Jan 14 04:37:00.473: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename deployment 01/14/23 04:37:00.474 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:37:00.493 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:37:00.495 + [BeforeEach] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 + [It] deployment should delete old replica sets [Conformance] + test/e2e/apps/deployment.go:122 + Jan 14 04:37:00.505: INFO: Pod name cleanup-pod: Found 0 pods out of 1 + Jan 14 04:37:05.511: INFO: Pod name cleanup-pod: Found 1 pods out of 1 + STEP: ensuring each pod is running 01/14/23 04:37:05.511 + Jan 14 04:37:05.511: INFO: Creating deployment test-cleanup-deployment + STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up 01/14/23 04:37:05.528 + [AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 + Jan 14 04:37:05.538: INFO: Deployment "test-cleanup-deployment": + &Deployment{ObjectMeta:{test-cleanup-deployment deployment-4363 30b8908d-28c5-4c92-99fb-854470a7cb2a 447121 1 2023-01-14 04:37:05 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2023-01-14 04:37:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00544dfc8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} + + Jan 14 04:37:05.541: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. + Jan 14 04:37:05.541: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": + Jan 14 04:37:05.541: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-4363 e415a815-90ac-4d27-8c90-66032823af4b 447124 1 2023-01-14 04:37:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 30b8908d-28c5-4c92-99fb-854470a7cb2a 0xc000f64ae7 0xc000f64ae8}] [] [{e2e.test Update apps/v1 2023-01-14 04:37:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 04:37:02 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status} {kube-controller-manager Update apps/v1 2023-01-14 04:37:05 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"30b8908d-28c5-4c92-99fb-854470a7cb2a\"}":{}}}} }]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc000f64c88 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} + Jan 14 04:37:05.545: INFO: Pod "test-cleanup-controller-vjkjz" is available: + &Pod{ObjectMeta:{test-cleanup-controller-vjkjz test-cleanup-controller- deployment-4363 b2869ff7-ea9a-4d08-99cc-d8d97a4b583e 447098 0 2023-01-14 04:37:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.1.40" + ], + "mac": "12:5e:35:dd:2e:b5", + "default": true, + "dns": {} + }]] [{apps/v1 ReplicaSet test-cleanup-controller e415a815-90ac-4d27-8c90-66032823af4b 0xc000f655f7 0xc000f655f8}] [] [{kube-controller-manager Update v1 2023-01-14 04:37:00 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e415a815-90ac-4d27-8c90-66032823af4b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {multus Update v1 2023-01-14 04:37:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}} status} {kubelet Update v1 2023-01-14 04:37:02 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.52.1.40\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-m9dqk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m9dqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.106,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptio
ns:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:37:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:37:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:37:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:37:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.106,PodIP:10.52.1.40,StartTime:2023-01-14 04:37:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 04:37:01 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:containerd://27ae9d3b6affe6e0b8d9024d93b8f78f066bfbb21848109f6bfa067b2cfed41a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.52.1.40,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + [AfterEach] [sig-apps] Deployment + test/e2e/framework/node/init/init.go:32 + Jan 14 04:37:05.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] Deployment + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] Deployment + tear down framework | framework.go:193 + STEP: Destroying namespace "deployment-4363" for this suite. 
01/14/23 04:37:05.554 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:187 +[BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:37:05.563 +Jan 14 04:37:05.563: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename emptydir 01/14/23 04:37:05.564 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:37:05.582 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:37:05.585 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 +[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:187 +STEP: Creating a pod to test emptydir 0777 on node default medium 01/14/23 04:37:05.588 +Jan 14 04:37:05.599: INFO: Waiting up to 5m0s for pod "pod-028af6e7-2b2f-453b-8b62-06deb7c06b74" in namespace "emptydir-1607" to be "Succeeded or Failed" +Jan 14 04:37:05.602: INFO: Pod "pod-028af6e7-2b2f-453b-8b62-06deb7c06b74": Phase="Pending", Reason="", readiness=false. Elapsed: 3.044161ms +Jan 14 04:37:07.608: INFO: Pod "pod-028af6e7-2b2f-453b-8b62-06deb7c06b74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009013151s +Jan 14 04:37:09.607: INFO: Pod "pod-028af6e7-2b2f-453b-8b62-06deb7c06b74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00791835s +STEP: Saw pod success 01/14/23 04:37:09.607 +Jan 14 04:37:09.607: INFO: Pod "pod-028af6e7-2b2f-453b-8b62-06deb7c06b74" satisfied condition "Succeeded or Failed" +Jan 14 04:37:09.610: INFO: Trying to get logs from node 10.0.1.99 pod pod-028af6e7-2b2f-453b-8b62-06deb7c06b74 container test-container: +STEP: delete the pod 01/14/23 04:37:09.621 +Jan 14 04:37:09.635: INFO: Waiting for pod pod-028af6e7-2b2f-453b-8b62-06deb7c06b74 to disappear +Jan 14 04:37:09.638: INFO: Pod pod-028af6e7-2b2f-453b-8b62-06deb7c06b74 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 +Jan 14 04:37:09.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 +STEP: Destroying namespace "emptydir-1607" for this suite. 
01/14/23 04:37:09.643 +------------------------------ +• [4.086 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:187 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:37:05.563 + Jan 14 04:37:05.563: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename emptydir 01/14/23 04:37:05.564 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:37:05.582 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:37:05.585 + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 + [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:187 + STEP: Creating a pod to test emptydir 0777 on node default medium 01/14/23 04:37:05.588 + Jan 14 04:37:05.599: INFO: Waiting up to 5m0s for pod "pod-028af6e7-2b2f-453b-8b62-06deb7c06b74" in namespace "emptydir-1607" to be "Succeeded or Failed" + Jan 14 04:37:05.602: INFO: Pod "pod-028af6e7-2b2f-453b-8b62-06deb7c06b74": Phase="Pending", Reason="", readiness=false. Elapsed: 3.044161ms + Jan 14 04:37:07.608: INFO: Pod "pod-028af6e7-2b2f-453b-8b62-06deb7c06b74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009013151s + Jan 14 04:37:09.607: INFO: Pod "pod-028af6e7-2b2f-453b-8b62-06deb7c06b74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00791835s + STEP: Saw pod success 01/14/23 04:37:09.607 + Jan 14 04:37:09.607: INFO: Pod "pod-028af6e7-2b2f-453b-8b62-06deb7c06b74" satisfied condition "Succeeded or Failed" + Jan 14 04:37:09.610: INFO: Trying to get logs from node 10.0.1.99 pod pod-028af6e7-2b2f-453b-8b62-06deb7c06b74 container test-container: + STEP: delete the pod 01/14/23 04:37:09.621 + Jan 14 04:37:09.635: INFO: Waiting for pod pod-028af6e7-2b2f-453b-8b62-06deb7c06b74 to disappear + Jan 14 04:37:09.638: INFO: Pod pod-028af6e7-2b2f-453b-8b62-06deb7c06b74 no longer exists + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 + Jan 14 04:37:09.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 + STEP: Destroying namespace "emptydir-1607" for this suite. 
01/14/23 04:37:09.643 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide host IP as an env var [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:90 +[BeforeEach] [sig-node] Downward API + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:37:09.649 +Jan 14 04:37:09.649: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename downward-api 01/14/23 04:37:09.65 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:37:09.665 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:37:09.668 +[BeforeEach] [sig-node] Downward API + test/e2e/framework/metrics/init/init.go:31 +[It] should provide host IP as an env var [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:90 +STEP: Creating a pod to test downward api env vars 01/14/23 04:37:09.67 +Jan 14 04:37:09.680: INFO: Waiting up to 5m0s for pod "downward-api-ccac7118-4107-47cd-85ba-cbabd0026e6c" in namespace "downward-api-2405" to be "Succeeded or Failed" +Jan 14 04:37:09.683: INFO: Pod "downward-api-ccac7118-4107-47cd-85ba-cbabd0026e6c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.350061ms +Jan 14 04:37:11.688: INFO: Pod "downward-api-ccac7118-4107-47cd-85ba-cbabd0026e6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008285545s +Jan 14 04:37:13.689: INFO: Pod "downward-api-ccac7118-4107-47cd-85ba-cbabd0026e6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00932734s +STEP: Saw pod success 01/14/23 04:37:13.689 +Jan 14 04:37:13.690: INFO: Pod "downward-api-ccac7118-4107-47cd-85ba-cbabd0026e6c" satisfied condition "Succeeded or Failed" +Jan 14 04:37:13.693: INFO: Trying to get logs from node 10.0.1.212 pod downward-api-ccac7118-4107-47cd-85ba-cbabd0026e6c container dapi-container: +STEP: delete the pod 01/14/23 04:37:13.706 +Jan 14 04:37:13.723: INFO: Waiting for pod downward-api-ccac7118-4107-47cd-85ba-cbabd0026e6c to disappear +Jan 14 04:37:13.726: INFO: Pod downward-api-ccac7118-4107-47cd-85ba-cbabd0026e6c no longer exists +[AfterEach] [sig-node] Downward API + test/e2e/framework/node/init/init.go:32 +Jan 14 04:37:13.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Downward API + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Downward API + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Downward API + tear down framework | framework.go:193 +STEP: Destroying namespace "downward-api-2405" for this suite. 
01/14/23 04:37:13.731 +------------------------------ +• [4.088 seconds] +[sig-node] Downward API +test/e2e/common/node/framework.go:23 + should provide host IP as an env var [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:90 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Downward API + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:37:09.649 + Jan 14 04:37:09.649: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename downward-api 01/14/23 04:37:09.65 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:37:09.665 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:37:09.668 + [BeforeEach] [sig-node] Downward API + test/e2e/framework/metrics/init/init.go:31 + [It] should provide host IP as an env var [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:90 + STEP: Creating a pod to test downward api env vars 01/14/23 04:37:09.67 + Jan 14 04:37:09.680: INFO: Waiting up to 5m0s for pod "downward-api-ccac7118-4107-47cd-85ba-cbabd0026e6c" in namespace "downward-api-2405" to be "Succeeded or Failed" + Jan 14 04:37:09.683: INFO: Pod "downward-api-ccac7118-4107-47cd-85ba-cbabd0026e6c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.350061ms + Jan 14 04:37:11.688: INFO: Pod "downward-api-ccac7118-4107-47cd-85ba-cbabd0026e6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008285545s + Jan 14 04:37:13.689: INFO: Pod "downward-api-ccac7118-4107-47cd-85ba-cbabd0026e6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00932734s + STEP: Saw pod success 01/14/23 04:37:13.689 + Jan 14 04:37:13.690: INFO: Pod "downward-api-ccac7118-4107-47cd-85ba-cbabd0026e6c" satisfied condition "Succeeded or Failed" + Jan 14 04:37:13.693: INFO: Trying to get logs from node 10.0.1.212 pod downward-api-ccac7118-4107-47cd-85ba-cbabd0026e6c container dapi-container: + STEP: delete the pod 01/14/23 04:37:13.706 + Jan 14 04:37:13.723: INFO: Waiting for pod downward-api-ccac7118-4107-47cd-85ba-cbabd0026e6c to disappear + Jan 14 04:37:13.726: INFO: Pod downward-api-ccac7118-4107-47cd-85ba-cbabd0026e6c no longer exists + [AfterEach] [sig-node] Downward API + test/e2e/framework/node/init/init.go:32 + Jan 14 04:37:13.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Downward API + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Downward API + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Downward API + tear down framework | framework.go:193 + STEP: Destroying namespace "downward-api-2405" for this suite. 01/14/23 04:37:13.731 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a replica set. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:448 +[BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:37:13.737 +Jan 14 04:37:13.737: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename resourcequota 01/14/23 04:37:13.738 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:37:13.752 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:37:13.755 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 +[It] should create a ResourceQuota and capture the life of a replica set. [Conformance] + test/e2e/apimachinery/resource_quota.go:448 +STEP: Counting existing ResourceQuota 01/14/23 04:37:13.757 +STEP: Creating a ResourceQuota 01/14/23 04:37:18.762 +STEP: Ensuring resource quota status is calculated 01/14/23 04:37:18.766 +STEP: Creating a ReplicaSet 01/14/23 04:37:20.77 +STEP: Ensuring resource quota status captures replicaset creation 01/14/23 04:37:20.785 +STEP: Deleting a ReplicaSet 01/14/23 04:37:22.788 +STEP: Ensuring resource quota status released usage 01/14/23 04:37:22.794 +[AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/node/init/init.go:32 +Jan 14 04:37:24.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + tear down framework | framework.go:193 +STEP: Destroying namespace "resourcequota-5386" for this suite. 01/14/23 04:37:24.805 +------------------------------ +• [SLOW TEST] [11.074 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a replica set. [Conformance] + test/e2e/apimachinery/resource_quota.go:448 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:37:13.737 + Jan 14 04:37:13.737: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename resourcequota 01/14/23 04:37:13.738 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:37:13.752 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:37:13.755 + [BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 + [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:448 + STEP: Counting existing ResourceQuota 01/14/23 04:37:13.757 + STEP: Creating a ResourceQuota 01/14/23 04:37:18.762 + STEP: Ensuring resource quota status is calculated 01/14/23 04:37:18.766 + STEP: Creating a ReplicaSet 01/14/23 04:37:20.77 + STEP: Ensuring resource quota status captures replicaset creation 01/14/23 04:37:20.785 + STEP: Deleting a ReplicaSet 01/14/23 04:37:22.788 + STEP: Ensuring resource quota status released usage 01/14/23 04:37:22.794 + [AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/node/init/init.go:32 + Jan 14 04:37:24.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + tear down framework | framework.go:193 + STEP: Destroying namespace "resourcequota-5386" for this suite. 01/14/23 04:37:24.805 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces + should list and delete a collection of PodDisruptionBudgets [Conformance] + test/e2e/apps/disruption.go:87 +[BeforeEach] [sig-apps] DisruptionController + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:37:24.812 +Jan 14 04:37:24.812: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename disruption 01/14/23 04:37:24.813 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:37:24.852 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:37:24.855 +[BeforeEach] [sig-apps] DisruptionController + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] DisruptionController + test/e2e/apps/disruption.go:72 +[BeforeEach] Listing PodDisruptionBudgets for all namespaces + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:37:24.86 +Jan 14 04:37:24.860: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename disruption-2 01/14/23 04:37:24.861 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:37:24.918 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:37:24.92 +[BeforeEach] Listing PodDisruptionBudgets for all namespaces + test/e2e/framework/metrics/init/init.go:31 +[It] should list and delete a collection of PodDisruptionBudgets [Conformance] + test/e2e/apps/disruption.go:87 +STEP: Waiting for the pdb to be processed 01/14/23 04:37:24.948 +STEP: Waiting for the pdb to be processed 01/14/23 04:37:26.963 +STEP: Waiting for the pdb to be processed 01/14/23 04:37:28.976 +STEP: listing a collection of PDBs across all namespaces 01/14/23 04:37:30.984 +STEP: listing a collection of PDBs in namespace disruption-4366 01/14/23 04:37:30.987 +STEP: deleting a collection of PDBs 01/14/23 04:37:30.99 +STEP: Waiting for the PDB collection to be deleted 01/14/23 04:37:31.006 +[AfterEach] Listing PodDisruptionBudgets for all namespaces + test/e2e/framework/node/init/init.go:32 +Jan 14 04:37:31.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-apps] DisruptionController + test/e2e/framework/node/init/init.go:32 +Jan 14 04:37:31.014: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready +[DeferCleanup (Each)] Listing PodDisruptionBudgets for all namespaces + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] Listing PodDisruptionBudgets for all namespaces + dump namespaces | framework.go:196 +[DeferCleanup (Each)] Listing PodDisruptionBudgets for all namespaces + tear down framework | framework.go:193 +STEP: Destroying namespace "disruption-2-4459" for this suite. 01/14/23 04:37:31.018 +[DeferCleanup (Each)] [sig-apps] DisruptionController + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] DisruptionController + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] DisruptionController + tear down framework | framework.go:193 +STEP: Destroying namespace "disruption-4366" for this suite. 01/14/23 04:37:31.024 +------------------------------ +• [SLOW TEST] [6.218 seconds] +[sig-apps] DisruptionController +test/e2e/apps/framework.go:23 + Listing PodDisruptionBudgets for all namespaces + test/e2e/apps/disruption.go:78 + should list and delete a collection of PodDisruptionBudgets [Conformance] + test/e2e/apps/disruption.go:87 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] DisruptionController + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:37:24.812 + Jan 14 04:37:24.812: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename disruption 01/14/23 04:37:24.813 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:37:24.852 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:37:24.855 + [BeforeEach] [sig-apps] DisruptionController + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] DisruptionController + test/e2e/apps/disruption.go:72 + [BeforeEach] Listing PodDisruptionBudgets for all namespaces + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:37:24.86 + Jan 14 04:37:24.860: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename disruption-2 01/14/23 04:37:24.861 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:37:24.918 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:37:24.92 + [BeforeEach] Listing PodDisruptionBudgets for all namespaces + test/e2e/framework/metrics/init/init.go:31 + [It] should list and delete a collection of PodDisruptionBudgets [Conformance] + test/e2e/apps/disruption.go:87 + STEP: Waiting for the pdb to be processed 01/14/23 04:37:24.948 + STEP: Waiting for the pdb to be processed 01/14/23 04:37:26.963 + STEP: Waiting for the pdb to be processed 01/14/23 04:37:28.976 + STEP: listing a collection of PDBs across all namespaces 01/14/23 04:37:30.984 + STEP: listing a collection of PDBs in namespace disruption-4366 01/14/23 04:37:30.987 + STEP: deleting a collection of PDBs 01/14/23 04:37:30.99 + STEP: Waiting for the PDB collection to be deleted 01/14/23 04:37:31.006 + [AfterEach] Listing PodDisruptionBudgets for all namespaces + test/e2e/framework/node/init/init.go:32 + Jan 14 04:37:31.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-apps] DisruptionController + test/e2e/framework/node/init/init.go:32 + Jan 14 04:37:31.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] Listing PodDisruptionBudgets for all namespaces + test/e2e/framework/metrics/init/init.go:33 + 
[DeferCleanup (Each)] Listing PodDisruptionBudgets for all namespaces + dump namespaces | framework.go:196 + [DeferCleanup (Each)] Listing PodDisruptionBudgets for all namespaces + tear down framework | framework.go:193 + STEP: Destroying namespace "disruption-2-4459" for this suite. 01/14/23 04:37:31.018 + [DeferCleanup (Each)] [sig-apps] DisruptionController + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] DisruptionController + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] DisruptionController + tear down framework | framework.go:193 + STEP: Destroying namespace "disruption-4366" for this suite. 01/14/23 04:37:31.024 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:47 +[BeforeEach] [sig-storage] Projected configMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:37:31.03 +Jan 14 04:37:31.030: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename projected 01/14/23 04:37:31.031 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:37:31.044 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:37:31.047 +[BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:47 +STEP: Creating configMap with name projected-configmap-test-volume-ec72b0e8-558e-4fc0-a500-3d885f36dd4c 01/14/23 04:37:31.049 +STEP: Creating a pod to test consume configMaps 01/14/23 04:37:31.055 +Jan 14 04:37:31.064: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-33148bad-4f31-4bd3-840b-f207ecb0994f" in namespace "projected-1588" to be "Succeeded or Failed" +Jan 14 04:37:31.072: INFO: Pod "pod-projected-configmaps-33148bad-4f31-4bd3-840b-f207ecb0994f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.498283ms +Jan 14 04:37:33.077: INFO: Pod "pod-projected-configmaps-33148bad-4f31-4bd3-840b-f207ecb0994f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012936938s +Jan 14 04:37:35.077: INFO: Pod "pod-projected-configmaps-33148bad-4f31-4bd3-840b-f207ecb0994f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013193456s +STEP: Saw pod success 01/14/23 04:37:35.077 +Jan 14 04:37:35.078: INFO: Pod "pod-projected-configmaps-33148bad-4f31-4bd3-840b-f207ecb0994f" satisfied condition "Succeeded or Failed" +Jan 14 04:37:35.081: INFO: Trying to get logs from node 10.0.1.106 pod pod-projected-configmaps-33148bad-4f31-4bd3-840b-f207ecb0994f container agnhost-container: +STEP: delete the pod 01/14/23 04:37:35.093 +Jan 14 04:37:35.107: INFO: Waiting for pod pod-projected-configmaps-33148bad-4f31-4bd3-840b-f207ecb0994f to disappear +Jan 14 04:37:35.110: INFO: Pod pod-projected-configmaps-33148bad-4f31-4bd3-840b-f207ecb0994f no longer exists +[AfterEach] [sig-storage] Projected configMap + test/e2e/framework/node/init/init.go:32 +Jan 14 04:37:35.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected configMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected configMap + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-1588" for this suite. 01/14/23 04:37:35.115 +------------------------------ +• [4.090 seconds] +[sig-storage] Projected configMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:47 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected configMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:37:31.03 + Jan 14 04:37:31.030: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename projected 01/14/23 04:37:31.031 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:37:31.044 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:37:31.047 + [BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:47 + STEP: Creating configMap with name projected-configmap-test-volume-ec72b0e8-558e-4fc0-a500-3d885f36dd4c 01/14/23 04:37:31.049 + STEP: Creating a pod to test consume configMaps 01/14/23 04:37:31.055 + Jan 14 04:37:31.064: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-33148bad-4f31-4bd3-840b-f207ecb0994f" in namespace "projected-1588" to be "Succeeded or Failed" + Jan 14 04:37:31.072: INFO: Pod "pod-projected-configmaps-33148bad-4f31-4bd3-840b-f207ecb0994f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.498283ms + Jan 14 04:37:33.077: INFO: Pod "pod-projected-configmaps-33148bad-4f31-4bd3-840b-f207ecb0994f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012936938s + Jan 14 04:37:35.077: INFO: Pod "pod-projected-configmaps-33148bad-4f31-4bd3-840b-f207ecb0994f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013193456s + STEP: Saw pod success 01/14/23 04:37:35.077 + Jan 14 04:37:35.078: INFO: Pod "pod-projected-configmaps-33148bad-4f31-4bd3-840b-f207ecb0994f" satisfied condition "Succeeded or Failed" + Jan 14 04:37:35.081: INFO: Trying to get logs from node 10.0.1.106 pod pod-projected-configmaps-33148bad-4f31-4bd3-840b-f207ecb0994f container agnhost-container: + STEP: delete the pod 01/14/23 04:37:35.093 + Jan 14 04:37:35.107: INFO: Waiting for pod pod-projected-configmaps-33148bad-4f31-4bd3-840b-f207ecb0994f to disappear + Jan 14 04:37:35.110: INFO: Pod pod-projected-configmaps-33148bad-4f31-4bd3-840b-f207ecb0994f no longer exists + [AfterEach] [sig-storage] Projected configMap + test/e2e/framework/node/init/init.go:32 + Jan 14 04:37:35.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected configMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected configMap + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-1588" for this suite. 01/14/23 04:37:35.115 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate custom resource [Conformance] + test/e2e/apimachinery/webhook.go:291 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:37:35.122 +Jan 14 04:37:35.122: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename webhook 01/14/23 04:37:35.122 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:37:35.138 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:37:35.14 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 01/14/23 04:37:35.154 +STEP: Create role binding to let webhook read extension-apiserver-authentication 01/14/23 04:37:35.549 +STEP: Deploying the webhook pod 01/14/23 04:37:35.558 +STEP: Wait for the deployment to be ready 01/14/23 04:37:35.569 +Jan 14 04:37:35.577: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 01/14/23 04:37:37.588 +STEP: Verifying the service has paired with the endpoint 01/14/23 04:37:37.597 +Jan 14 04:37:38.597: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource [Conformance] + test/e2e/apimachinery/webhook.go:291 +Jan 14 04:37:38.601: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4217-crds.webhook.example.com via the AdmissionRegistration API 01/14/23 04:37:39.116 +STEP: Creating a custom resource that should be mutated by the webhook 01/14/23 04:37:39.13 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Jan 14 04:37:41.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + 
test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "webhook-8379" for this suite. 01/14/23 04:37:41.763 +STEP: Destroying namespace "webhook-8379-markers" for this suite. 01/14/23 04:37:41.768 +------------------------------ +• [SLOW TEST] [6.653 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should mutate custom resource [Conformance] + test/e2e/apimachinery/webhook.go:291 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:37:35.122 + Jan 14 04:37:35.122: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename webhook 01/14/23 04:37:35.122 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:37:35.138 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:37:35.14 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 01/14/23 04:37:35.154 + STEP: Create role binding to let webhook read extension-apiserver-authentication 01/14/23 04:37:35.549 + STEP: Deploying the webhook pod 01/14/23 04:37:35.558 + STEP: Wait for the deployment to be ready 01/14/23 04:37:35.569 + Jan 14 04:37:35.577: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 01/14/23 04:37:37.588 + STEP: Verifying the service has paired with the endpoint 01/14/23 04:37:37.597 + Jan 14 04:37:38.597: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should mutate custom resource [Conformance] + test/e2e/apimachinery/webhook.go:291 + Jan 14 04:37:38.601: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4217-crds.webhook.example.com via the AdmissionRegistration API 01/14/23 04:37:39.116 + STEP: Creating a custom resource that should be mutated by the webhook 01/14/23 04:37:39.13 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Jan 14 04:37:41.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "webhook-8379" for this suite. 01/14/23 04:37:41.763 + STEP: Destroying namespace "webhook-8379-markers" for this suite. 
01/14/23 04:37:41.768 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSS +------------------------------ +[sig-apps] Deployment + Deployment should have a working scale subresource [Conformance] + test/e2e/apps/deployment.go:150 +[BeforeEach] [sig-apps] Deployment + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:37:41.775 +Jan 14 04:37:41.775: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename deployment 01/14/23 04:37:41.776 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:37:41.793 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:37:41.795 +[BeforeEach] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 +[It] Deployment should have a working scale subresource [Conformance] + test/e2e/apps/deployment.go:150 +Jan 14 04:37:41.798: INFO: Creating simple deployment test-new-deployment +Jan 14 04:37:41.811: INFO: deployment "test-new-deployment" doesn't have the required revision set +STEP: getting scale subresource 01/14/23 04:37:43.824 +STEP: updating a scale subresource 01/14/23 04:37:43.827 +STEP: verifying the deployment Spec.Replicas was modified 01/14/23 04:37:43.833 +STEP: Patch a scale subresource 01/14/23 04:37:43.836 +[AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 +Jan 14 04:37:43.858: INFO: Deployment "test-new-deployment": +&Deployment{ObjectMeta:{test-new-deployment deployment-4176 30b72fe5-3f21-4489-81c6-af56ae0560ab 447586 3 2023-01-14 04:37:41 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 FieldsV1 {"f:spec":{"f:replicas":{}}} scale} {e2e.test Update apps/v1 2023-01-14 04:37:41 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 04:37:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004c787b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-01-14 04:37:43 +0000 UTC,LastTransitionTime:2023-01-14 04:37:43 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-7f5969cbc7" has successfully progressed.,LastUpdateTime:2023-01-14 04:37:43 +0000 UTC,LastTransitionTime:2023-01-14 04:37:41 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Jan 14 04:37:43.861: INFO: New ReplicaSet "test-new-deployment-7f5969cbc7" of Deployment "test-new-deployment": +&ReplicaSet{ObjectMeta:{test-new-deployment-7f5969cbc7 deployment-4176 24c90947-d4ae-48a4-89cc-3a6c8cc5ab2a 447590 2 2023-01-14 04:37:41 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment 30b72fe5-3f21-4489-81c6-af56ae0560ab 0xc003f9cd77 0xc003f9cd78}] [] [{kube-controller-manager Update apps/v1 2023-01-14 04:37:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"30b72fe5-3f21-4489-81c6-af56ae0560ab\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 04:37:43 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 7f5969cbc7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003f9ce08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Jan 14 04:37:43.868: INFO: Pod "test-new-deployment-7f5969cbc7-nrgnw" is not available: +&Pod{ObjectMeta:{test-new-deployment-7f5969cbc7-nrgnw test-new-deployment-7f5969cbc7- deployment-4176 ff9ade22-ee71-48a7-8f24-9286e930754b 447594 0 2023-01-14 04:37:43 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [{apps/v1 ReplicaSet test-new-deployment-7f5969cbc7 24c90947-d4ae-48a4-89cc-3a6c8cc5ab2a 0xc00467cfc7 0xc00467cfc8}] [] [{kube-controller-manager Update v1 2023-01-14 04:37:43 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24c90947-d4ae-48a4-89cc-3a6c8cc5ab2a\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 04:37:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-x6wjq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x6wjq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.99,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},Reso
urceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:37:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:37:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:37:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:37:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.99,PodIP:,StartTime:2023-01-14 04:37:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jan 14 04:37:43.868: INFO: Pod "test-new-deployment-7f5969cbc7-wdhnx" is available: +&Pod{ObjectMeta:{test-new-deployment-7f5969cbc7-wdhnx test-new-deployment-7f5969cbc7- deployment-4176 7bc9ddc3-69c5-4c57-8a3e-9b2cce59bdc5 447579 0 2023-01-14 04:37:41 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.1.43" + ], + "mac": "5e:e3:e1:c4:58:8d", + "default": true, + "dns": {} +}]] [{apps/v1 ReplicaSet test-new-deployment-7f5969cbc7 24c90947-d4ae-48a4-89cc-3a6c8cc5ab2a 0xc00467d190 0xc00467d191}] [] [{kube-controller-manager Update v1 2023-01-14 04:37:41 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24c90947-d4ae-48a4-89cc-3a6c8cc5ab2a\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {multus Update v1 2023-01-14 04:37:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}} status} {kubelet Update v1 2023-01-14 04:37:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.52.1.43\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wz5ts,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wz5ts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.106,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},Res
ourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:37:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:37:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:37:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:37:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.106,PodIP:10.52.1.43,StartTime:2023-01-14 04:37:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 04:37:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:containerd://9ac5594100677ff440e3f9a4d6abc1cc6379b0350057debee3fafe2f94edbe88,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.52.1.43,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + test/e2e/framework/node/init/init.go:32 +Jan 14 04:37:43.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] Deployment + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] Deployment + tear down framework | framework.go:193 +STEP: Destroying namespace "deployment-4176" for this suite. 
01/14/23 04:37:43.876 +------------------------------ +• [2.105 seconds] +[sig-apps] Deployment +test/e2e/apps/framework.go:23 + Deployment should have a working scale subresource [Conformance] + test/e2e/apps/deployment.go:150 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Deployment + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:37:41.775 + Jan 14 04:37:41.775: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename deployment 01/14/23 04:37:41.776 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:37:41.793 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:37:41.795 + [BeforeEach] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 + [It] Deployment should have a working scale subresource [Conformance] + test/e2e/apps/deployment.go:150 + Jan 14 04:37:41.798: INFO: Creating simple deployment test-new-deployment + Jan 14 04:37:41.811: INFO: deployment "test-new-deployment" doesn't have the required revision set + STEP: getting scale subresource 01/14/23 04:37:43.824 + STEP: updating a scale subresource 01/14/23 04:37:43.827 + STEP: verifying the deployment Spec.Replicas was modified 01/14/23 04:37:43.833 + STEP: Patch a scale subresource 01/14/23 04:37:43.836 + [AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 + Jan 14 04:37:43.858: INFO: Deployment "test-new-deployment": + &Deployment{ObjectMeta:{test-new-deployment deployment-4176 30b72fe5-3f21-4489-81c6-af56ae0560ab 447586 3 2023-01-14 04:37:41 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 FieldsV1 {"f:spec":{"f:replicas":{}}} scale} {e2e.test Update apps/v1 2023-01-14 04:37:41 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 04:37:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004c787b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-01-14 04:37:43 +0000 UTC,LastTransitionTime:2023-01-14 04:37:43 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-7f5969cbc7" has successfully progressed.,LastUpdateTime:2023-01-14 04:37:43 +0000 UTC,LastTransitionTime:2023-01-14 04:37:41 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + + Jan 14 04:37:43.861: INFO: New ReplicaSet "test-new-deployment-7f5969cbc7" of Deployment "test-new-deployment": + &ReplicaSet{ObjectMeta:{test-new-deployment-7f5969cbc7 deployment-4176 24c90947-d4ae-48a4-89cc-3a6c8cc5ab2a 447590 2 2023-01-14 04:37:41 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment 30b72fe5-3f21-4489-81c6-af56ae0560ab 0xc003f9cd77 0xc003f9cd78}] [] [{kube-controller-manager Update apps/v1 2023-01-14 04:37:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"30b72fe5-3f21-4489-81c6-af56ae0560ab\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 04:37:43 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 7f5969cbc7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003f9ce08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} + Jan 14 04:37:43.868: INFO: Pod "test-new-deployment-7f5969cbc7-nrgnw" is not available: + &Pod{ObjectMeta:{test-new-deployment-7f5969cbc7-nrgnw test-new-deployment-7f5969cbc7- deployment-4176 ff9ade22-ee71-48a7-8f24-9286e930754b 447594 0 2023-01-14 04:37:43 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [{apps/v1 ReplicaSet test-new-deployment-7f5969cbc7 24c90947-d4ae-48a4-89cc-3a6c8cc5ab2a 0xc00467cfc7 0xc00467cfc8}] [] [{kube-controller-manager Update v1 2023-01-14 04:37:43 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24c90947-d4ae-48a4-89cc-3a6c8cc5ab2a\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 04:37:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-x6wjq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x6wjq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.99,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},Reso
urceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:37:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:37:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:37:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:37:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.99,PodIP:,StartTime:2023-01-14 04:37:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jan 14 04:37:43.868: INFO: Pod "test-new-deployment-7f5969cbc7-wdhnx" is available: + &Pod{ObjectMeta:{test-new-deployment-7f5969cbc7-wdhnx test-new-deployment-7f5969cbc7- deployment-4176 7bc9ddc3-69c5-4c57-8a3e-9b2cce59bdc5 447579 0 2023-01-14 04:37:41 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.1.43" + ], + "mac": "5e:e3:e1:c4:58:8d", + "default": true, + "dns": {} + }]] [{apps/v1 ReplicaSet test-new-deployment-7f5969cbc7 24c90947-d4ae-48a4-89cc-3a6c8cc5ab2a 0xc00467d190 0xc00467d191}] [] [{kube-controller-manager Update v1 2023-01-14 04:37:41 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"24c90947-d4ae-48a4-89cc-3a6c8cc5ab2a\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {multus Update v1 2023-01-14 04:37:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}} status} {kubelet Update v1 2023-01-14 04:37:43 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.52.1.43\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wz5ts,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wz5ts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.106,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},Res
ourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:37:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:37:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:37:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:37:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.106,PodIP:10.52.1.43,StartTime:2023-01-14 04:37:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 04:37:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:containerd://9ac5594100677ff440e3f9a4d6abc1cc6379b0350057debee3fafe2f94edbe88,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.52.1.43,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + [AfterEach] [sig-apps] Deployment + test/e2e/framework/node/init/init.go:32 + Jan 14 04:37:43.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] Deployment + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] Deployment + tear down framework | framework.go:193 + STEP: Destroying namespace "deployment-4176" for this suite. 
01/14/23 04:37:43.876 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-apps] DisruptionController + should observe PodDisruptionBudget status updated [Conformance] + test/e2e/apps/disruption.go:141 +[BeforeEach] [sig-apps] DisruptionController + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:37:43.881 +Jan 14 04:37:43.881: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename disruption 01/14/23 04:37:43.882 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:37:43.901 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:37:43.904 +[BeforeEach] [sig-apps] DisruptionController + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] DisruptionController + test/e2e/apps/disruption.go:72 +[It] should observe PodDisruptionBudget status updated [Conformance] + test/e2e/apps/disruption.go:141 +STEP: Waiting for the pdb to be processed 01/14/23 04:37:43.914 +STEP: Waiting for all pods to be running 01/14/23 04:37:45.952 +Jan 14 04:37:45.959: INFO: running pods: 0 < 3 +[AfterEach] [sig-apps] DisruptionController + test/e2e/framework/node/init/init.go:32 +Jan 14 04:37:47.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] DisruptionController + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] DisruptionController + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] DisruptionController + tear down framework | framework.go:193 +STEP: Destroying namespace "disruption-6040" for this suite. 01/14/23 04:37:47.972 +------------------------------ +• [4.098 seconds] +[sig-apps] DisruptionController +test/e2e/apps/framework.go:23 + should observe PodDisruptionBudget status updated [Conformance] + test/e2e/apps/disruption.go:141 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] DisruptionController + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:37:43.881 + Jan 14 04:37:43.881: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename disruption 01/14/23 04:37:43.882 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:37:43.901 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:37:43.904 + [BeforeEach] [sig-apps] DisruptionController + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] DisruptionController + test/e2e/apps/disruption.go:72 + [It] should observe PodDisruptionBudget status updated [Conformance] + test/e2e/apps/disruption.go:141 + STEP: Waiting for the pdb to be processed 01/14/23 04:37:43.914 + STEP: Waiting for all pods to be running 01/14/23 04:37:45.952 + Jan 14 04:37:45.959: INFO: running pods: 0 < 3 + [AfterEach] [sig-apps] DisruptionController + test/e2e/framework/node/init/init.go:32 + Jan 14 04:37:47.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] DisruptionController + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] DisruptionController + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] DisruptionController + tear down framework | framework.go:193 + STEP: Destroying namespace "disruption-6040" for this suite. 
01/14/23 04:37:47.972 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:147 +[BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:37:47.979 +Jan 14 04:37:47.979: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename emptydir 01/14/23 04:37:47.98 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:37:48.011 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:37:48.014 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 +[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:147 +STEP: Creating a pod to test emptydir 0777 on tmpfs 01/14/23 04:37:48.017 +Jan 14 04:37:48.028: INFO: Waiting up to 5m0s for pod "pod-9e244efc-1a93-4927-b3af-0149887d1b93" in namespace "emptydir-902" to be "Succeeded or Failed" +Jan 14 04:37:48.031: INFO: Pod "pod-9e244efc-1a93-4927-b3af-0149887d1b93": Phase="Pending", Reason="", readiness=false. Elapsed: 3.158047ms +Jan 14 04:37:50.037: INFO: Pod "pod-9e244efc-1a93-4927-b3af-0149887d1b93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009142934s +Jan 14 04:37:52.037: INFO: Pod "pod-9e244efc-1a93-4927-b3af-0149887d1b93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008781662s +STEP: Saw pod success 01/14/23 04:37:52.037 +Jan 14 04:37:52.037: INFO: Pod "pod-9e244efc-1a93-4927-b3af-0149887d1b93" satisfied condition "Succeeded or Failed" +Jan 14 04:37:52.040: INFO: Trying to get logs from node 10.0.1.99 pod pod-9e244efc-1a93-4927-b3af-0149887d1b93 container test-container: +STEP: delete the pod 01/14/23 04:37:52.046 +Jan 14 04:37:52.059: INFO: Waiting for pod pod-9e244efc-1a93-4927-b3af-0149887d1b93 to disappear +Jan 14 04:37:52.062: INFO: Pod pod-9e244efc-1a93-4927-b3af-0149887d1b93 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 +Jan 14 04:37:52.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 +STEP: Destroying namespace "emptydir-902" for this suite. 
01/14/23 04:37:52.067 +------------------------------ +• [4.093 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:147 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:37:47.979 + Jan 14 04:37:47.979: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename emptydir 01/14/23 04:37:47.98 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:37:48.011 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:37:48.014 + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 + [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:147 + STEP: Creating a pod to test emptydir 0777 on tmpfs 01/14/23 04:37:48.017 + Jan 14 04:37:48.028: INFO: Waiting up to 5m0s for pod "pod-9e244efc-1a93-4927-b3af-0149887d1b93" in namespace "emptydir-902" to be "Succeeded or Failed" + Jan 14 04:37:48.031: INFO: Pod "pod-9e244efc-1a93-4927-b3af-0149887d1b93": Phase="Pending", Reason="", readiness=false. Elapsed: 3.158047ms + Jan 14 04:37:50.037: INFO: Pod "pod-9e244efc-1a93-4927-b3af-0149887d1b93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009142934s + Jan 14 04:37:52.037: INFO: Pod "pod-9e244efc-1a93-4927-b3af-0149887d1b93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008781662s + STEP: Saw pod success 01/14/23 04:37:52.037 + Jan 14 04:37:52.037: INFO: Pod "pod-9e244efc-1a93-4927-b3af-0149887d1b93" satisfied condition "Succeeded or Failed" + Jan 14 04:37:52.040: INFO: Trying to get logs from node 10.0.1.99 pod pod-9e244efc-1a93-4927-b3af-0149887d1b93 container test-container: + STEP: delete the pod 01/14/23 04:37:52.046 + Jan 14 04:37:52.059: INFO: Waiting for pod pod-9e244efc-1a93-4927-b3af-0149887d1b93 to disappear + Jan 14 04:37:52.062: INFO: Pod pod-9e244efc-1a93-4927-b3af-0149887d1b93 no longer exists + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 + Jan 14 04:37:52.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 + STEP: Destroying namespace "emptydir-902" for this suite. 
01/14/23 04:37:52.067 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-api-machinery] Garbage collector + should delete RS created by deployment when not orphaning [Conformance] + test/e2e/apimachinery/garbage_collector.go:491 +[BeforeEach] [sig-api-machinery] Garbage collector + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:37:52.073 +Jan 14 04:37:52.073: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename gc 01/14/23 04:37:52.073 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:37:52.096 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:37:52.1 +[BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:31 +[It] should delete RS created by deployment when not orphaning [Conformance] + test/e2e/apimachinery/garbage_collector.go:491 +STEP: create the deployment 01/14/23 04:37:52.108 +STEP: Wait for the Deployment to create new ReplicaSet 01/14/23 04:37:52.114 +STEP: delete the deployment 01/14/23 04:37:52.622 +STEP: wait for all rs to be garbage collected 01/14/23 04:37:52.628 +STEP: expected 0 rs, got 1 rs 01/14/23 04:37:52.631 +STEP: expected 0 pods, got 2 pods 01/14/23 04:37:52.639 +STEP: Gathering metrics 01/14/23 04:37:53.152 +Jan 14 04:37:53.177: INFO: Waiting up to 5m0s for pod "kube-controller-manager-10.0.1.231" in namespace "kube-system" to be "running and ready" +Jan 14 04:37:53.180: INFO: Pod "kube-controller-manager-10.0.1.231": Phase="Running", Reason="", readiness=true. Elapsed: 3.268692ms +Jan 14 04:37:53.180: INFO: The phase of Pod kube-controller-manager-10.0.1.231 is Running (Ready = true) +Jan 14 04:37:53.180: INFO: Pod "kube-controller-manager-10.0.1.231" satisfied condition "running and ready" +Jan 14 04:37:53.235: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/node/init/init.go:32 +Jan 14 04:37:53.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector + tear down framework | framework.go:193 +STEP: Destroying namespace "gc-7946" for this suite. 
01/14/23 04:37:53.24 +------------------------------ +• [1.173 seconds] +[sig-api-machinery] Garbage collector +test/e2e/apimachinery/framework.go:23 + should delete RS created by deployment when not orphaning [Conformance] + test/e2e/apimachinery/garbage_collector.go:491 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Garbage collector + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:37:52.073 + Jan 14 04:37:52.073: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename gc 01/14/23 04:37:52.073 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:37:52.096 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:37:52.1 + [BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:31 + [It] should delete RS created by deployment when not orphaning [Conformance] + test/e2e/apimachinery/garbage_collector.go:491 + STEP: create the deployment 01/14/23 04:37:52.108 + STEP: Wait for the Deployment to create new ReplicaSet 01/14/23 04:37:52.114 + STEP: delete the deployment 01/14/23 04:37:52.622 + STEP: wait for all rs to be garbage collected 01/14/23 04:37:52.628 + STEP: expected 0 rs, got 1 rs 01/14/23 04:37:52.631 + STEP: expected 0 pods, got 2 pods 01/14/23 04:37:52.639 + STEP: Gathering metrics 01/14/23 04:37:53.152 + Jan 14 04:37:53.177: INFO: Waiting up to 5m0s for pod "kube-controller-manager-10.0.1.231" in namespace "kube-system" to be "running and ready" + Jan 14 04:37:53.180: INFO: Pod "kube-controller-manager-10.0.1.231": Phase="Running", Reason="", readiness=true. Elapsed: 3.268692ms + Jan 14 04:37:53.180: INFO: The phase of Pod kube-controller-manager-10.0.1.231 is Running (Ready = true) + Jan 14 04:37:53.180: INFO: Pod "kube-controller-manager-10.0.1.231" satisfied condition "running and ready" + Jan 14 04:37:53.235: INFO: For apiserver_request_total: + For apiserver_request_latency_seconds: + For apiserver_init_events_total: + For garbage_collector_attempt_to_delete_queue_latency: + For garbage_collector_attempt_to_delete_work_duration: + For garbage_collector_attempt_to_orphan_queue_latency: + For garbage_collector_attempt_to_orphan_work_duration: + For garbage_collector_dirty_processing_latency_microseconds: + For garbage_collector_event_processing_latency_microseconds: + For garbage_collector_graph_changes_queue_latency: + For garbage_collector_graph_changes_work_duration: + For garbage_collector_orphan_processing_latency_microseconds: + For namespace_queue_latency: + For namespace_queue_latency_sum: + For namespace_queue_latency_count: + For namespace_retries: + For namespace_work_duration: + For namespace_work_duration_sum: + For namespace_work_duration_count: + For function_duration_seconds: + For errors_total: + For evicted_pods_total: + + [AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/node/init/init.go:32 + Jan 14 04:37:53.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + tear down framework | framework.go:193 + STEP: Destroying namespace "gc-7946" for this suite. 
01/14/23 04:37:53.24 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Aggregator + Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + test/e2e/apimachinery/aggregator.go:100 +[BeforeEach] [sig-api-machinery] Aggregator + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:37:53.246 +Jan 14 04:37:53.246: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename aggregator 01/14/23 04:37:53.247 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:37:53.267 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:37:53.27 +[BeforeEach] [sig-api-machinery] Aggregator + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] Aggregator + test/e2e/apimachinery/aggregator.go:78 +Jan 14 04:37:53.272: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + test/e2e/apimachinery/aggregator.go:100 +STEP: Registering the sample API server. 01/14/23 04:37:53.273 +Jan 14 04:37:53.732: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set +Jan 14 04:37:55.770: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jan 14 04:37:57.775: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jan 14 04:37:59.775: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jan 14 04:38:01.777: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jan 14 04:38:03.775: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jan 14 04:38:05.775: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jan 14 04:38:07.776: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jan 14 04:38:09.774: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jan 14 04:38:11.774: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jan 14 04:38:13.775: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jan 14 04:38:15.774: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jan 14 04:38:17.775: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jan 14 04:38:19.775: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jan 14 04:38:21.775: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jan 14 04:38:23.775: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jan 14 04:38:25.775: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jan 14 04:38:27.775: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jan 14 04:38:29.775: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 37, 53, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-55bd96fd47\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jan 14 04:38:31.902: INFO: Waited 119.758752ms for the sample-apiserver to be ready to handle requests. 
+STEP: Read Status for v1alpha1.wardle.example.com 01/14/23 04:38:31.944 +STEP: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}' 01/14/23 04:38:31.954 +STEP: List APIServices 01/14/23 04:38:31.982 +Jan 14 04:38:31.988: INFO: Found v1alpha1.wardle.example.com in APIServiceList +[AfterEach] [sig-api-machinery] Aggregator + test/e2e/apimachinery/aggregator.go:68 +[AfterEach] [sig-api-machinery] Aggregator + test/e2e/framework/node/init/init.go:32 +Jan 14 04:38:32.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Aggregator + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] Aggregator + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] Aggregator + tear down framework | framework.go:193 +STEP: Destroying namespace "aggregator-3323" for this suite. 01/14/23 04:38:32.361 +------------------------------ +• [SLOW TEST] [39.148 seconds] +[sig-api-machinery] Aggregator +test/e2e/apimachinery/framework.go:23 + Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + test/e2e/apimachinery/aggregator.go:100 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Aggregator + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:37:53.246 + Jan 14 04:37:53.246: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename aggregator 01/14/23 04:37:53.247 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:37:53.267 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:37:53.27 + [BeforeEach] [sig-api-machinery] Aggregator + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] Aggregator + test/e2e/apimachinery/aggregator.go:78 + Jan 14 04:37:53.272: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] + test/e2e/apimachinery/aggregator.go:100 + STEP: Registering the sample API server. 
+------------------------------
+S
+------------------------------
+[sig-storage] EmptyDir wrapper volumes
+ should not conflict [Conformance]
+ test/e2e/storage/empty_dir_wrapper.go:67
+[BeforeEach] [sig-storage] EmptyDir wrapper volumes
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 04:38:32.396
+Jan 14 04:38:32.396: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename emptydir-wrapper 01/14/23 04:38:32.397
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:38:32.413
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:38:32.415
+[BeforeEach] [sig-storage] EmptyDir wrapper volumes
+ test/e2e/framework/metrics/init/init.go:31
+[It] should not conflict [Conformance]
+ test/e2e/storage/empty_dir_wrapper.go:67
+Jan 14 04:38:32.437: INFO: Waiting up to 5m0s for pod "pod-secrets-1ab9d430-88e1-460a-a2ad-47532ce4b01c" in namespace "emptydir-wrapper-6406" to be "running and ready"
+Jan 14 04:38:32.440: INFO: Pod "pod-secrets-1ab9d430-88e1-460a-a2ad-47532ce4b01c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.941392ms
+Jan 14 04:38:32.440: INFO: The phase of Pod pod-secrets-1ab9d430-88e1-460a-a2ad-47532ce4b01c is Pending, waiting for it to be Running (with Ready = true)
+Jan 14 04:38:34.446: INFO: Pod "pod-secrets-1ab9d430-88e1-460a-a2ad-47532ce4b01c": Phase="Running", Reason="", readiness=true. Elapsed: 2.00845338s
+Jan 14 04:38:34.446: INFO: The phase of Pod pod-secrets-1ab9d430-88e1-460a-a2ad-47532ce4b01c is Running (Ready = true)
+Jan 14 04:38:34.446: INFO: Pod "pod-secrets-1ab9d430-88e1-460a-a2ad-47532ce4b01c" satisfied condition "running and ready"
+STEP: Cleaning up the secret 01/14/23 04:38:34.449
+STEP: Cleaning up the configmap 01/14/23 04:38:34.455
+STEP: Cleaning up the pod 01/14/23 04:38:34.463
+[AfterEach] [sig-storage] EmptyDir wrapper volumes
+ test/e2e/framework/node/init/init.go:32
+Jan 14 04:38:34.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-storage] EmptyDir wrapper volumes
+ tear down framework | framework.go:193
+STEP: Destroying namespace "emptydir-wrapper-6406" for this suite. 01/14/23 04:38:34.484
+------------------------------
+• [2.093 seconds]
+[sig-storage] EmptyDir wrapper volumes
+test/e2e/storage/utils/framework.go:23
+ should not conflict [Conformance]
+ test/e2e/storage/empty_dir_wrapper.go:67
+------------------------------
+SSSSSS
+------------------------------
+[sig-api-machinery] ResourceQuota
+ should create a ResourceQuota and capture the life of a configMap. [Conformance]
+ test/e2e/apimachinery/resource_quota.go:326
+[BeforeEach] [sig-api-machinery] ResourceQuota
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 04:38:34.49
+Jan 14 04:38:34.490: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename resourcequota 01/14/23 04:38:34.491
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:38:34.506
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:38:34.508
+[BeforeEach] [sig-api-machinery] ResourceQuota
+ test/e2e/framework/metrics/init/init.go:31
+[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
+ test/e2e/apimachinery/resource_quota.go:326
+STEP: Counting existing ResourceQuota 01/14/23 04:38:51.516
+STEP: Creating a ResourceQuota 01/14/23 04:38:56.52
+STEP: Ensuring resource quota status is calculated 01/14/23 04:38:56.526
+STEP: Creating a ConfigMap 01/14/23 04:38:58.531
+STEP: Ensuring resource quota status captures configMap creation 01/14/23 04:38:58.541
+STEP: Deleting a ConfigMap 01/14/23 04:39:00.546
+STEP: Ensuring resource quota status released usage 01/14/23 04:39:00.552
+[AfterEach] [sig-api-machinery] ResourceQuota
+ test/e2e/framework/node/init/init.go:32
+Jan 14 04:39:02.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota
+ tear down framework | framework.go:193
+STEP: Destroying namespace "resourcequota-4645" for this suite. 01/14/23 04:39:02.562
+------------------------------
+• [SLOW TEST] [28.077 seconds]
+[sig-api-machinery] ResourceQuota
+test/e2e/apimachinery/framework.go:23
+ should create a ResourceQuota and capture the life of a configMap. [Conformance]
+ test/e2e/apimachinery/resource_quota.go:326
+------------------------------
+SSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Subpath Atomic writer volumes
+ should support subpaths with projected pod [Conformance]
+ test/e2e/storage/subpath.go:106
+[BeforeEach] [sig-storage] Subpath
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 04:39:02.568
+Jan 14 04:39:02.568: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename subpath 01/14/23 04:39:02.569
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:39:02.59
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:39:02.592
+[BeforeEach] [sig-storage] Subpath
+ test/e2e/framework/metrics/init/init.go:31
+[BeforeEach] Atomic writer volumes
+ test/e2e/storage/subpath.go:40
+STEP: Setting up data 01/14/23 04:39:02.595
+[It] should support subpaths with projected pod [Conformance]
+ test/e2e/storage/subpath.go:106
+STEP: Creating pod pod-subpath-test-projected-7nxw 01/14/23 04:39:02.603
+STEP: Creating a pod to test atomic-volume-subpath 01/14/23 04:39:02.603
+Jan 14 04:39:02.613: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-7nxw" in namespace "subpath-7994" to be "Succeeded or Failed"
+Jan 14 04:39:02.616: INFO: Pod "pod-subpath-test-projected-7nxw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.902074ms
+Jan 14 04:39:04.622: INFO: Pod "pod-subpath-test-projected-7nxw": Phase="Running", Reason="", readiness=true. Elapsed: 2.008458086s
+Jan 14 04:39:06.622: INFO: Pod "pod-subpath-test-projected-7nxw": Phase="Running", Reason="", readiness=true. Elapsed: 4.009125937s
+Jan 14 04:39:08.621: INFO: Pod "pod-subpath-test-projected-7nxw": Phase="Running", Reason="", readiness=true. Elapsed: 6.00746098s
+Jan 14 04:39:10.621: INFO: Pod "pod-subpath-test-projected-7nxw": Phase="Running", Reason="", readiness=true. Elapsed: 8.007978817s
+Jan 14 04:39:12.621: INFO: Pod "pod-subpath-test-projected-7nxw": Phase="Running", Reason="", readiness=true. Elapsed: 10.00730941s
+Jan 14 04:39:14.623: INFO: Pod "pod-subpath-test-projected-7nxw": Phase="Running", Reason="", readiness=true. Elapsed: 12.009234679s
+Jan 14 04:39:16.622: INFO: Pod "pod-subpath-test-projected-7nxw": Phase="Running", Reason="", readiness=true. Elapsed: 14.008945608s
+Jan 14 04:39:18.621: INFO: Pod "pod-subpath-test-projected-7nxw": Phase="Running", Reason="", readiness=true. Elapsed: 16.007281505s
+Jan 14 04:39:20.621: INFO: Pod "pod-subpath-test-projected-7nxw": Phase="Running", Reason="", readiness=true. Elapsed: 18.007382112s
+Jan 14 04:39:22.621: INFO: Pod "pod-subpath-test-projected-7nxw": Phase="Running", Reason="", readiness=true. Elapsed: 20.008153604s
+Jan 14 04:39:24.621: INFO: Pod "pod-subpath-test-projected-7nxw": Phase="Running", Reason="", readiness=false. Elapsed: 22.008064535s
+Jan 14 04:39:26.622: INFO: Pod "pod-subpath-test-projected-7nxw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.008527007s
+STEP: Saw pod success 01/14/23 04:39:26.622
+Jan 14 04:39:26.622: INFO: Pod "pod-subpath-test-projected-7nxw" satisfied condition "Succeeded or Failed"
+Jan 14 04:39:26.625: INFO: Trying to get logs from node 10.0.1.106 pod pod-subpath-test-projected-7nxw container test-container-subpath-projected-7nxw:
+STEP: delete the pod 01/14/23 04:39:26.64
+Jan 14 04:39:26.654: INFO: Waiting for pod pod-subpath-test-projected-7nxw to disappear
+Jan 14 04:39:26.657: INFO: Pod pod-subpath-test-projected-7nxw no longer exists
+STEP: Deleting pod pod-subpath-test-projected-7nxw 01/14/23 04:39:26.657
+Jan 14 04:39:26.657: INFO: Deleting pod "pod-subpath-test-projected-7nxw" in namespace "subpath-7994"
+[AfterEach] [sig-storage] Subpath
+ test/e2e/framework/node/init/init.go:32
+Jan 14 04:39:26.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-storage] Subpath
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-storage] Subpath
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-storage] Subpath
+ tear down framework | framework.go:193
+STEP: Destroying namespace "subpath-7994" for this suite. 01/14/23 04:39:26.665
+------------------------------
+• [SLOW TEST] [24.103 seconds]
+[sig-storage] Subpath
+test/e2e/storage/utils/framework.go:23
+ Atomic writer volumes
+ test/e2e/storage/subpath.go:36
+ should support subpaths with projected pod [Conformance]
+ test/e2e/storage/subpath.go:106
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-network] IngressClass API
+ should support creating IngressClass API operations [Conformance]
+ test/e2e/network/ingressclass.go:223
+[BeforeEach] [sig-network] IngressClass API
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 04:39:26.672
+Jan 14 04:39:26.672: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename ingressclass 01/14/23 04:39:26.673
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:39:26.687
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:39:26.689
+[BeforeEach] [sig-network] IngressClass API
+ test/e2e/framework/metrics/init/init.go:31
+[BeforeEach] [sig-network] IngressClass API
+ test/e2e/network/ingressclass.go:211
+[It] should support creating IngressClass API operations [Conformance]
+ test/e2e/network/ingressclass.go:223
+STEP: getting /apis 01/14/23 04:39:26.691
+STEP: getting /apis/networking.k8s.io 01/14/23 04:39:26.693
+STEP: getting /apis/networking.k8s.iov1 01/14/23 04:39:26.694
+STEP: creating 01/14/23 04:39:26.695
+STEP: getting 01/14/23 04:39:26.706
+STEP: listing 01/14/23 04:39:26.708
+STEP: watching 01/14/23 04:39:26.71
+Jan 14 04:39:26.710: INFO: starting watch
+STEP: patching 01/14/23 04:39:26.711
+STEP: updating 01/14/23 04:39:26.715
+Jan 14 04:39:26.719: INFO: waiting for watch events with expected annotations
+Jan 14 04:39:26.719: INFO: saw patched and updated annotations
+STEP: deleting 01/14/23 04:39:26.719
+STEP: deleting a collection 01/14/23 04:39:26.728
+[AfterEach] [sig-network] IngressClass API
+ test/e2e/framework/node/init/init.go:32
+Jan 14 04:39:26.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-network] IngressClass API
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-network] IngressClass API
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-network] IngressClass API
+ tear down framework | framework.go:193
+STEP: Destroying namespace "ingressclass-801" for this suite. 01/14/23 04:39:26.743
+------------------------------
+• [0.076 seconds]
+[sig-network] IngressClass API
+test/e2e/network/common/framework.go:23
+ should support creating IngressClass API operations [Conformance]
+ test/e2e/network/ingressclass.go:223
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-network] Proxy version v1
+ A set of valid responses are returned for both pod and service Proxy [Conformance]
+ test/e2e/network/proxy.go:380
+[BeforeEach] version v1
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 04:39:26.75
+Jan 14 04:39:26.750: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename proxy 01/14/23 04:39:26.751
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:39:26.764
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:39:26.766
+[BeforeEach] version v1
+ test/e2e/framework/metrics/init/init.go:31
+[It] A set of valid responses are returned for both pod and service Proxy [Conformance]
+ test/e2e/network/proxy.go:380
+Jan 14 04:39:26.768: INFO: Creating pod...
+Jan 14 04:39:26.777: INFO: Waiting up to 5m0s for pod "agnhost" in namespace "proxy-2916" to be "running"
+Jan 14 04:39:26.780: INFO: Pod "agnhost": Phase="Pending", Reason="", readiness=false. Elapsed: 3.017893ms
+Jan 14 04:39:28.785: INFO: Pod "agnhost": Phase="Running", Reason="", readiness=true. Elapsed: 2.00751598s
+Jan 14 04:39:28.785: INFO: Pod "agnhost" satisfied condition "running"
+Jan 14 04:39:28.785: INFO: Creating service...
+Jan 14 04:39:28.795: INFO: Starting http.Client for https://10.55.252.1:443/api/v1/namespaces/proxy-2916/pods/agnhost/proxy?method=DELETE
+Jan 14 04:39:28.799: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE
+Jan 14 04:39:28.799: INFO: Starting http.Client for https://10.55.252.1:443/api/v1/namespaces/proxy-2916/pods/agnhost/proxy?method=OPTIONS
+Jan 14 04:39:28.803: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS
+Jan 14 04:39:28.803: INFO: Starting http.Client for https://10.55.252.1:443/api/v1/namespaces/proxy-2916/pods/agnhost/proxy?method=PATCH
+Jan 14 04:39:28.806: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH
+Jan 14 04:39:28.806: INFO: Starting http.Client for https://10.55.252.1:443/api/v1/namespaces/proxy-2916/pods/agnhost/proxy?method=POST
+Jan 14 04:39:28.809: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST
+Jan 14 04:39:28.810: INFO: Starting http.Client for https://10.55.252.1:443/api/v1/namespaces/proxy-2916/pods/agnhost/proxy?method=PUT
+Jan 14 04:39:28.813: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT
+Jan 14 04:39:28.813: INFO: Starting http.Client for https://10.55.252.1:443/api/v1/namespaces/proxy-2916/services/e2e-proxy-test-service/proxy?method=DELETE
+Jan 14 04:39:28.817: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE
+Jan 14 04:39:28.817: INFO: Starting http.Client for https://10.55.252.1:443/api/v1/namespaces/proxy-2916/services/e2e-proxy-test-service/proxy?method=OPTIONS
+Jan 14 04:39:28.822: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS
+Jan 14 04:39:28.822: INFO: Starting http.Client for https://10.55.252.1:443/api/v1/namespaces/proxy-2916/services/e2e-proxy-test-service/proxy?method=PATCH
+Jan 14 04:39:28.827: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH
+Jan 14 04:39:28.827: INFO: Starting http.Client for https://10.55.252.1:443/api/v1/namespaces/proxy-2916/services/e2e-proxy-test-service/proxy?method=POST
+Jan 14 04:39:28.831: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST
+Jan 14 04:39:28.831: INFO: Starting http.Client for https://10.55.252.1:443/api/v1/namespaces/proxy-2916/services/e2e-proxy-test-service/proxy?method=PUT
+Jan 14 04:39:28.836: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT
+Jan 14 04:39:28.836: INFO: Starting http.Client for https://10.55.252.1:443/api/v1/namespaces/proxy-2916/pods/agnhost/proxy?method=GET
+Jan 14 04:39:28.838: INFO: http.Client request:GET StatusCode:301
+Jan 14 04:39:28.838: INFO: Starting http.Client for https://10.55.252.1:443/api/v1/namespaces/proxy-2916/services/e2e-proxy-test-service/proxy?method=GET
+Jan 14 04:39:28.842: INFO: http.Client request:GET StatusCode:301
+Jan 14 04:39:28.842: INFO: Starting http.Client for https://10.55.252.1:443/api/v1/namespaces/proxy-2916/pods/agnhost/proxy?method=HEAD
+Jan 14 04:39:28.845: INFO: http.Client request:HEAD StatusCode:301
+Jan 14 04:39:28.845: INFO: Starting http.Client for https://10.55.252.1:443/api/v1/namespaces/proxy-2916/services/e2e-proxy-test-service/proxy?method=HEAD
+Jan 14 04:39:28.851: INFO: http.Client request:HEAD StatusCode:301
+[AfterEach] version v1
+ test/e2e/framework/node/init/init.go:32
+Jan 14 04:39:28.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] version v1
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] version v1
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] version v1
+ tear down framework | framework.go:193
+STEP: Destroying namespace "proxy-2916" for this suite. 01/14/23 04:39:28.856
+------------------------------
+• [2.111 seconds]
+[sig-network] Proxy
+test/e2e/network/common/framework.go:23
+ version v1
+ test/e2e/network/proxy.go:74
+ A set of valid responses are returned for both pod and service Proxy [Conformance]
+ test/e2e/network/proxy.go:380
+------------------------------
+SSS
+------------------------------
+[sig-node] Pods
+ should be submitted and removed [NodeConformance] [Conformance]
+ test/e2e/common/node/pods.go:226
+[BeforeEach] [sig-node] Pods
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 04:39:28.861
+Jan 14 04:39:28.862: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename pods 01/14/23 04:39:28.862
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:39:28.878
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:39:28.881
+[BeforeEach] [sig-node] Pods
+ test/e2e/framework/metrics/init/init.go:31
+[BeforeEach] [sig-node] Pods
+ test/e2e/common/node/pods.go:194
+[It] should be submitted and removed [NodeConformance] [Conformance]
+ test/e2e/common/node/pods.go:226
+STEP: creating the pod 01/14/23 04:39:28.883
+STEP: setting up watch 01/14/23 04:39:28.883
+STEP: submitting the pod to kubernetes 01/14/23 04:39:28.986
+STEP: verifying the pod is in kubernetes 01/14/23 04:39:29.001
+STEP: verifying pod creation was observed 01/14/23 04:39:29.004
+Jan 14 04:39:29.005: INFO: Waiting up to 5m0s for pod "pod-submit-remove-ce53eaf2-4efc-45ed-b3bf-d42a47633b3b" in namespace "pods-1476" to be "running"
+Jan 14 04:39:29.011: INFO: Pod "pod-submit-remove-ce53eaf2-4efc-45ed-b3bf-d42a47633b3b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.212165ms
+Jan 14 04:39:31.016: INFO: Pod "pod-submit-remove-ce53eaf2-4efc-45ed-b3bf-d42a47633b3b": Phase="Running", Reason="", readiness=true. Elapsed: 2.011001316s
+Jan 14 04:39:31.016: INFO: Pod "pod-submit-remove-ce53eaf2-4efc-45ed-b3bf-d42a47633b3b" satisfied condition "running"
+STEP: deleting the pod gracefully 01/14/23 04:39:31.019
+STEP: verifying pod deletion was observed 01/14/23 04:39:31.026
+[AfterEach] [sig-node] Pods
+ test/e2e/framework/node/init/init.go:32
+Jan 14 04:39:33.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-node] Pods
+ test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-node] Pods
+ dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-node] Pods
+ tear down framework | framework.go:193
+STEP: Destroying namespace "pods-1476" for this suite. 01/14/23 04:39:33.396
+------------------------------
+• [4.540 seconds]
+[sig-node] Pods
+test/e2e/common/node/framework.go:23
+ should be submitted and removed [NodeConformance] [Conformance]
+ test/e2e/common/node/pods.go:226
+------------------------------
+SS
+------------------------------
+[sig-node] Probing container
+ with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
+ test/e2e/common/node/container_probe.go:72
+[BeforeEach] [sig-node] Probing container
+ set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 04:39:33.402
+Jan 14 04:39:33.402: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename container-probe 01/14/23 04:39:33.403
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:39:33.417
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:39:33.419
+[BeforeEach] [sig-node] Probing container
+ test/e2e/framework/metrics/init/init.go:31
+[BeforeEach] [sig-node] Probing container
+ test/e2e/common/node/container_probe.go:63
+[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
+ test/e2e/common/node/container_probe.go:72
+Jan 14 04:39:33.431: INFO: Waiting up to 5m0s for pod "test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78" in namespace "container-probe-8045" to be "running and ready"
+Jan 14 04:39:33.434: INFO: Pod "test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78": Phase="Pending", Reason="", readiness=false. Elapsed: 3.125955ms
+Jan 14 04:39:33.434: INFO: The phase of Pod test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78 is Pending, waiting for it to be Running (with Ready = true)
+Jan 14 04:39:35.439: INFO: Pod "test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78": Phase="Running", Reason="", readiness=false. Elapsed: 2.007689823s
+Jan 14 04:39:35.439: INFO: The phase of Pod test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78 is Running (Ready = false)
+Jan 14 04:39:37.440: INFO: Pod "test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78": Phase="Running", Reason="", readiness=false. Elapsed: 4.008678584s
+Jan 14 04:39:37.440: INFO: The phase of Pod test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78 is Running (Ready = false)
+Jan 14 04:39:39.439: INFO: Pod "test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78": Phase="Running", Reason="", readiness=false. Elapsed: 6.008609416s
+Jan 14 04:39:39.439: INFO: The phase of Pod test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78 is Running (Ready = false)
+Jan 14 04:39:41.439: INFO: Pod "test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78": Phase="Running", Reason="", readiness=false. Elapsed: 8.008114584s
+Jan 14 04:39:41.439: INFO: The phase of Pod test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78 is Running (Ready = false)
+Jan 14 04:39:43.438: INFO: Pod "test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78": Phase="Running", Reason="", readiness=false. Elapsed: 10.007324625s
+Jan 14 04:39:43.438: INFO: The phase of Pod test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78 is Running (Ready = false)
+Jan 14 04:39:45.438: INFO: Pod "test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78": Phase="Running", Reason="", readiness=false. Elapsed: 12.007542706s
+Jan 14 04:39:45.438: INFO: The phase of Pod test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78 is Running (Ready = false)
+Jan 14 04:39:47.441: INFO: Pod "test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78": Phase="Running", Reason="", readiness=false. Elapsed: 14.009935596s
Elapsed: 14.009935596s +Jan 14 04:39:47.441: INFO: The phase of Pod test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78 is Running (Ready = false) +Jan 14 04:39:49.440: INFO: Pod "test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78": Phase="Running", Reason="", readiness=false. Elapsed: 16.009186527s +Jan 14 04:39:49.440: INFO: The phase of Pod test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78 is Running (Ready = false) +Jan 14 04:39:51.439: INFO: Pod "test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78": Phase="Running", Reason="", readiness=false. Elapsed: 18.007721171s +Jan 14 04:39:51.439: INFO: The phase of Pod test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78 is Running (Ready = false) +Jan 14 04:39:53.438: INFO: Pod "test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78": Phase="Running", Reason="", readiness=false. Elapsed: 20.007542501s +Jan 14 04:39:53.438: INFO: The phase of Pod test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78 is Running (Ready = false) +Jan 14 04:39:55.438: INFO: Pod "test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78": Phase="Running", Reason="", readiness=true. Elapsed: 22.007441615s +Jan 14 04:39:55.438: INFO: The phase of Pod test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78 is Running (Ready = true) +Jan 14 04:39:55.438: INFO: Pod "test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78" satisfied condition "running and ready" +Jan 14 04:39:55.443: INFO: Container started at 2023-01-14 04:39:33 +0000 UTC, pod became ready at 2023-01-14 04:39:53 +0000 UTC +[AfterEach] [sig-node] Probing container + test/e2e/framework/node/init/init.go:32 +Jan 14 04:39:55.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Probing container + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Probing container + tear down framework | framework.go:193 +STEP: Destroying namespace "container-probe-8045" for this suite. 01/14/23 04:39:55.447 +------------------------------ +• [SLOW TEST] [22.051 seconds] +[sig-node] Probing container +test/e2e/common/node/framework.go:23 + with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:72 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Probing container + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:39:33.402 + Jan 14 04:39:33.402: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename container-probe 01/14/23 04:39:33.403 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:39:33.417 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:39:33.419 + [BeforeEach] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 + [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:72 + Jan 14 04:39:33.431: INFO: Waiting up to 5m0s for pod "test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78" in namespace "container-probe-8045" to be "running and ready" + Jan 14 04:39:33.434: INFO: Pod "test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.125955ms + Jan 14 04:39:33.434: INFO: The phase of Pod test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78 is Pending, waiting for it to be Running (with Ready = true) + Jan 14 04:39:35.439: INFO: Pod "test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78": Phase="Running", Reason="", readiness=false. Elapsed: 2.007689823s + Jan 14 04:39:35.439: INFO: The phase of Pod test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78 is Running (Ready = false) + Jan 14 04:39:37.440: INFO: Pod "test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78": Phase="Running", Reason="", readiness=false. Elapsed: 4.008678584s + Jan 14 04:39:37.440: INFO: The phase of Pod test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78 is Running (Ready = false) + Jan 14 04:39:39.439: INFO: Pod "test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78": Phase="Running", Reason="", readiness=false. Elapsed: 6.008609416s + Jan 14 04:39:39.439: INFO: The phase of Pod test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78 is Running (Ready = false) + Jan 14 04:39:41.439: INFO: Pod "test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78": Phase="Running", Reason="", readiness=false. Elapsed: 8.008114584s + Jan 14 04:39:41.439: INFO: The phase of Pod test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78 is Running (Ready = false) + Jan 14 04:39:43.438: INFO: Pod "test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78": Phase="Running", Reason="", readiness=false. Elapsed: 10.007324625s + Jan 14 04:39:43.438: INFO: The phase of Pod test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78 is Running (Ready = false) + Jan 14 04:39:45.438: INFO: Pod "test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78": Phase="Running", Reason="", readiness=false. Elapsed: 12.007542706s + Jan 14 04:39:45.438: INFO: The phase of Pod test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78 is Running (Ready = false) + Jan 14 04:39:47.441: INFO: Pod "test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78": Phase="Running", Reason="", readiness=false. Elapsed: 14.009935596s + Jan 14 04:39:47.441: INFO: The phase of Pod test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78 is Running (Ready = false) + Jan 14 04:39:49.440: INFO: Pod "test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78": Phase="Running", Reason="", readiness=false. Elapsed: 16.009186527s + Jan 14 04:39:49.440: INFO: The phase of Pod test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78 is Running (Ready = false) + Jan 14 04:39:51.439: INFO: Pod "test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78": Phase="Running", Reason="", readiness=false. Elapsed: 18.007721171s + Jan 14 04:39:51.439: INFO: The phase of Pod test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78 is Running (Ready = false) + Jan 14 04:39:53.438: INFO: Pod "test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78": Phase="Running", Reason="", readiness=false. Elapsed: 20.007542501s + Jan 14 04:39:53.438: INFO: The phase of Pod test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78 is Running (Ready = false) + Jan 14 04:39:55.438: INFO: Pod "test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.007441615s + Jan 14 04:39:55.438: INFO: The phase of Pod test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78 is Running (Ready = true) + Jan 14 04:39:55.438: INFO: Pod "test-webserver-a1c3f22a-97ac-4ccf-9951-eb363ec3ba78" satisfied condition "running and ready" + Jan 14 04:39:55.443: INFO: Container started at 2023-01-14 04:39:33 +0000 UTC, pod became ready at 2023-01-14 04:39:53 +0000 UTC + [AfterEach] [sig-node] Probing container + test/e2e/framework/node/init/init.go:32 + Jan 14 04:39:55.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Probing container + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Probing container + tear down framework | framework.go:193 + STEP: Destroying namespace "container-probe-8045" for this suite. 01/14/23 04:39:55.447 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/network/service.go:2191 +[BeforeEach] [sig-network] Services + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:39:55.454 +Jan 14 04:39:55.454: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename services 01/14/23 04:39:55.455 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:39:55.469 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:39:55.471 +[BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/network/service.go:2191 +STEP: creating service in namespace services-228 01/14/23 04:39:55.473 +STEP: creating service affinity-clusterip in namespace services-228 01/14/23 04:39:55.473 +STEP: creating replication controller affinity-clusterip in namespace services-228 01/14/23 04:39:55.482 +I0114 04:39:55.490175 25 runners.go:193] Created replication controller with name: affinity-clusterip, namespace: services-228, replica count: 3 +I0114 04:39:58.541717 25 runners.go:193] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Jan 14 04:39:58.547: INFO: Creating new exec pod +Jan 14 04:39:58.557: INFO: Waiting up to 5m0s for pod "execpod-affinityczlmk" in namespace "services-228" to be "running" +Jan 14 04:39:58.560: INFO: Pod "execpod-affinityczlmk": Phase="Pending", Reason="", readiness=false. Elapsed: 3.368272ms +Jan 14 04:40:00.565: INFO: Pod "execpod-affinityczlmk": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008092751s +Jan 14 04:40:00.565: INFO: Pod "execpod-affinityczlmk" satisfied condition "running" +Jan 14 04:40:01.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-228 exec execpod-affinityczlmk -- /bin/sh -x -c nc -v -z -w 2 affinity-clusterip 80' +Jan 14 04:40:01.680: INFO: stderr: "+ nc -v -z -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" +Jan 14 04:40:01.680: INFO: stdout: "" +Jan 14 04:40:01.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-228 exec execpod-affinityczlmk -- /bin/sh -x -c nc -v -z -w 2 10.55.255.10 80' +Jan 14 04:40:01.795: INFO: stderr: "+ nc -v -z -w 2 10.55.255.10 80\nConnection to 10.55.255.10 80 port [tcp/http] succeeded!\n" +Jan 14 04:40:01.795: INFO: stdout: "" +Jan 14 04:40:01.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-228 exec execpod-affinityczlmk -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.55.255.10:80/ ; done' +Jan 14 04:40:01.977: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.255.10:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.255.10:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.255.10:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.255.10:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.255.10:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.255.10:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.255.10:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.255.10:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.255.10:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.255.10:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.255.10:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.255.10:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.255.10:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.255.10:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.255.10:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.255.10:80/\n" +Jan 14 04:40:01.977: INFO: stdout: "\naffinity-clusterip-9vxf9\naffinity-clusterip-9vxf9\naffinity-clusterip-9vxf9\naffinity-clusterip-9vxf9\naffinity-clusterip-9vxf9\naffinity-clusterip-9vxf9\naffinity-clusterip-9vxf9\naffinity-clusterip-9vxf9\naffinity-clusterip-9vxf9\naffinity-clusterip-9vxf9\naffinity-clusterip-9vxf9\naffinity-clusterip-9vxf9\naffinity-clusterip-9vxf9\naffinity-clusterip-9vxf9\naffinity-clusterip-9vxf9\naffinity-clusterip-9vxf9" +Jan 14 04:40:01.977: INFO: Received response from host: affinity-clusterip-9vxf9 +Jan 14 04:40:01.977: INFO: Received response from host: affinity-clusterip-9vxf9 +Jan 14 04:40:01.977: INFO: Received response from host: affinity-clusterip-9vxf9 +Jan 14 04:40:01.977: INFO: Received response from host: affinity-clusterip-9vxf9 +Jan 14 04:40:01.977: INFO: Received response from host: affinity-clusterip-9vxf9 +Jan 14 04:40:01.977: INFO: Received response from host: affinity-clusterip-9vxf9 +Jan 14 04:40:01.977: INFO: Received response from host: affinity-clusterip-9vxf9 +Jan 14 04:40:01.977: INFO: Received response from host: affinity-clusterip-9vxf9 +Jan 14 04:40:01.977: INFO: Received response from host: affinity-clusterip-9vxf9 +Jan 14 04:40:01.977: INFO: Received response from host: affinity-clusterip-9vxf9 +Jan 14 04:40:01.977: INFO: Received response 
from host: affinity-clusterip-9vxf9 +Jan 14 04:40:01.977: INFO: Received response from host: affinity-clusterip-9vxf9 +Jan 14 04:40:01.977: INFO: Received response from host: affinity-clusterip-9vxf9 +Jan 14 04:40:01.977: INFO: Received response from host: affinity-clusterip-9vxf9 +Jan 14 04:40:01.977: INFO: Received response from host: affinity-clusterip-9vxf9 +Jan 14 04:40:01.977: INFO: Received response from host: affinity-clusterip-9vxf9 +Jan 14 04:40:01.977: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-clusterip in namespace services-228, will wait for the garbage collector to delete the pods 01/14/23 04:40:01.994 +Jan 14 04:40:02.056: INFO: Deleting ReplicationController affinity-clusterip took: 7.447469ms +Jan 14 04:40:02.156: INFO: Terminating ReplicationController affinity-clusterip pods took: 100.450088ms +[AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 +Jan 14 04:40:04.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 +STEP: Destroying namespace "services-228" for this suite. 01/14/23 04:40:04.279 +------------------------------ +• [SLOW TEST] [8.831 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/network/service.go:2191 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:39:55.454 + Jan 14 04:39:55.454: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename services 01/14/23 04:39:55.455 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:39:55.469 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:39:55.471 + [BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/network/service.go:2191 + STEP: creating service in namespace services-228 01/14/23 04:39:55.473 + STEP: creating service affinity-clusterip in namespace services-228 01/14/23 04:39:55.473 + STEP: creating replication controller affinity-clusterip in namespace services-228 01/14/23 04:39:55.482 + I0114 04:39:55.490175 25 runners.go:193] Created replication controller with name: affinity-clusterip, namespace: services-228, replica count: 3 + I0114 04:39:58.541717 25 runners.go:193] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + Jan 14 04:39:58.547: INFO: Creating new exec pod + Jan 14 04:39:58.557: INFO: Waiting up to 5m0s for pod "execpod-affinityczlmk" in namespace "services-228" to be "running" + Jan 14 04:39:58.560: INFO: Pod "execpod-affinityczlmk": Phase="Pending", Reason="", readiness=false. Elapsed: 3.368272ms + Jan 14 04:40:00.565: INFO: Pod "execpod-affinityczlmk": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008092751s + Jan 14 04:40:00.565: INFO: Pod "execpod-affinityczlmk" satisfied condition "running" + Jan 14 04:40:01.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-228 exec execpod-affinityczlmk -- /bin/sh -x -c nc -v -z -w 2 affinity-clusterip 80' + Jan 14 04:40:01.680: INFO: stderr: "+ nc -v -z -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" + Jan 14 04:40:01.680: INFO: stdout: "" + Jan 14 04:40:01.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-228 exec execpod-affinityczlmk -- /bin/sh -x -c nc -v -z -w 2 10.55.255.10 80' + Jan 14 04:40:01.795: INFO: stderr: "+ nc -v -z -w 2 10.55.255.10 80\nConnection to 10.55.255.10 80 port [tcp/http] succeeded!\n" + Jan 14 04:40:01.795: INFO: stdout: "" + Jan 14 04:40:01.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-228 exec execpod-affinityczlmk -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.55.255.10:80/ ; done' + Jan 14 04:40:01.977: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.255.10:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.255.10:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.255.10:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.255.10:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.255.10:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.255.10:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.255.10:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.255.10:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.255.10:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.255.10:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.255.10:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.255.10:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.255.10:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.255.10:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.255.10:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.255.10:80/\n" + Jan 14 04:40:01.977: INFO: stdout: "\naffinity-clusterip-9vxf9\naffinity-clusterip-9vxf9\naffinity-clusterip-9vxf9\naffinity-clusterip-9vxf9\naffinity-clusterip-9vxf9\naffinity-clusterip-9vxf9\naffinity-clusterip-9vxf9\naffinity-clusterip-9vxf9\naffinity-clusterip-9vxf9\naffinity-clusterip-9vxf9\naffinity-clusterip-9vxf9\naffinity-clusterip-9vxf9\naffinity-clusterip-9vxf9\naffinity-clusterip-9vxf9\naffinity-clusterip-9vxf9\naffinity-clusterip-9vxf9" + Jan 14 04:40:01.977: INFO: Received response from host: affinity-clusterip-9vxf9 + Jan 14 04:40:01.977: INFO: Received response from host: affinity-clusterip-9vxf9 + Jan 14 04:40:01.977: INFO: Received response from host: affinity-clusterip-9vxf9 + Jan 14 04:40:01.977: INFO: Received response from host: affinity-clusterip-9vxf9 + Jan 14 04:40:01.977: INFO: Received response from host: affinity-clusterip-9vxf9 + Jan 14 04:40:01.977: INFO: Received response from host: affinity-clusterip-9vxf9 + Jan 14 04:40:01.977: INFO: Received response from host: affinity-clusterip-9vxf9 + Jan 14 04:40:01.977: INFO: Received response from host: affinity-clusterip-9vxf9 + Jan 14 04:40:01.977: INFO: Received response from host: affinity-clusterip-9vxf9 + Jan 14 04:40:01.977: INFO: Received response from host: affinity-clusterip-9vxf9 + Jan 14 04:40:01.977: 
INFO: Received response from host: affinity-clusterip-9vxf9 + Jan 14 04:40:01.977: INFO: Received response from host: affinity-clusterip-9vxf9 + Jan 14 04:40:01.977: INFO: Received response from host: affinity-clusterip-9vxf9 + Jan 14 04:40:01.977: INFO: Received response from host: affinity-clusterip-9vxf9 + Jan 14 04:40:01.977: INFO: Received response from host: affinity-clusterip-9vxf9 + Jan 14 04:40:01.977: INFO: Received response from host: affinity-clusterip-9vxf9 + Jan 14 04:40:01.977: INFO: Cleaning up the exec pod + STEP: deleting ReplicationController affinity-clusterip in namespace services-228, will wait for the garbage collector to delete the pods 01/14/23 04:40:01.994 + Jan 14 04:40:02.056: INFO: Deleting ReplicationController affinity-clusterip took: 7.447469ms + Jan 14 04:40:02.156: INFO: Terminating ReplicationController affinity-clusterip pods took: 100.450088ms + [AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 + Jan 14 04:40:04.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 + STEP: Destroying namespace "services-228" for this suite. 01/14/23 04:40:04.279 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should fail to create ConfigMap with empty key [Conformance] + test/e2e/common/node/configmap.go:138 +[BeforeEach] [sig-node] ConfigMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:40:04.286 +Jan 14 04:40:04.286: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename configmap 01/14/23 04:40:04.287 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:40:04.3 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:40:04.302 +[BeforeEach] [sig-node] ConfigMap + test/e2e/framework/metrics/init/init.go:31 +[It] should fail to create ConfigMap with empty key [Conformance] + test/e2e/common/node/configmap.go:138 +STEP: Creating configMap that has name configmap-test-emptyKey-434cf582-87bb-472a-98b3-7cac8a798721 01/14/23 04:40:04.304 +[AfterEach] [sig-node] ConfigMap + test/e2e/framework/node/init/init.go:32 +Jan 14 04:40:04.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] ConfigMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] ConfigMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] ConfigMap + tear down framework | framework.go:193 +STEP: Destroying namespace "configmap-4739" for this suite. 
01/14/23 04:40:04.31 +------------------------------ +• [0.030 seconds] +[sig-node] ConfigMap +test/e2e/common/node/framework.go:23 + should fail to create ConfigMap with empty key [Conformance] + test/e2e/common/node/configmap.go:138 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] ConfigMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:40:04.286 + Jan 14 04:40:04.286: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename configmap 01/14/23 04:40:04.287 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:40:04.3 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:40:04.302 + [BeforeEach] [sig-node] ConfigMap + test/e2e/framework/metrics/init/init.go:31 + [It] should fail to create ConfigMap with empty key [Conformance] + test/e2e/common/node/configmap.go:138 + STEP: Creating configMap that has name configmap-test-emptyKey-434cf582-87bb-472a-98b3-7cac8a798721 01/14/23 04:40:04.304 + [AfterEach] [sig-node] ConfigMap + test/e2e/framework/node/init/init.go:32 + Jan 14 04:40:04.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] ConfigMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] ConfigMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] ConfigMap + tear down framework | framework.go:193 + STEP: Destroying namespace "configmap-4739" for this suite. 01/14/23 04:40:04.31 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Containers + should use the image defaults if command and args are blank [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:39 +[BeforeEach] [sig-node] Containers + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:40:04.317 +Jan 14 04:40:04.317: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename containers 01/14/23 04:40:04.318 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:40:04.33 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:40:04.332 +[BeforeEach] [sig-node] Containers + test/e2e/framework/metrics/init/init.go:31 +[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:39 +Jan 14 04:40:04.343: INFO: Waiting up to 5m0s for pod "client-containers-84fe69ac-a0dc-4a62-ada0-f81fdc9ae63b" in namespace "containers-1438" to be "running" +Jan 14 04:40:04.346: INFO: Pod "client-containers-84fe69ac-a0dc-4a62-ada0-f81fdc9ae63b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.642902ms +Jan 14 04:40:06.351: INFO: Pod "client-containers-84fe69ac-a0dc-4a62-ada0-f81fdc9ae63b": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.007642016s +Jan 14 04:40:06.351: INFO: Pod "client-containers-84fe69ac-a0dc-4a62-ada0-f81fdc9ae63b" satisfied condition "running" +[AfterEach] [sig-node] Containers + test/e2e/framework/node/init/init.go:32 +Jan 14 04:40:06.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Containers + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Containers + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Containers + tear down framework | framework.go:193 +STEP: Destroying namespace "containers-1438" for this suite. 01/14/23 04:40:06.362 +------------------------------ +• [2.050 seconds] +[sig-node] Containers +test/e2e/common/node/framework.go:23 + should use the image defaults if command and args are blank [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:39 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Containers + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:40:04.317 + Jan 14 04:40:04.317: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename containers 01/14/23 04:40:04.318 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:40:04.33 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:40:04.332 + [BeforeEach] [sig-node] Containers + test/e2e/framework/metrics/init/init.go:31 + [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:39 + Jan 14 04:40:04.343: INFO: Waiting up to 5m0s for pod "client-containers-84fe69ac-a0dc-4a62-ada0-f81fdc9ae63b" in namespace "containers-1438" to be "running" + Jan 14 04:40:04.346: INFO: Pod "client-containers-84fe69ac-a0dc-4a62-ada0-f81fdc9ae63b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.642902ms + Jan 14 04:40:06.351: INFO: Pod "client-containers-84fe69ac-a0dc-4a62-ada0-f81fdc9ae63b": Phase="Running", Reason="", readiness=true. Elapsed: 2.007642016s + Jan 14 04:40:06.351: INFO: Pod "client-containers-84fe69ac-a0dc-4a62-ada0-f81fdc9ae63b" satisfied condition "running" + [AfterEach] [sig-node] Containers + test/e2e/framework/node/init/init.go:32 + Jan 14 04:40:06.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Containers + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Containers + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Containers + tear down framework | framework.go:193 + STEP: Destroying namespace "containers-1438" for this suite. 
01/14/23 04:40:06.362 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-apps] ReplicaSet + should serve a basic image on each replica with a public image [Conformance] + test/e2e/apps/replica_set.go:111 +[BeforeEach] [sig-apps] ReplicaSet + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:40:06.368 +Jan 14 04:40:06.368: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename replicaset 01/14/23 04:40:06.368 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:40:06.386 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:40:06.389 +[BeforeEach] [sig-apps] ReplicaSet + test/e2e/framework/metrics/init/init.go:31 +[It] should serve a basic image on each replica with a public image [Conformance] + test/e2e/apps/replica_set.go:111 +Jan 14 04:40:06.391: INFO: Creating ReplicaSet my-hostname-basic-8a943961-8893-49c5-9897-08d3d7976926 +Jan 14 04:40:06.401: INFO: Pod name my-hostname-basic-8a943961-8893-49c5-9897-08d3d7976926: Found 0 pods out of 1 +Jan 14 04:40:11.406: INFO: Pod name my-hostname-basic-8a943961-8893-49c5-9897-08d3d7976926: Found 1 pods out of 1 +Jan 14 04:40:11.406: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-8a943961-8893-49c5-9897-08d3d7976926" is running +Jan 14 04:40:11.406: INFO: Waiting up to 5m0s for pod "my-hostname-basic-8a943961-8893-49c5-9897-08d3d7976926-ph5kl" in namespace "replicaset-7355" to be "running" +Jan 14 04:40:11.409: INFO: Pod "my-hostname-basic-8a943961-8893-49c5-9897-08d3d7976926-ph5kl": Phase="Running", Reason="", readiness=true. Elapsed: 3.301216ms +Jan 14 04:40:11.409: INFO: Pod "my-hostname-basic-8a943961-8893-49c5-9897-08d3d7976926-ph5kl" satisfied condition "running" +Jan 14 04:40:11.409: INFO: Pod "my-hostname-basic-8a943961-8893-49c5-9897-08d3d7976926-ph5kl" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-14 04:40:06 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-14 04:40:07 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-14 04:40:07 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-14 04:40:06 +0000 UTC Reason: Message:}]) +Jan 14 04:40:11.409: INFO: Trying to dial the pod +Jan 14 04:40:16.420: INFO: Controller my-hostname-basic-8a943961-8893-49c5-9897-08d3d7976926: Got expected result from replica 1 [my-hostname-basic-8a943961-8893-49c5-9897-08d3d7976926-ph5kl]: "my-hostname-basic-8a943961-8893-49c5-9897-08d3d7976926-ph5kl", 1 of 1 required successes so far +[AfterEach] [sig-apps] ReplicaSet + test/e2e/framework/node/init/init.go:32 +Jan 14 04:40:16.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] ReplicaSet + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] ReplicaSet + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] ReplicaSet + tear down framework | framework.go:193 +STEP: Destroying namespace "replicaset-7355" for this suite. 
01/14/23 04:40:16.425 +------------------------------ +• [SLOW TEST] [10.063 seconds] +[sig-apps] ReplicaSet +test/e2e/apps/framework.go:23 + should serve a basic image on each replica with a public image [Conformance] + test/e2e/apps/replica_set.go:111 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] ReplicaSet + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:40:06.368 + Jan 14 04:40:06.368: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename replicaset 01/14/23 04:40:06.368 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:40:06.386 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:40:06.389 + [BeforeEach] [sig-apps] ReplicaSet + test/e2e/framework/metrics/init/init.go:31 + [It] should serve a basic image on each replica with a public image [Conformance] + test/e2e/apps/replica_set.go:111 + Jan 14 04:40:06.391: INFO: Creating ReplicaSet my-hostname-basic-8a943961-8893-49c5-9897-08d3d7976926 + Jan 14 04:40:06.401: INFO: Pod name my-hostname-basic-8a943961-8893-49c5-9897-08d3d7976926: Found 0 pods out of 1 + Jan 14 04:40:11.406: INFO: Pod name my-hostname-basic-8a943961-8893-49c5-9897-08d3d7976926: Found 1 pods out of 1 + Jan 14 04:40:11.406: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-8a943961-8893-49c5-9897-08d3d7976926" is running + Jan 14 04:40:11.406: INFO: Waiting up to 5m0s for pod "my-hostname-basic-8a943961-8893-49c5-9897-08d3d7976926-ph5kl" in namespace "replicaset-7355" to be "running" + Jan 14 04:40:11.409: INFO: Pod "my-hostname-basic-8a943961-8893-49c5-9897-08d3d7976926-ph5kl": Phase="Running", Reason="", readiness=true. Elapsed: 3.301216ms + Jan 14 04:40:11.409: INFO: Pod "my-hostname-basic-8a943961-8893-49c5-9897-08d3d7976926-ph5kl" satisfied condition "running" + Jan 14 04:40:11.409: INFO: Pod "my-hostname-basic-8a943961-8893-49c5-9897-08d3d7976926-ph5kl" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-14 04:40:06 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-14 04:40:07 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-14 04:40:07 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-14 04:40:06 +0000 UTC Reason: Message:}]) + Jan 14 04:40:11.409: INFO: Trying to dial the pod + Jan 14 04:40:16.420: INFO: Controller my-hostname-basic-8a943961-8893-49c5-9897-08d3d7976926: Got expected result from replica 1 [my-hostname-basic-8a943961-8893-49c5-9897-08d3d7976926-ph5kl]: "my-hostname-basic-8a943961-8893-49c5-9897-08d3d7976926-ph5kl", 1 of 1 required successes so far + [AfterEach] [sig-apps] ReplicaSet + test/e2e/framework/node/init/init.go:32 + Jan 14 04:40:16.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] ReplicaSet + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] ReplicaSet + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] ReplicaSet + tear down framework | framework.go:193 + STEP: Destroying namespace "replicaset-7355" for this suite. 
01/14/23 04:40:16.425 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Ephemeral Containers [NodeConformance] + will start an ephemeral container in an existing pod [Conformance] + test/e2e/common/node/ephemeral_containers.go:45 +[BeforeEach] [sig-node] Ephemeral Containers [NodeConformance] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:40:16.431 +Jan 14 04:40:16.431: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename ephemeral-containers-test 01/14/23 04:40:16.432 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:40:16.447 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:40:16.449 +[BeforeEach] [sig-node] Ephemeral Containers [NodeConformance] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Ephemeral Containers [NodeConformance] + test/e2e/common/node/ephemeral_containers.go:38 +[It] will start an ephemeral container in an existing pod [Conformance] + test/e2e/common/node/ephemeral_containers.go:45 +STEP: creating a target pod 01/14/23 04:40:16.451 +Jan 14 04:40:16.462: INFO: Waiting up to 5m0s for pod "ephemeral-containers-target-pod" in namespace "ephemeral-containers-test-1394" to be "running and ready" +Jan 14 04:40:16.466: INFO: Pod "ephemeral-containers-target-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 3.929188ms +Jan 14 04:40:16.466: INFO: The phase of Pod ephemeral-containers-target-pod is Pending, waiting for it to be Running (with Ready = true) +Jan 14 04:40:18.471: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.009011361s +Jan 14 04:40:18.471: INFO: The phase of Pod ephemeral-containers-target-pod is Running (Ready = true) +Jan 14 04:40:18.471: INFO: Pod "ephemeral-containers-target-pod" satisfied condition "running and ready" +STEP: adding an ephemeral container 01/14/23 04:40:18.474 +Jan 14 04:40:18.486: INFO: Waiting up to 1m0s for pod "ephemeral-containers-target-pod" in namespace "ephemeral-containers-test-1394" to be "container debugger running" +Jan 14 04:40:18.489: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=true. Elapsed: 3.065245ms +Jan 14 04:40:20.493: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.007191215s +Jan 14 04:40:20.493: INFO: Pod "ephemeral-containers-target-pod" satisfied condition "container debugger running" +STEP: checking pod container endpoints 01/14/23 04:40:20.493 +Jan 14 04:40:20.493: INFO: ExecWithOptions {Command:[/bin/echo marco] Namespace:ephemeral-containers-test-1394 PodName:ephemeral-containers-target-pod ContainerName:debugger Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jan 14 04:40:20.493: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +Jan 14 04:40:20.494: INFO: ExecWithOptions: Clientset creation +Jan 14 04:40:20.494: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/ephemeral-containers-test-1394/pods/ephemeral-containers-target-pod/exec?command=%2Fbin%2Fecho&command=marco&container=debugger&container=debugger&stderr=true&stdout=true) +Jan 14 04:40:20.535: INFO: Exec stderr: "" +[AfterEach] [sig-node] Ephemeral Containers [NodeConformance] + test/e2e/framework/node/init/init.go:32 +Jan 14 04:40:20.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Ephemeral Containers [NodeConformance] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Ephemeral Containers [NodeConformance] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Ephemeral Containers [NodeConformance] + tear down framework | framework.go:193 +STEP: Destroying namespace "ephemeral-containers-test-1394" for this suite. 01/14/23 04:40:20.546 +------------------------------ +• [4.121 seconds] +[sig-node] Ephemeral Containers [NodeConformance] +test/e2e/common/node/framework.go:23 + will start an ephemeral container in an existing pod [Conformance] + test/e2e/common/node/ephemeral_containers.go:45 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Ephemeral Containers [NodeConformance] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:40:16.431 + Jan 14 04:40:16.431: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename ephemeral-containers-test 01/14/23 04:40:16.432 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:40:16.447 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:40:16.449 + [BeforeEach] [sig-node] Ephemeral Containers [NodeConformance] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Ephemeral Containers [NodeConformance] + test/e2e/common/node/ephemeral_containers.go:38 + [It] will start an ephemeral container in an existing pod [Conformance] + test/e2e/common/node/ephemeral_containers.go:45 + STEP: creating a target pod 01/14/23 04:40:16.451 + Jan 14 04:40:16.462: INFO: Waiting up to 5m0s for pod "ephemeral-containers-target-pod" in namespace "ephemeral-containers-test-1394" to be "running and ready" + Jan 14 04:40:16.466: INFO: Pod "ephemeral-containers-target-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 3.929188ms + Jan 14 04:40:16.466: INFO: The phase of Pod ephemeral-containers-target-pod is Pending, waiting for it to be Running (with Ready = true) + Jan 14 04:40:18.471: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.009011361s + Jan 14 04:40:18.471: INFO: The phase of Pod ephemeral-containers-target-pod is Running (Ready = true) + Jan 14 04:40:18.471: INFO: Pod "ephemeral-containers-target-pod" satisfied condition "running and ready" + STEP: adding an ephemeral container 01/14/23 04:40:18.474 + Jan 14 04:40:18.486: INFO: Waiting up to 1m0s for pod "ephemeral-containers-target-pod" in namespace "ephemeral-containers-test-1394" to be "container debugger running" + Jan 14 04:40:18.489: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=true. Elapsed: 3.065245ms + Jan 14 04:40:20.493: INFO: Pod "ephemeral-containers-target-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.007191215s + Jan 14 04:40:20.493: INFO: Pod "ephemeral-containers-target-pod" satisfied condition "container debugger running" + STEP: checking pod container endpoints 01/14/23 04:40:20.493 + Jan 14 04:40:20.493: INFO: ExecWithOptions {Command:[/bin/echo marco] Namespace:ephemeral-containers-test-1394 PodName:ephemeral-containers-target-pod ContainerName:debugger Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jan 14 04:40:20.493: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + Jan 14 04:40:20.494: INFO: ExecWithOptions: Clientset creation + Jan 14 04:40:20.494: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/ephemeral-containers-test-1394/pods/ephemeral-containers-target-pod/exec?command=%2Fbin%2Fecho&command=marco&container=debugger&container=debugger&stderr=true&stdout=true) + Jan 14 04:40:20.535: INFO: Exec stderr: "" + [AfterEach] [sig-node] Ephemeral Containers [NodeConformance] + test/e2e/framework/node/init/init.go:32 + Jan 14 04:40:20.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Ephemeral Containers [NodeConformance] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Ephemeral Containers [NodeConformance] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Ephemeral Containers [NodeConformance] + tear down framework | framework.go:193 + STEP: Destroying namespace "ephemeral-containers-test-1394" for this suite. 01/14/23 04:40:20.546 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a pod. [Conformance] + test/e2e/apimachinery/resource_quota.go:230 +[BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:40:20.552 +Jan 14 04:40:20.552: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename resourcequota 01/14/23 04:40:20.553 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:40:20.568 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:40:20.57 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 +[It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:230 +STEP: Counting existing ResourceQuota 01/14/23 04:40:20.572 +STEP: Creating a ResourceQuota 01/14/23 04:40:25.578 +STEP: Ensuring resource quota status is calculated 01/14/23 04:40:25.583 +STEP: Creating a Pod that fits quota 01/14/23 04:40:27.587 +STEP: Ensuring ResourceQuota status captures the pod usage 01/14/23 04:40:27.603 +STEP: Not allowing a pod to be created that exceeds remaining quota 01/14/23 04:40:29.607 +STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) 01/14/23 04:40:29.611 +STEP: Ensuring a pod cannot update its resource requirements 01/14/23 04:40:29.614 +STEP: Ensuring attempts to update pod resource requirements did not change quota usage 01/14/23 04:40:29.618 +STEP: Deleting the pod 01/14/23 04:40:31.622 +STEP: Ensuring resource quota status released the pod usage 01/14/23 04:40:31.634 +[AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/node/init/init.go:32 +Jan 14 04:40:33.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + tear down framework | framework.go:193 +STEP: Destroying namespace "resourcequota-5956" for this suite. 01/14/23 04:40:33.644 +------------------------------ +• [SLOW TEST] [13.097 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a pod. [Conformance] + test/e2e/apimachinery/resource_quota.go:230 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:40:20.552 + Jan 14 04:40:20.552: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename resourcequota 01/14/23 04:40:20.553 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:40:20.568 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:40:20.57 + [BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 + [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:230 + STEP: Counting existing ResourceQuota 01/14/23 04:40:20.572 + STEP: Creating a ResourceQuota 01/14/23 04:40:25.578 + STEP: Ensuring resource quota status is calculated 01/14/23 04:40:25.583 + STEP: Creating a Pod that fits quota 01/14/23 04:40:27.587 + STEP: Ensuring ResourceQuota status captures the pod usage 01/14/23 04:40:27.603 + STEP: Not allowing a pod to be created that exceeds remaining quota 01/14/23 04:40:29.607 + STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) 01/14/23 04:40:29.611 + STEP: Ensuring a pod cannot update its resource requirements 01/14/23 04:40:29.614 + STEP: Ensuring attempts to update pod resource requirements did not change quota usage 01/14/23 04:40:29.618 + STEP: Deleting the pod 01/14/23 04:40:31.622 + STEP: Ensuring resource quota status released the pod usage 01/14/23 04:40:31.634 + [AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/node/init/init.go:32 + Jan 14 04:40:33.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + tear down framework | framework.go:193 + STEP: Destroying namespace "resourcequota-5956" for this suite. 01/14/23 04:40:33.644 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] CSIInlineVolumes + should support CSIVolumeSource in Pod API [Conformance] + test/e2e/storage/csi_inline.go:131 +[BeforeEach] [sig-storage] CSIInlineVolumes + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:40:33.65 +Jan 14 04:40:33.650: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename csiinlinevolumes 01/14/23 04:40:33.651 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:40:33.666 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:40:33.668 +[BeforeEach] [sig-storage] CSIInlineVolumes + test/e2e/framework/metrics/init/init.go:31 +[It] should support CSIVolumeSource in Pod API [Conformance] + test/e2e/storage/csi_inline.go:131 +STEP: creating 01/14/23 04:40:33.67 +STEP: getting 01/14/23 04:40:33.689 +STEP: listing in namespace 01/14/23 04:40:33.692 +STEP: patching 01/14/23 04:40:33.697 +STEP: deleting 01/14/23 04:40:33.704 +[AfterEach] [sig-storage] CSIInlineVolumes + test/e2e/framework/node/init/init.go:32 +Jan 14 04:40:33.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] CSIInlineVolumes + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] CSIInlineVolumes + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] CSIInlineVolumes + tear down framework | framework.go:193 +STEP: Destroying namespace "csiinlinevolumes-6846" for this suite. 
01/14/23 04:40:33.718 +------------------------------ +• [0.072 seconds] +[sig-storage] CSIInlineVolumes +test/e2e/storage/utils/framework.go:23 + should support CSIVolumeSource in Pod API [Conformance] + test/e2e/storage/csi_inline.go:131 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] CSIInlineVolumes + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:40:33.65 + Jan 14 04:40:33.650: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename csiinlinevolumes 01/14/23 04:40:33.651 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:40:33.666 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:40:33.668 + [BeforeEach] [sig-storage] CSIInlineVolumes + test/e2e/framework/metrics/init/init.go:31 + [It] should support CSIVolumeSource in Pod API [Conformance] + test/e2e/storage/csi_inline.go:131 + STEP: creating 01/14/23 04:40:33.67 + STEP: getting 01/14/23 04:40:33.689 + STEP: listing in namespace 01/14/23 04:40:33.692 + STEP: patching 01/14/23 04:40:33.697 + STEP: deleting 01/14/23 04:40:33.704 + [AfterEach] [sig-storage] CSIInlineVolumes + test/e2e/framework/node/init/init.go:32 + Jan 14 04:40:33.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] CSIInlineVolumes + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] CSIInlineVolumes + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] CSIInlineVolumes + tear down framework | framework.go:193 + STEP: Destroying namespace "csiinlinevolumes-6846" for this suite. 01/14/23 04:40:33.718 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-storage] Projected configMap + optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:174 +[BeforeEach] [sig-storage] Projected configMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:40:33.723 +Jan 14 04:40:33.723: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename projected 01/14/23 04:40:33.724 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:40:33.738 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:40:33.74 +[BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:31 +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:174 +STEP: Creating configMap with name cm-test-opt-del-142d0526-b1a1-4a5c-befe-76ce87702357 01/14/23 04:40:33.746 +STEP: Creating configMap with name cm-test-opt-upd-5535a944-3863-4205-86e1-0b7fe9031e39 01/14/23 04:40:33.75 +STEP: Creating the pod 01/14/23 04:40:33.754 +Jan 14 04:40:33.764: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-210b09fe-21e2-4aef-bef9-1d802f2820a2" in namespace "projected-3737" to be "running and ready" +Jan 14 04:40:33.767: INFO: Pod "pod-projected-configmaps-210b09fe-21e2-4aef-bef9-1d802f2820a2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.908414ms +Jan 14 04:40:33.767: INFO: The phase of Pod pod-projected-configmaps-210b09fe-21e2-4aef-bef9-1d802f2820a2 is Pending, waiting for it to be Running (with Ready = true) +Jan 14 04:40:35.774: INFO: Pod "pod-projected-configmaps-210b09fe-21e2-4aef-bef9-1d802f2820a2": Phase="Running", Reason="", readiness=true. Elapsed: 2.009343981s +Jan 14 04:40:35.774: INFO: The phase of Pod pod-projected-configmaps-210b09fe-21e2-4aef-bef9-1d802f2820a2 is Running (Ready = true) +Jan 14 04:40:35.774: INFO: Pod "pod-projected-configmaps-210b09fe-21e2-4aef-bef9-1d802f2820a2" satisfied condition "running and ready" +STEP: Deleting configmap cm-test-opt-del-142d0526-b1a1-4a5c-befe-76ce87702357 01/14/23 04:40:35.802 +STEP: Updating configmap cm-test-opt-upd-5535a944-3863-4205-86e1-0b7fe9031e39 01/14/23 04:40:35.807 +STEP: Creating configMap with name cm-test-opt-create-0252b87e-abf1-4035-9779-294f8f2c6b4a 01/14/23 04:40:35.815 +STEP: waiting to observe update in volume 01/14/23 04:40:35.819 +[AfterEach] [sig-storage] Projected configMap + test/e2e/framework/node/init/init.go:32 +Jan 14 04:40:39.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected configMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected configMap + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-3737" for this suite. 01/14/23 04:40:39.858 +------------------------------ +• [SLOW TEST] [6.144 seconds] +[sig-storage] Projected configMap +test/e2e/common/storage/framework.go:23 + optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:174 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected configMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:40:33.723 + Jan 14 04:40:33.723: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename projected 01/14/23 04:40:33.724 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:40:33.738 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:40:33.74 + [BeforeEach] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:31 + [It] optional updates should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/projected_configmap.go:174 + STEP: Creating configMap with name cm-test-opt-del-142d0526-b1a1-4a5c-befe-76ce87702357 01/14/23 04:40:33.746 + STEP: Creating configMap with name cm-test-opt-upd-5535a944-3863-4205-86e1-0b7fe9031e39 01/14/23 04:40:33.75 + STEP: Creating the pod 01/14/23 04:40:33.754 + Jan 14 04:40:33.764: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-210b09fe-21e2-4aef-bef9-1d802f2820a2" in namespace "projected-3737" to be "running and ready" + Jan 14 04:40:33.767: INFO: Pod "pod-projected-configmaps-210b09fe-21e2-4aef-bef9-1d802f2820a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.908414ms + Jan 14 04:40:33.767: INFO: The phase of Pod pod-projected-configmaps-210b09fe-21e2-4aef-bef9-1d802f2820a2 is Pending, waiting for it to be Running (with Ready = true) + Jan 14 04:40:35.774: INFO: Pod "pod-projected-configmaps-210b09fe-21e2-4aef-bef9-1d802f2820a2": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.009343981s + Jan 14 04:40:35.774: INFO: The phase of Pod pod-projected-configmaps-210b09fe-21e2-4aef-bef9-1d802f2820a2 is Running (Ready = true) + Jan 14 04:40:35.774: INFO: Pod "pod-projected-configmaps-210b09fe-21e2-4aef-bef9-1d802f2820a2" satisfied condition "running and ready" + STEP: Deleting configmap cm-test-opt-del-142d0526-b1a1-4a5c-befe-76ce87702357 01/14/23 04:40:35.802 + STEP: Updating configmap cm-test-opt-upd-5535a944-3863-4205-86e1-0b7fe9031e39 01/14/23 04:40:35.807 + STEP: Creating configMap with name cm-test-opt-create-0252b87e-abf1-4035-9779-294f8f2c6b4a 01/14/23 04:40:35.815 + STEP: waiting to observe update in volume 01/14/23 04:40:35.819 + [AfterEach] [sig-storage] Projected configMap + test/e2e/framework/node/init/init.go:32 + Jan 14 04:40:39.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected configMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected configMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected configMap + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-3737" for this suite. 01/14/23 04:40:39.858 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + updates the published spec when one version gets renamed [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:391 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:40:39.867 +Jan 14 04:40:39.867: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename crd-publish-openapi 01/14/23 04:40:39.868 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:40:39.882 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:40:39.884 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[It] updates the published spec when one version gets renamed [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:391 +STEP: set up a multi version CRD 01/14/23 04:40:39.886 +Jan 14 04:40:39.887: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: rename a version 01/14/23 04:40:43.719 +STEP: check the new version name is served 01/14/23 04:40:43.734 +STEP: check the old version name is removed 01/14/23 04:40:45.55 +STEP: check the other version is not changed 01/14/23 04:40:46.223 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Jan 14 04:40:49.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "crd-publish-openapi-1061" for this suite. 
01/14/23 04:40:49.354 +------------------------------ +• [SLOW TEST] [9.492 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + updates the published spec when one version gets renamed [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:391 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:40:39.867 + Jan 14 04:40:39.867: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename crd-publish-openapi 01/14/23 04:40:39.868 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:40:39.882 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:40:39.884 + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [It] updates the published spec when one version gets renamed [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:391 + STEP: set up a multi version CRD 01/14/23 04:40:39.886 + Jan 14 04:40:39.887: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: rename a version 01/14/23 04:40:43.719 + STEP: check the new version name is served 01/14/23 04:40:43.734 + STEP: check the old version name is removed 01/14/23 04:40:45.55 + STEP: check the other version is not changed 01/14/23 04:40:46.223 + [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Jan 14 04:40:49.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "crd-publish-openapi-1061" for this suite. 
01/14/23 04:40:49.354 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[sig-network] EndpointSlice + should have Endpoints and EndpointSlices pointing to API Server [Conformance] + test/e2e/network/endpointslice.go:66 +[BeforeEach] [sig-network] EndpointSlice + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:40:49.36 +Jan 14 04:40:49.360: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename endpointslice 01/14/23 04:40:49.361 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:40:49.388 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:40:49.39 +[BeforeEach] [sig-network] EndpointSlice + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] EndpointSlice + test/e2e/network/endpointslice.go:52 +[It] should have Endpoints and EndpointSlices pointing to API Server [Conformance] + test/e2e/network/endpointslice.go:66 +Jan 14 04:40:49.477: INFO: Endpoints addresses: [169.254.128.5] , ports: [60002] +Jan 14 04:40:49.477: INFO: EndpointSlices addresses: [169.254.128.5] , ports: [60002] +[AfterEach] [sig-network] EndpointSlice + test/e2e/framework/node/init/init.go:32 +Jan 14 04:40:49.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] EndpointSlice + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] EndpointSlice + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] EndpointSlice + tear down framework | framework.go:193 +STEP: Destroying namespace "endpointslice-1549" for this suite. 01/14/23 04:40:49.482 +------------------------------ +• [0.128 seconds] +[sig-network] EndpointSlice +test/e2e/network/common/framework.go:23 + should have Endpoints and EndpointSlices pointing to API Server [Conformance] + test/e2e/network/endpointslice.go:66 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] EndpointSlice + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:40:49.36 + Jan 14 04:40:49.360: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename endpointslice 01/14/23 04:40:49.361 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:40:49.388 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:40:49.39 + [BeforeEach] [sig-network] EndpointSlice + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] EndpointSlice + test/e2e/network/endpointslice.go:52 + [It] should have Endpoints and EndpointSlices pointing to API Server [Conformance] + test/e2e/network/endpointslice.go:66 + Jan 14 04:40:49.477: INFO: Endpoints addresses: [169.254.128.5] , ports: [60002] + Jan 14 04:40:49.477: INFO: EndpointSlices addresses: [169.254.128.5] , ports: [60002] + [AfterEach] [sig-network] EndpointSlice + test/e2e/framework/node/init/init.go:32 + Jan 14 04:40:49.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] EndpointSlice + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] EndpointSlice + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] EndpointSlice + tear down framework | framework.go:193 + STEP: Destroying namespace "endpointslice-1549" for this suite. 
01/14/23 04:40:49.482 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for intra-pod communication: http [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:82 +[BeforeEach] [sig-network] Networking + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:40:49.489 +Jan 14 04:40:49.489: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename pod-network-test 01/14/23 04:40:49.49 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:40:49.509 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:40:49.511 +[BeforeEach] [sig-network] Networking + test/e2e/framework/metrics/init/init.go:31 +[It] should function for intra-pod communication: http [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:82 +STEP: Performing setup for networking test in namespace pod-network-test-1528 01/14/23 04:40:49.513 +STEP: creating a selector 01/14/23 04:40:49.513 +STEP: Creating the service pods in kubernetes 01/14/23 04:40:49.513 +Jan 14 04:40:49.513: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +Jan 14 04:40:49.568: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "pod-network-test-1528" to be "running and ready" +Jan 14 04:40:49.571: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.862244ms +Jan 14 04:40:49.571: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) +Jan 14 04:40:51.576: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.007485159s +Jan 14 04:40:51.576: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jan 14 04:40:53.575: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.007421332s +Jan 14 04:40:53.575: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jan 14 04:40:55.576: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.008417297s +Jan 14 04:40:55.576: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jan 14 04:40:57.576: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.008442852s +Jan 14 04:40:57.577: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jan 14 04:40:59.576: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.007762022s +Jan 14 04:40:59.576: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jan 14 04:41:01.577: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.008910231s +Jan 14 04:41:01.577: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jan 14 04:41:03.575: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.007118319s +Jan 14 04:41:03.575: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jan 14 04:41:05.575: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.007222833s +Jan 14 04:41:05.575: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jan 14 04:41:07.576: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.00816928s +Jan 14 04:41:07.576: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jan 14 04:41:09.576: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 20.007943787s +Jan 14 04:41:09.576: INFO: The phase of Pod netserver-0 is Running (Ready = false) +Jan 14 04:41:11.575: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 22.007419207s +Jan 14 04:41:11.576: INFO: The phase of Pod netserver-0 is Running (Ready = true) +Jan 14 04:41:11.576: INFO: Pod "netserver-0" satisfied condition "running and ready" +Jan 14 04:41:11.579: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "pod-network-test-1528" to be "running and ready" +Jan 14 04:41:11.583: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 4.362265ms +Jan 14 04:41:11.583: INFO: The phase of Pod netserver-1 is Running (Ready = true) +Jan 14 04:41:11.583: INFO: Pod "netserver-1" satisfied condition "running and ready" +Jan 14 04:41:11.586: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "pod-network-test-1528" to be "running and ready" +Jan 14 04:41:11.589: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 2.672847ms +Jan 14 04:41:11.589: INFO: The phase of Pod netserver-2 is Running (Ready = true) +Jan 14 04:41:11.589: INFO: Pod "netserver-2" satisfied condition "running and ready" +STEP: Creating test pods 01/14/23 04:41:11.591 +Jan 14 04:41:11.599: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "pod-network-test-1528" to be "running" +Jan 14 04:41:11.601: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.758576ms +Jan 14 04:41:13.606: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.007619244s +Jan 14 04:41:13.606: INFO: Pod "test-container-pod" satisfied condition "running" +Jan 14 04:41:13.609: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 +Jan 14 04:41:13.609: INFO: Breadth first check of 10.52.1.57 on host 10.0.1.106... +Jan 14 04:41:13.612: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.52.1.58:9080/dial?request=hostname&protocol=http&host=10.52.1.57&port=8083&tries=1'] Namespace:pod-network-test-1528 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jan 14 04:41:13.612: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +Jan 14 04:41:13.613: INFO: ExecWithOptions: Clientset creation +Jan 14 04:41:13.613: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/pod-network-test-1528/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F10.52.1.58%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D10.52.1.57%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) +Jan 14 04:41:13.666: INFO: Waiting for responses: map[] +Jan 14 04:41:13.666: INFO: reached 10.52.1.57 after 0/1 tries +Jan 14 04:41:13.666: INFO: Breadth first check of 10.52.0.252 on host 10.0.1.212... 
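+(The "Breadth first check" probes above and below use the agnhost netexec /dial endpoint: the test execs a curl inside test-container-pod, whose agent on :9080 dials each netserver pod on :8083 and reports the hostname it reached. A minimal sketch of the same probe run by hand, with pod names, IPs and ports taken from this log and kubectl access to the cluster assumed:
+  # dial netserver pod 10.52.0.252:8083 once via the test pod's netexec agent
+  kubectl exec -n pod-network-test-1528 test-container-pod -c webserver -- \
+    /bin/sh -c "curl -g -q -s 'http://10.52.1.58:9080/dial?request=hostname&protocol=http&host=10.52.0.252&port=8083&tries=1'"
+An empty "Waiting for responses: map[]" afterwards means every expected hostname was collected, so the probe passed on the first try.)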
+Jan 14 04:41:13.669: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.52.1.58:9080/dial?request=hostname&protocol=http&host=10.52.0.252&port=8083&tries=1'] Namespace:pod-network-test-1528 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jan 14 04:41:13.669: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +Jan 14 04:41:13.670: INFO: ExecWithOptions: Clientset creation +Jan 14 04:41:13.670: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/pod-network-test-1528/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F10.52.1.58%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D10.52.0.252%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) +Jan 14 04:41:13.717: INFO: Waiting for responses: map[] +Jan 14 04:41:13.717: INFO: reached 10.52.0.252 after 0/1 tries +Jan 14 04:41:13.717: INFO: Breadth first check of 10.52.1.96 on host 10.0.1.99... +Jan 14 04:41:13.721: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.52.1.58:9080/dial?request=hostname&protocol=http&host=10.52.1.96&port=8083&tries=1'] Namespace:pod-network-test-1528 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jan 14 04:41:13.721: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +Jan 14 04:41:13.721: INFO: ExecWithOptions: Clientset creation +Jan 14 04:41:13.721: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/pod-network-test-1528/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F10.52.1.58%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D10.52.1.96%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) +Jan 14 04:41:13.769: INFO: Waiting for responses: map[] +Jan 14 04:41:13.770: INFO: reached 10.52.1.96 after 0/1 tries +Jan 14 04:41:13.770: INFO: Going to retry 0 out of 3 pods.... +[AfterEach] [sig-network] Networking + test/e2e/framework/node/init/init.go:32 +Jan 14 04:41:13.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Networking + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] Networking + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] Networking + tear down framework | framework.go:193 +STEP: Destroying namespace "pod-network-test-1528" for this suite. 
01/14/23 04:41:13.775 +------------------------------ +• [SLOW TEST] [24.292 seconds] +[sig-network] Networking +test/e2e/common/network/framework.go:23 + Granular Checks: Pods + test/e2e/common/network/networking.go:32 + should function for intra-pod communication: http [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:82 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Networking + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:40:49.489 + Jan 14 04:40:49.489: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename pod-network-test 01/14/23 04:40:49.49 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:40:49.509 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:40:49.511 + [BeforeEach] [sig-network] Networking + test/e2e/framework/metrics/init/init.go:31 + [It] should function for intra-pod communication: http [NodeConformance] [Conformance] + test/e2e/common/network/networking.go:82 + STEP: Performing setup for networking test in namespace pod-network-test-1528 01/14/23 04:40:49.513 + STEP: creating a selector 01/14/23 04:40:49.513 + STEP: Creating the service pods in kubernetes 01/14/23 04:40:49.513 + Jan 14 04:40:49.513: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable + Jan 14 04:40:49.568: INFO: Waiting up to 5m0s for pod "netserver-0" in namespace "pod-network-test-1528" to be "running and ready" + Jan 14 04:40:49.571: INFO: Pod "netserver-0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.862244ms + Jan 14 04:40:49.571: INFO: The phase of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) + Jan 14 04:40:51.576: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.007485159s + Jan 14 04:40:51.576: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jan 14 04:40:53.575: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.007421332s + Jan 14 04:40:53.575: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jan 14 04:40:55.576: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.008417297s + Jan 14 04:40:55.576: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jan 14 04:40:57.576: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.008442852s + Jan 14 04:40:57.577: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jan 14 04:40:59.576: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.007762022s + Jan 14 04:40:59.576: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jan 14 04:41:01.577: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.008910231s + Jan 14 04:41:01.577: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jan 14 04:41:03.575: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.007118319s + Jan 14 04:41:03.575: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jan 14 04:41:05.575: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.007222833s + Jan 14 04:41:05.575: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jan 14 04:41:07.576: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 18.00816928s + Jan 14 04:41:07.576: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jan 14 04:41:09.576: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.007943787s + Jan 14 04:41:09.576: INFO: The phase of Pod netserver-0 is Running (Ready = false) + Jan 14 04:41:11.575: INFO: Pod "netserver-0": Phase="Running", Reason="", readiness=true. Elapsed: 22.007419207s + Jan 14 04:41:11.576: INFO: The phase of Pod netserver-0 is Running (Ready = true) + Jan 14 04:41:11.576: INFO: Pod "netserver-0" satisfied condition "running and ready" + Jan 14 04:41:11.579: INFO: Waiting up to 5m0s for pod "netserver-1" in namespace "pod-network-test-1528" to be "running and ready" + Jan 14 04:41:11.583: INFO: Pod "netserver-1": Phase="Running", Reason="", readiness=true. Elapsed: 4.362265ms + Jan 14 04:41:11.583: INFO: The phase of Pod netserver-1 is Running (Ready = true) + Jan 14 04:41:11.583: INFO: Pod "netserver-1" satisfied condition "running and ready" + Jan 14 04:41:11.586: INFO: Waiting up to 5m0s for pod "netserver-2" in namespace "pod-network-test-1528" to be "running and ready" + Jan 14 04:41:11.589: INFO: Pod "netserver-2": Phase="Running", Reason="", readiness=true. Elapsed: 2.672847ms + Jan 14 04:41:11.589: INFO: The phase of Pod netserver-2 is Running (Ready = true) + Jan 14 04:41:11.589: INFO: Pod "netserver-2" satisfied condition "running and ready" + STEP: Creating test pods 01/14/23 04:41:11.591 + Jan 14 04:41:11.599: INFO: Waiting up to 5m0s for pod "test-container-pod" in namespace "pod-network-test-1528" to be "running" + Jan 14 04:41:11.601: INFO: Pod "test-container-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.758576ms + Jan 14 04:41:13.606: INFO: Pod "test-container-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.007619244s + Jan 14 04:41:13.606: INFO: Pod "test-container-pod" satisfied condition "running" + Jan 14 04:41:13.609: INFO: Setting MaxTries for pod polling to 39 for networking test based on endpoint count 3 + Jan 14 04:41:13.609: INFO: Breadth first check of 10.52.1.57 on host 10.0.1.106... + Jan 14 04:41:13.612: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.52.1.58:9080/dial?request=hostname&protocol=http&host=10.52.1.57&port=8083&tries=1'] Namespace:pod-network-test-1528 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jan 14 04:41:13.612: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + Jan 14 04:41:13.613: INFO: ExecWithOptions: Clientset creation + Jan 14 04:41:13.613: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/pod-network-test-1528/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F10.52.1.58%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D10.52.1.57%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) + Jan 14 04:41:13.666: INFO: Waiting for responses: map[] + Jan 14 04:41:13.666: INFO: reached 10.52.1.57 after 0/1 tries + Jan 14 04:41:13.666: INFO: Breadth first check of 10.52.0.252 on host 10.0.1.212... 
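+ (Each breadth-first check asks the test pod's netexec agent to dial a netserver pod and echo back the hostname it reached. For debugging, a netserver can also be queried directly; a sketch assuming kubectl access, with the netserver IP and port taken from this log:
+   # hit a netserver pod's /hostname handler directly; it should print the pod name
+   kubectl exec -n pod-network-test-1528 test-container-pod -c webserver -- \
+     /bin/sh -c "curl -s 'http://10.52.1.96:8083/hostname'"
+ The "reached ... after 0/1 tries" lines confirm each pod answered on the first attempt.)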
+ Jan 14 04:41:13.669: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.52.1.58:9080/dial?request=hostname&protocol=http&host=10.52.0.252&port=8083&tries=1'] Namespace:pod-network-test-1528 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jan 14 04:41:13.669: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + Jan 14 04:41:13.670: INFO: ExecWithOptions: Clientset creation + Jan 14 04:41:13.670: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/pod-network-test-1528/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F10.52.1.58%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D10.52.0.252%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) + Jan 14 04:41:13.717: INFO: Waiting for responses: map[] + Jan 14 04:41:13.717: INFO: reached 10.52.0.252 after 0/1 tries + Jan 14 04:41:13.717: INFO: Breadth first check of 10.52.1.96 on host 10.0.1.99... + Jan 14 04:41:13.721: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.52.1.58:9080/dial?request=hostname&protocol=http&host=10.52.1.96&port=8083&tries=1'] Namespace:pod-network-test-1528 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jan 14 04:41:13.721: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + Jan 14 04:41:13.721: INFO: ExecWithOptions: Clientset creation + Jan 14 04:41:13.721: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/pod-network-test-1528/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F10.52.1.58%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D10.52.1.96%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true) + Jan 14 04:41:13.769: INFO: Waiting for responses: map[] + Jan 14 04:41:13.770: INFO: reached 10.52.1.96 after 0/1 tries + Jan 14 04:41:13.770: INFO: Going to retry 0 out of 3 pods.... + [AfterEach] [sig-network] Networking + test/e2e/framework/node/init/init.go:32 + Jan 14 04:41:13.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Networking + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] Networking + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] Networking + tear down framework | framework.go:193 + STEP: Destroying namespace "pod-network-test-1528" for this suite. 
01/14/23 04:41:13.775 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-node] Secrets + should patch a secret [Conformance] + test/e2e/common/node/secrets.go:154 +[BeforeEach] [sig-node] Secrets + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:41:13.782 +Jan 14 04:41:13.782: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename secrets 01/14/23 04:41:13.783 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:41:13.796 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:41:13.798 +[BeforeEach] [sig-node] Secrets + test/e2e/framework/metrics/init/init.go:31 +[It] should patch a secret [Conformance] + test/e2e/common/node/secrets.go:154 +STEP: creating a secret 01/14/23 04:41:13.8 +STEP: listing secrets in all namespaces to ensure that there are more than zero 01/14/23 04:41:13.804 +STEP: patching the secret 01/14/23 04:41:13.808 +STEP: deleting the secret using a LabelSelector 01/14/23 04:41:13.815 +STEP: listing secrets in all namespaces, searching for label name and value in patch 01/14/23 04:41:13.822 +[AfterEach] [sig-node] Secrets + test/e2e/framework/node/init/init.go:32 +Jan 14 04:41:13.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Secrets + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Secrets + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Secrets + tear down framework | framework.go:193 +STEP: Destroying namespace "secrets-1866" for this suite. 01/14/23 04:41:13.83 +------------------------------ +• [0.059 seconds] +[sig-node] Secrets +test/e2e/common/node/framework.go:23 + should patch a secret [Conformance] + test/e2e/common/node/secrets.go:154 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Secrets + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:41:13.782 + Jan 14 04:41:13.782: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename secrets 01/14/23 04:41:13.783 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:41:13.796 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:41:13.798 + [BeforeEach] [sig-node] Secrets + test/e2e/framework/metrics/init/init.go:31 + [It] should patch a secret [Conformance] + test/e2e/common/node/secrets.go:154 + STEP: creating a secret 01/14/23 04:41:13.8 + STEP: listing secrets in all namespaces to ensure that there are more than zero 01/14/23 04:41:13.804 + STEP: patching the secret 01/14/23 04:41:13.808 + STEP: deleting the secret using a LabelSelector 01/14/23 04:41:13.815 + STEP: listing secrets in all namespaces, searching for label name and value in patch 01/14/23 04:41:13.822 + [AfterEach] [sig-node] Secrets + test/e2e/framework/node/init/init.go:32 + Jan 14 04:41:13.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Secrets + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Secrets + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Secrets + tear down framework | framework.go:193 + STEP: Destroying namespace "secrets-1866" for this suite. 
01/14/23 04:41:13.83 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-node] Downward API + should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:217 +[BeforeEach] [sig-node] Downward API + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:41:13.841 +Jan 14 04:41:13.841: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename downward-api 01/14/23 04:41:13.841 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:41:13.861 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:41:13.863 +[BeforeEach] [sig-node] Downward API + test/e2e/framework/metrics/init/init.go:31 +[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:217 +STEP: Creating a pod to test downward api env vars 01/14/23 04:41:13.866 +Jan 14 04:41:13.876: INFO: Waiting up to 5m0s for pod "downward-api-8ce0775c-c988-4144-b9a8-c4ef577d0a13" in namespace "downward-api-3562" to be "Succeeded or Failed" +Jan 14 04:41:13.879: INFO: Pod "downward-api-8ce0775c-c988-4144-b9a8-c4ef577d0a13": Phase="Pending", Reason="", readiness=false. Elapsed: 2.875685ms +Jan 14 04:41:15.883: INFO: Pod "downward-api-8ce0775c-c988-4144-b9a8-c4ef577d0a13": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00721848s +Jan 14 04:41:17.883: INFO: Pod "downward-api-8ce0775c-c988-4144-b9a8-c4ef577d0a13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006998893s +STEP: Saw pod success 01/14/23 04:41:17.883 +Jan 14 04:41:17.883: INFO: Pod "downward-api-8ce0775c-c988-4144-b9a8-c4ef577d0a13" satisfied condition "Succeeded or Failed" +Jan 14 04:41:17.886: INFO: Trying to get logs from node 10.0.1.99 pod downward-api-8ce0775c-c988-4144-b9a8-c4ef577d0a13 container dapi-container: +STEP: delete the pod 01/14/23 04:41:17.898 +Jan 14 04:41:17.911: INFO: Waiting for pod downward-api-8ce0775c-c988-4144-b9a8-c4ef577d0a13 to disappear +Jan 14 04:41:17.914: INFO: Pod downward-api-8ce0775c-c988-4144-b9a8-c4ef577d0a13 no longer exists +[AfterEach] [sig-node] Downward API + test/e2e/framework/node/init/init.go:32 +Jan 14 04:41:17.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Downward API + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Downward API + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Downward API + tear down framework | framework.go:193 +STEP: Destroying namespace "downward-api-3562" for this suite. 
01/14/23 04:41:17.919 +------------------------------ +• [4.084 seconds] +[sig-node] Downward API +test/e2e/common/node/framework.go:23 + should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:217 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Downward API + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:41:13.841 + Jan 14 04:41:13.841: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename downward-api 01/14/23 04:41:13.841 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:41:13.861 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:41:13.863 + [BeforeEach] [sig-node] Downward API + test/e2e/framework/metrics/init/init.go:31 + [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:217 + STEP: Creating a pod to test downward api env vars 01/14/23 04:41:13.866 + Jan 14 04:41:13.876: INFO: Waiting up to 5m0s for pod "downward-api-8ce0775c-c988-4144-b9a8-c4ef577d0a13" in namespace "downward-api-3562" to be "Succeeded or Failed" + Jan 14 04:41:13.879: INFO: Pod "downward-api-8ce0775c-c988-4144-b9a8-c4ef577d0a13": Phase="Pending", Reason="", readiness=false. Elapsed: 2.875685ms + Jan 14 04:41:15.883: INFO: Pod "downward-api-8ce0775c-c988-4144-b9a8-c4ef577d0a13": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00721848s + Jan 14 04:41:17.883: INFO: Pod "downward-api-8ce0775c-c988-4144-b9a8-c4ef577d0a13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006998893s + STEP: Saw pod success 01/14/23 04:41:17.883 + Jan 14 04:41:17.883: INFO: Pod "downward-api-8ce0775c-c988-4144-b9a8-c4ef577d0a13" satisfied condition "Succeeded or Failed" + Jan 14 04:41:17.886: INFO: Trying to get logs from node 10.0.1.99 pod downward-api-8ce0775c-c988-4144-b9a8-c4ef577d0a13 container dapi-container: + STEP: delete the pod 01/14/23 04:41:17.898 + Jan 14 04:41:17.911: INFO: Waiting for pod downward-api-8ce0775c-c988-4144-b9a8-c4ef577d0a13 to disappear + Jan 14 04:41:17.914: INFO: Pod downward-api-8ce0775c-c988-4144-b9a8-c4ef577d0a13 no longer exists + [AfterEach] [sig-node] Downward API + test/e2e/framework/node/init/init.go:32 + Jan 14 04:41:17.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Downward API + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Downward API + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Downward API + tear down framework | framework.go:193 + STEP: Destroying namespace "downward-api-3562" for this suite. 
01/14/23 04:41:17.919 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for services [Conformance] + test/e2e/network/dns.go:137 +[BeforeEach] [sig-network] DNS + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:41:17.928 +Jan 14 04:41:17.928: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename dns 01/14/23 04:41:17.929 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:41:17.94 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:41:17.942 +[BeforeEach] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:31 +[It] should provide DNS for services [Conformance] + test/e2e/network/dns.go:137 +STEP: Creating a test headless service 01/14/23 04:41:17.944 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4490.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4490.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4490.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4490.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4490.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4490.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4490.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4490.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4490.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4490.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4490.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4490.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 53.254.55.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.55.254.53_udp@PTR;check="$$(dig +tcp +noall +answer +search 53.254.55.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.55.254.53_tcp@PTR;sleep 1; done + 01/14/23 04:41:17.961 +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4490.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4490.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4490.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4490.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4490.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4490.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4490.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4490.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4490.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4490.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4490.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4490.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 53.254.55.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.55.254.53_udp@PTR;check="$$(dig +tcp +noall +answer +search 53.254.55.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.55.254.53_tcp@PTR;sleep 1; done + 01/14/23 04:41:17.961 +STEP: creating a pod to probe DNS 01/14/23 04:41:17.961 +STEP: submitting the pod to kubernetes 01/14/23 04:41:17.961 +Jan 14 04:41:17.974: INFO: Waiting up to 15m0s for pod "dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d" in namespace "dns-4490" to be "running" +Jan 14 04:41:17.977: INFO: Pod "dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.241496ms +Jan 14 04:41:19.982: INFO: Pod "dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.007985494s +Jan 14 04:41:19.982: INFO: Pod "dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d" satisfied condition "running" +STEP: retrieving the pod 01/14/23 04:41:19.982 +STEP: looking for the results for each expected name from probers 01/14/23 04:41:19.985 +Jan 14 04:41:19.995: INFO: Unable to read wheezy_udp@dns-test-service.dns-4490.svc.cluster.local from pod dns-4490/dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d: the server could not find the requested resource (get pods dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d) +Jan 14 04:41:19.998: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4490.svc.cluster.local from pod dns-4490/dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d: the server could not find the requested resource (get pods dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d) +Jan 14 04:41:20.002: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4490.svc.cluster.local from pod dns-4490/dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d: the server could not find the requested resource (get pods dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d) +Jan 14 04:41:20.005: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4490.svc.cluster.local from pod dns-4490/dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d: the server could not find the requested resource (get pods dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d) +Jan 14 04:41:20.021: INFO: Unable to read jessie_udp@dns-test-service.dns-4490.svc.cluster.local from pod dns-4490/dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d: the server could not find the requested resource (get pods dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d) +Jan 14 04:41:20.023: INFO: Unable to read jessie_tcp@dns-test-service.dns-4490.svc.cluster.local from pod dns-4490/dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d: the server could not find the requested resource (get pods dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d) +Jan 14 04:41:20.026: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4490.svc.cluster.local from pod dns-4490/dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d: the server could not find the requested resource (get pods dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d) +Jan 14 04:41:20.029: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4490.svc.cluster.local from pod dns-4490/dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d: the server could not find the requested resource (get pods dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d) +Jan 14 04:41:20.041: INFO: Lookups using dns-4490/dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d failed for: [wheezy_udp@dns-test-service.dns-4490.svc.cluster.local wheezy_tcp@dns-test-service.dns-4490.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4490.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4490.svc.cluster.local jessie_udp@dns-test-service.dns-4490.svc.cluster.local jessie_tcp@dns-test-service.dns-4490.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4490.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4490.svc.cluster.local] + +Jan 14 04:41:25.097: INFO: DNS probes using dns-4490/dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d succeeded + +STEP: deleting the pod 01/14/23 04:41:25.097 +STEP: deleting the test service 01/14/23 04:41:25.114 +STEP: deleting the test headless service 01/14/23 04:41:25.293 +[AfterEach] [sig-network] DNS + test/e2e/framework/node/init/init.go:32 +Jan 14 04:41:25.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] DNS + 
test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] DNS + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] DNS + tear down framework | framework.go:193 +STEP: Destroying namespace "dns-4490" for this suite. 01/14/23 04:41:25.335 +------------------------------ +• [SLOW TEST] [7.413 seconds] +[sig-network] DNS +test/e2e/network/common/framework.go:23 + should provide DNS for services [Conformance] + test/e2e/network/dns.go:137 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] DNS + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:41:17.928 + Jan 14 04:41:17.928: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename dns 01/14/23 04:41:17.929 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:41:17.94 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:41:17.942 + [BeforeEach] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:31 + [It] should provide DNS for services [Conformance] + test/e2e/network/dns.go:137 + STEP: Creating a test headless service 01/14/23 04:41:17.944 + STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4490.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4490.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4490.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4490.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4490.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4490.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4490.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4490.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4490.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4490.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4490.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4490.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 53.254.55.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.55.254.53_udp@PTR;check="$$(dig +tcp +noall +answer +search 53.254.55.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.55.254.53_tcp@PTR;sleep 1; done + 01/14/23 04:41:17.961 + STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4490.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4490.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4490.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4490.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4490.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4490.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4490.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4490.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4490.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4490.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4490.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4490.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 53.254.55.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.55.254.53_udp@PTR;check="$$(dig +tcp +noall +answer +search 53.254.55.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.55.254.53_tcp@PTR;sleep 1; done + 01/14/23 04:41:17.961 + STEP: creating a pod to probe DNS 01/14/23 04:41:17.961 + STEP: submitting the pod to kubernetes 01/14/23 04:41:17.961 + Jan 14 04:41:17.974: INFO: Waiting up to 15m0s for pod "dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d" in namespace "dns-4490" to be "running" + Jan 14 04:41:17.977: INFO: Pod "dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.241496ms + Jan 14 04:41:19.982: INFO: Pod "dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.007985494s + Jan 14 04:41:19.982: INFO: Pod "dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d" satisfied condition "running" + STEP: retrieving the pod 01/14/23 04:41:19.982 + STEP: looking for the results for each expected name from probers 01/14/23 04:41:19.985 + Jan 14 04:41:19.995: INFO: Unable to read wheezy_udp@dns-test-service.dns-4490.svc.cluster.local from pod dns-4490/dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d: the server could not find the requested resource (get pods dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d) + Jan 14 04:41:19.998: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4490.svc.cluster.local from pod dns-4490/dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d: the server could not find the requested resource (get pods dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d) + Jan 14 04:41:20.002: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4490.svc.cluster.local from pod dns-4490/dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d: the server could not find the requested resource (get pods dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d) + Jan 14 04:41:20.005: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4490.svc.cluster.local from pod dns-4490/dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d: the server could not find the requested resource (get pods dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d) + Jan 14 04:41:20.021: INFO: Unable to read jessie_udp@dns-test-service.dns-4490.svc.cluster.local from pod dns-4490/dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d: the server could not find the requested resource (get pods dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d) + Jan 14 04:41:20.023: INFO: Unable to read jessie_tcp@dns-test-service.dns-4490.svc.cluster.local from pod dns-4490/dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d: the server could not find the requested resource (get pods dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d) + Jan 14 04:41:20.026: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4490.svc.cluster.local from pod dns-4490/dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d: the server could not find the requested resource (get pods dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d) + Jan 14 04:41:20.029: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4490.svc.cluster.local from pod dns-4490/dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d: the server could not find the requested resource (get pods dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d) + Jan 14 04:41:20.041: INFO: Lookups using dns-4490/dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d failed for: [wheezy_udp@dns-test-service.dns-4490.svc.cluster.local wheezy_tcp@dns-test-service.dns-4490.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4490.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4490.svc.cluster.local jessie_udp@dns-test-service.dns-4490.svc.cluster.local jessie_tcp@dns-test-service.dns-4490.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4490.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4490.svc.cluster.local] + + Jan 14 04:41:25.097: INFO: DNS probes using dns-4490/dns-test-93890cbf-4f31-4dc7-8d3f-5992e6ef3e0d succeeded + + STEP: deleting the pod 01/14/23 04:41:25.097 + STEP: deleting the test service 01/14/23 04:41:25.114 + STEP: deleting the test headless service 01/14/23 04:41:25.293 + [AfterEach] [sig-network] DNS + test/e2e/framework/node/init/init.go:32 + Jan 14 04:41:25.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] DNS + 
test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] DNS + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] DNS + tear down framework | framework.go:193 + STEP: Destroying namespace "dns-4490" for this suite. 01/14/23 04:41:25.335 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should succeed in writing subpaths in container [Slow] [Conformance] + test/e2e/common/node/expansion.go:297 +[BeforeEach] [sig-node] Variable Expansion + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:41:25.342 +Jan 14 04:41:25.342: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename var-expansion 01/14/23 04:41:25.343 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:41:25.419 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:41:25.421 +[BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:31 +[It] should succeed in writing subpaths in container [Slow] [Conformance] + test/e2e/common/node/expansion.go:297 +STEP: creating the pod 01/14/23 04:41:25.424 +STEP: waiting for pod running 01/14/23 04:41:25.433 +Jan 14 04:41:25.433: INFO: Waiting up to 2m0s for pod "var-expansion-2682cebd-511e-45c5-a7b7-996b5bf748e1" in namespace "var-expansion-2712" to be "running" +Jan 14 04:41:25.436: INFO: Pod "var-expansion-2682cebd-511e-45c5-a7b7-996b5bf748e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.923135ms +Jan 14 04:41:27.441: INFO: Pod "var-expansion-2682cebd-511e-45c5-a7b7-996b5bf748e1": Phase="Running", Reason="", readiness=true. Elapsed: 2.007889555s +Jan 14 04:41:27.441: INFO: Pod "var-expansion-2682cebd-511e-45c5-a7b7-996b5bf748e1" satisfied condition "running" +STEP: creating a file in subpath 01/14/23 04:41:27.441 +Jan 14 04:41:27.444: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-2712 PodName:var-expansion-2682cebd-511e-45c5-a7b7-996b5bf748e1 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jan 14 04:41:27.444: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +Jan 14 04:41:27.445: INFO: ExecWithOptions: Clientset creation +Jan 14 04:41:27.445: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/var-expansion-2712/pods/var-expansion-2682cebd-511e-45c5-a7b7-996b5bf748e1/exec?command=%2Fbin%2Fsh&command=-c&command=touch+%2Fvolume_mount%2Fmypath%2Ffoo%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true) +STEP: test for file in mounted path 01/14/23 04:41:27.486 +Jan 14 04:41:27.489: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-2712 PodName:var-expansion-2682cebd-511e-45c5-a7b7-996b5bf748e1 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jan 14 04:41:27.489: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +Jan 14 04:41:27.490: INFO: ExecWithOptions: Clientset creation +Jan 14 04:41:27.490: INFO: ExecWithOptions: execute(POST 
https://10.55.252.1:443/api/v1/namespaces/var-expansion-2712/pods/var-expansion-2682cebd-511e-45c5-a7b7-996b5bf748e1/exec?command=%2Fbin%2Fsh&command=-c&command=test+-f+%2Fsubpath_mount%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true) +STEP: updating the annotation value 01/14/23 04:41:27.534 +Jan 14 04:41:28.046: INFO: Successfully updated pod "var-expansion-2682cebd-511e-45c5-a7b7-996b5bf748e1" +STEP: waiting for annotated pod running 01/14/23 04:41:28.046 +Jan 14 04:41:28.046: INFO: Waiting up to 2m0s for pod "var-expansion-2682cebd-511e-45c5-a7b7-996b5bf748e1" in namespace "var-expansion-2712" to be "running" +Jan 14 04:41:28.050: INFO: Pod "var-expansion-2682cebd-511e-45c5-a7b7-996b5bf748e1": Phase="Running", Reason="", readiness=true. Elapsed: 3.771977ms +Jan 14 04:41:28.050: INFO: Pod "var-expansion-2682cebd-511e-45c5-a7b7-996b5bf748e1" satisfied condition "running" +STEP: deleting the pod gracefully 01/14/23 04:41:28.05 +Jan 14 04:41:28.050: INFO: Deleting pod "var-expansion-2682cebd-511e-45c5-a7b7-996b5bf748e1" in namespace "var-expansion-2712" +Jan 14 04:41:28.057: INFO: Wait up to 5m0s for pod "var-expansion-2682cebd-511e-45c5-a7b7-996b5bf748e1" to be fully deleted +[AfterEach] [sig-node] Variable Expansion + test/e2e/framework/node/init/init.go:32 +Jan 14 04:42:02.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Variable Expansion + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Variable Expansion + tear down framework | framework.go:193 +STEP: Destroying namespace "var-expansion-2712" for this suite. 01/14/23 04:42:02.071 +------------------------------ +• [SLOW TEST] [36.735 seconds] +[sig-node] Variable Expansion +test/e2e/common/node/framework.go:23 + should succeed in writing subpaths in container [Slow] [Conformance] + test/e2e/common/node/expansion.go:297 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Variable Expansion + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:41:25.342 + Jan 14 04:41:25.342: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename var-expansion 01/14/23 04:41:25.343 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:41:25.419 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:41:25.421 + [BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:31 + [It] should succeed in writing subpaths in container [Slow] [Conformance] + test/e2e/common/node/expansion.go:297 + STEP: creating the pod 01/14/23 04:41:25.424 + STEP: waiting for pod running 01/14/23 04:41:25.433 + Jan 14 04:41:25.433: INFO: Waiting up to 2m0s for pod "var-expansion-2682cebd-511e-45c5-a7b7-996b5bf748e1" in namespace "var-expansion-2712" to be "running" + Jan 14 04:41:25.436: INFO: Pod "var-expansion-2682cebd-511e-45c5-a7b7-996b5bf748e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.923135ms + Jan 14 04:41:27.441: INFO: Pod "var-expansion-2682cebd-511e-45c5-a7b7-996b5bf748e1": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.007889555s + Jan 14 04:41:27.441: INFO: Pod "var-expansion-2682cebd-511e-45c5-a7b7-996b5bf748e1" satisfied condition "running" + STEP: creating a file in subpath 01/14/23 04:41:27.441 + Jan 14 04:41:27.444: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-2712 PodName:var-expansion-2682cebd-511e-45c5-a7b7-996b5bf748e1 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jan 14 04:41:27.444: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + Jan 14 04:41:27.445: INFO: ExecWithOptions: Clientset creation + Jan 14 04:41:27.445: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/var-expansion-2712/pods/var-expansion-2682cebd-511e-45c5-a7b7-996b5bf748e1/exec?command=%2Fbin%2Fsh&command=-c&command=touch+%2Fvolume_mount%2Fmypath%2Ffoo%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true) + STEP: test for file in mounted path 01/14/23 04:41:27.486 + Jan 14 04:41:27.489: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-2712 PodName:var-expansion-2682cebd-511e-45c5-a7b7-996b5bf748e1 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jan 14 04:41:27.489: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + Jan 14 04:41:27.490: INFO: ExecWithOptions: Clientset creation + Jan 14 04:41:27.490: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/var-expansion-2712/pods/var-expansion-2682cebd-511e-45c5-a7b7-996b5bf748e1/exec?command=%2Fbin%2Fsh&command=-c&command=test+-f+%2Fsubpath_mount%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true) + STEP: updating the annotation value 01/14/23 04:41:27.534 + Jan 14 04:41:28.046: INFO: Successfully updated pod "var-expansion-2682cebd-511e-45c5-a7b7-996b5bf748e1" + STEP: waiting for annotated pod running 01/14/23 04:41:28.046 + Jan 14 04:41:28.046: INFO: Waiting up to 2m0s for pod "var-expansion-2682cebd-511e-45c5-a7b7-996b5bf748e1" in namespace "var-expansion-2712" to be "running" + Jan 14 04:41:28.050: INFO: Pod "var-expansion-2682cebd-511e-45c5-a7b7-996b5bf748e1": Phase="Running", Reason="", readiness=true. Elapsed: 3.771977ms + Jan 14 04:41:28.050: INFO: Pod "var-expansion-2682cebd-511e-45c5-a7b7-996b5bf748e1" satisfied condition "running" + STEP: deleting the pod gracefully 01/14/23 04:41:28.05 + Jan 14 04:41:28.050: INFO: Deleting pod "var-expansion-2682cebd-511e-45c5-a7b7-996b5bf748e1" in namespace "var-expansion-2712" + Jan 14 04:41:28.057: INFO: Wait up to 5m0s for pod "var-expansion-2682cebd-511e-45c5-a7b7-996b5bf748e1" to be fully deleted + [AfterEach] [sig-node] Variable Expansion + test/e2e/framework/node/init/init.go:32 + Jan 14 04:42:02.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Variable Expansion + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Variable Expansion + tear down framework | framework.go:193 + STEP: Destroying namespace "var-expansion-2712" for this suite. 
01/14/23 04:42:02.071 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + patching/updating a mutating webhook should work [Conformance] + test/e2e/apimachinery/webhook.go:508 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:42:02.078 +Jan 14 04:42:02.078: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename webhook 01/14/23 04:42:02.078 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:42:02.093 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:42:02.095 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 01/14/23 04:42:02.112 +STEP: Create role binding to let webhook read extension-apiserver-authentication 01/14/23 04:42:02.801 +STEP: Deploying the webhook pod 01/14/23 04:42:02.855 +STEP: Wait for the deployment to be ready 01/14/23 04:42:02.869 +Jan 14 04:42:02.997: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created +STEP: Deploying the webhook service 01/14/23 04:42:05.008 +STEP: Verifying the service has paired with the endpoint 01/14/23 04:42:05.019 +Jan 14 04:42:06.019: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] patching/updating a mutating webhook should work [Conformance] + test/e2e/apimachinery/webhook.go:508 +STEP: Creating a mutating webhook configuration 01/14/23 04:42:06.023 +STEP: Updating a mutating webhook configuration's rules to not include the create operation 01/14/23 04:42:06.041 +STEP: Creating a configMap that should not be mutated 01/14/23 04:42:06.05 +STEP: Patching a mutating webhook configuration's rules to include the create operation 01/14/23 04:42:06.06 +STEP: Creating a configMap that should be mutated 01/14/23 04:42:06.07 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Jan 14 04:42:06.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "webhook-2676" for this suite. 01/14/23 04:42:06.135 +STEP: Destroying namespace "webhook-2676-markers" for this suite. 
01/14/23 04:42:06.144 +------------------------------ +• [4.072 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + patching/updating a mutating webhook should work [Conformance] + test/e2e/apimachinery/webhook.go:508 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:42:02.078 + Jan 14 04:42:02.078: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename webhook 01/14/23 04:42:02.078 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:42:02.093 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:42:02.095 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 01/14/23 04:42:02.112 + STEP: Create role binding to let webhook read extension-apiserver-authentication 01/14/23 04:42:02.801 + STEP: Deploying the webhook pod 01/14/23 04:42:02.855 + STEP: Wait for the deployment to be ready 01/14/23 04:42:02.869 + Jan 14 04:42:02.997: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created + STEP: Deploying the webhook service 01/14/23 04:42:05.008 + STEP: Verifying the service has paired with the endpoint 01/14/23 04:42:05.019 + Jan 14 04:42:06.019: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] patching/updating a mutating webhook should work [Conformance] + test/e2e/apimachinery/webhook.go:508 + STEP: Creating a mutating webhook configuration 01/14/23 04:42:06.023 + STEP: Updating a mutating webhook configuration's rules to not include the create operation 01/14/23 04:42:06.041 + STEP: Creating a configMap that should not be mutated 01/14/23 04:42:06.05 + STEP: Patching a mutating webhook configuration's rules to include the create operation 01/14/23 04:42:06.06 + STEP: Creating a configMap that should be mutated 01/14/23 04:42:06.07 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Jan 14 04:42:06.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "webhook-2676" for this suite. 01/14/23 04:42:06.135 + STEP: Destroying namespace "webhook-2676-markers" for this suite. 01/14/23 04:42:06.144 + << End Captured GinkgoWriter Output +------------------------------ +[sig-api-machinery] ResourceQuota + should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:392 +[BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:42:06.15 +Jan 14 04:42:06.150: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename resourcequota 01/14/23 04:42:06.151 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:42:06.165 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:42:06.167 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 +[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] + test/e2e/apimachinery/resource_quota.go:392 +STEP: Counting existing ResourceQuota 01/14/23 04:42:06.17 +STEP: Creating a ResourceQuota 01/14/23 04:42:11.173 +STEP: Ensuring resource quota status is calculated 01/14/23 04:42:11.178 +STEP: Creating a ReplicationController 01/14/23 04:42:13.183 +STEP: Ensuring resource quota status captures replication controller creation 01/14/23 04:42:13.196 +STEP: Deleting a ReplicationController 01/14/23 04:42:15.201 +STEP: Ensuring resource quota status released usage 01/14/23 04:42:15.207 +[AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/node/init/init.go:32 +Jan 14 04:42:17.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + tear down framework | framework.go:193 +STEP: Destroying namespace "resourcequota-6586" for this suite. 01/14/23 04:42:17.217 +------------------------------ +• [SLOW TEST] [11.073 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should create a ResourceQuota and capture the life of a replication controller. [Conformance] + test/e2e/apimachinery/resource_quota.go:392 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:42:06.15 + Jan 14 04:42:06.150: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename resourcequota 01/14/23 04:42:06.151 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:42:06.165 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:42:06.167 + [BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 + [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:392 + STEP: Counting existing ResourceQuota 01/14/23 04:42:06.17 + STEP: Creating a ResourceQuota 01/14/23 04:42:11.173 + STEP: Ensuring resource quota status is calculated 01/14/23 04:42:11.178 + STEP: Creating a ReplicationController 01/14/23 04:42:13.183 + STEP: Ensuring resource quota status captures replication controller creation 01/14/23 04:42:13.196 + STEP: Deleting a ReplicationController 01/14/23 04:42:15.201 + STEP: Ensuring resource quota status released usage 01/14/23 04:42:15.207 + [AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/node/init/init.go:32 + Jan 14 04:42:17.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + tear down framework | framework.go:193 + STEP: Destroying namespace "resourcequota-6586" for this suite. 01/14/23 04:42:17.217 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSS +------------------------------ +[sig-cli] Kubectl client Update Demo + should create and stop a replication controller [Conformance] + test/e2e/kubectl/kubectl.go:339 +[BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:42:17.224 +Jan 14 04:42:17.224: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename kubectl 01/14/23 04:42:17.225 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:42:17.238 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:42:17.24 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[BeforeEach] Update Demo + test/e2e/kubectl/kubectl.go:326 +[It] should create and stop a replication controller [Conformance] + test/e2e/kubectl/kubectl.go:339 +STEP: creating a replication controller 01/14/23 04:42:17.242 +Jan 14 04:42:17.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-7926 create -f -' +Jan 14 04:42:17.800: INFO: stderr: "" +Jan 14 04:42:17.800: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" +STEP: waiting for all containers in name=update-demo pods to come up. 01/14/23 04:42:17.8 +Jan 14 04:42:17.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-7926 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Jan 14 04:42:17.867: INFO: stderr: "" +Jan 14 04:42:17.867: INFO: stdout: "update-demo-nautilus-mdpm2 update-demo-nautilus-xx5cv " +Jan 14 04:42:17.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-7926 get pods update-demo-nautilus-mdpm2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' +Jan 14 04:42:17.930: INFO: stderr: "" +Jan 14 04:42:17.930: INFO: stdout: "" +Jan 14 04:42:17.930: INFO: update-demo-nautilus-mdpm2 is created but not running +Jan 14 04:42:22.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-7926 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' +Jan 14 04:42:23.001: INFO: stderr: "" +Jan 14 04:42:23.001: INFO: stdout: "update-demo-nautilus-mdpm2 update-demo-nautilus-xx5cv " +Jan 14 04:42:23.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-7926 get pods update-demo-nautilus-mdpm2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Jan 14 04:42:23.066: INFO: stderr: "" +Jan 14 04:42:23.066: INFO: stdout: "true" +Jan 14 04:42:23.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-7926 get pods update-demo-nautilus-mdpm2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Jan 14 04:42:23.130: INFO: stderr: "" +Jan 14 04:42:23.130: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" +Jan 14 04:42:23.130: INFO: validating pod update-demo-nautilus-mdpm2 +Jan 14 04:42:23.135: INFO: got data: { + "image": "nautilus.jpg" +} + +Jan 14 04:42:23.135: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jan 14 04:42:23.135: INFO: update-demo-nautilus-mdpm2 is verified up and running +Jan 14 04:42:23.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-7926 get pods update-demo-nautilus-xx5cv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' +Jan 14 04:42:23.197: INFO: stderr: "" +Jan 14 04:42:23.197: INFO: stdout: "true" +Jan 14 04:42:23.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-7926 get pods update-demo-nautilus-xx5cv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' +Jan 14 04:42:23.259: INFO: stderr: "" +Jan 14 04:42:23.259: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" +Jan 14 04:42:23.259: INFO: validating pod update-demo-nautilus-xx5cv +Jan 14 04:42:23.263: INFO: got data: { + "image": "nautilus.jpg" +} + +Jan 14 04:42:23.263: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jan 14 04:42:23.263: INFO: update-demo-nautilus-xx5cv is verified up and running +STEP: using delete to clean up resources 01/14/23 04:42:23.263 +Jan 14 04:42:23.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-7926 delete --grace-period=0 --force -f -' +Jan 14 04:42:23.334: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" +Jan 14 04:42:23.334: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" +Jan 14 04:42:23.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-7926 get rc,svc -l name=update-demo --no-headers' +Jan 14 04:42:23.405: INFO: stderr: "No resources found in kubectl-7926 namespace.\n" +Jan 14 04:42:23.405: INFO: stdout: "" +Jan 14 04:42:23.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-7926 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Jan 14 04:42:23.470: INFO: stderr: "" +Jan 14 04:42:23.470: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 +Jan 14 04:42:23.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 +STEP: Destroying namespace "kubectl-7926" for this suite. 01/14/23 04:42:23.476 +------------------------------ +• [SLOW TEST] [6.259 seconds] +[sig-cli] Kubectl client +test/e2e/kubectl/framework.go:23 + Update Demo + test/e2e/kubectl/kubectl.go:324 + should create and stop a replication controller [Conformance] + test/e2e/kubectl/kubectl.go:339 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:42:17.224 + Jan 14 04:42:17.224: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename kubectl 01/14/23 04:42:17.225 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:42:17.238 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:42:17.24 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 + [BeforeEach] Update Demo + test/e2e/kubectl/kubectl.go:326 + [It] should create and stop a replication controller [Conformance] + test/e2e/kubectl/kubectl.go:339 + STEP: creating a replication controller 01/14/23 04:42:17.242 + Jan 14 04:42:17.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-7926 create -f -' + Jan 14 04:42:17.800: INFO: stderr: "" + Jan 14 04:42:17.800: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" + STEP: waiting for all containers in name=update-demo pods to come up. 01/14/23 04:42:17.8 + Jan 14 04:42:17.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-7926 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Jan 14 04:42:17.867: INFO: stderr: "" + Jan 14 04:42:17.867: INFO: stdout: "update-demo-nautilus-mdpm2 update-demo-nautilus-xx5cv " + Jan 14 04:42:17.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-7926 get pods update-demo-nautilus-mdpm2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' + Jan 14 04:42:17.930: INFO: stderr: "" + Jan 14 04:42:17.930: INFO: stdout: "" + Jan 14 04:42:17.930: INFO: update-demo-nautilus-mdpm2 is created but not running + Jan 14 04:42:22.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-7926 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' + Jan 14 04:42:23.001: INFO: stderr: "" + Jan 14 04:42:23.001: INFO: stdout: "update-demo-nautilus-mdpm2 update-demo-nautilus-xx5cv " + Jan 14 04:42:23.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-7926 get pods update-demo-nautilus-mdpm2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Jan 14 04:42:23.066: INFO: stderr: "" + Jan 14 04:42:23.066: INFO: stdout: "true" + Jan 14 04:42:23.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-7926 get pods update-demo-nautilus-mdpm2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' + Jan 14 04:42:23.130: INFO: stderr: "" + Jan 14 04:42:23.130: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" + Jan 14 04:42:23.130: INFO: validating pod update-demo-nautilus-mdpm2 + Jan 14 04:42:23.135: INFO: got data: { + "image": "nautilus.jpg" + } + + Jan 14 04:42:23.135: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . + Jan 14 04:42:23.135: INFO: update-demo-nautilus-mdpm2 is verified up and running + Jan 14 04:42:23.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-7926 get pods update-demo-nautilus-xx5cv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' + Jan 14 04:42:23.197: INFO: stderr: "" + Jan 14 04:42:23.197: INFO: stdout: "true" + Jan 14 04:42:23.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-7926 get pods update-demo-nautilus-xx5cv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' + Jan 14 04:42:23.259: INFO: stderr: "" + Jan 14 04:42:23.259: INFO: stdout: "registry.k8s.io/e2e-test-images/nautilus:1.7" + Jan 14 04:42:23.259: INFO: validating pod update-demo-nautilus-xx5cv + Jan 14 04:42:23.263: INFO: got data: { + "image": "nautilus.jpg" + } + + Jan 14 04:42:23.263: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . + Jan 14 04:42:23.263: INFO: update-demo-nautilus-xx5cv is verified up and running + STEP: using delete to clean up resources 01/14/23 04:42:23.263 + Jan 14 04:42:23.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-7926 delete --grace-period=0 --force -f -' + Jan 14 04:42:23.334: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" + Jan 14 04:42:23.334: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" + Jan 14 04:42:23.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-7926 get rc,svc -l name=update-demo --no-headers' + Jan 14 04:42:23.405: INFO: stderr: "No resources found in kubectl-7926 namespace.\n" + Jan 14 04:42:23.405: INFO: stdout: "" + Jan 14 04:42:23.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-7926 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' + Jan 14 04:42:23.470: INFO: stderr: "" + Jan 14 04:42:23.470: INFO: stdout: "" + [AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 + Jan 14 04:42:23.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 + STEP: Destroying namespace "kubectl-7926" for this suite. 01/14/23 04:42:23.476 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:97 +[BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:42:23.483 +Jan 14 04:42:23.483: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename emptydir 01/14/23 04:42:23.484 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:42:23.497 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:42:23.5 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 +[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:97 +STEP: Creating a pod to test emptydir 0644 on tmpfs 01/14/23 04:42:23.502 +Jan 14 04:42:23.511: INFO: Waiting up to 5m0s for pod "pod-040b17df-40c2-42c9-b3eb-3ac5c7e7f26c" in namespace "emptydir-5676" to be "Succeeded or Failed" +Jan 14 04:42:23.514: INFO: Pod "pod-040b17df-40c2-42c9-b3eb-3ac5c7e7f26c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.907094ms +Jan 14 04:42:25.519: INFO: Pod "pod-040b17df-40c2-42c9-b3eb-3ac5c7e7f26c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007804608s +Jan 14 04:42:27.519: INFO: Pod "pod-040b17df-40c2-42c9-b3eb-3ac5c7e7f26c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008004659s +STEP: Saw pod success 01/14/23 04:42:27.519 +Jan 14 04:42:27.519: INFO: Pod "pod-040b17df-40c2-42c9-b3eb-3ac5c7e7f26c" satisfied condition "Succeeded or Failed" +Jan 14 04:42:27.522: INFO: Trying to get logs from node 10.0.1.106 pod pod-040b17df-40c2-42c9-b3eb-3ac5c7e7f26c container test-container: +STEP: delete the pod 01/14/23 04:42:27.534 +Jan 14 04:42:27.547: INFO: Waiting for pod pod-040b17df-40c2-42c9-b3eb-3ac5c7e7f26c to disappear +Jan 14 04:42:27.550: INFO: Pod pod-040b17df-40c2-42c9-b3eb-3ac5c7e7f26c no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 +Jan 14 04:42:27.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 +STEP: Destroying namespace "emptydir-5676" for this suite. 01/14/23 04:42:27.555 +------------------------------ +• [4.078 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:97 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:42:23.483 + Jan 14 04:42:23.483: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename emptydir 01/14/23 04:42:23.484 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:42:23.497 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:42:23.5 + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 + [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:97 + STEP: Creating a pod to test emptydir 0644 on tmpfs 01/14/23 04:42:23.502 + Jan 14 04:42:23.511: INFO: Waiting up to 5m0s for pod "pod-040b17df-40c2-42c9-b3eb-3ac5c7e7f26c" in namespace "emptydir-5676" to be "Succeeded or Failed" + Jan 14 04:42:23.514: INFO: Pod "pod-040b17df-40c2-42c9-b3eb-3ac5c7e7f26c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.907094ms + Jan 14 04:42:25.519: INFO: Pod "pod-040b17df-40c2-42c9-b3eb-3ac5c7e7f26c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007804608s + Jan 14 04:42:27.519: INFO: Pod "pod-040b17df-40c2-42c9-b3eb-3ac5c7e7f26c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008004659s + STEP: Saw pod success 01/14/23 04:42:27.519 + Jan 14 04:42:27.519: INFO: Pod "pod-040b17df-40c2-42c9-b3eb-3ac5c7e7f26c" satisfied condition "Succeeded or Failed" + Jan 14 04:42:27.522: INFO: Trying to get logs from node 10.0.1.106 pod pod-040b17df-40c2-42c9-b3eb-3ac5c7e7f26c container test-container: + STEP: delete the pod 01/14/23 04:42:27.534 + Jan 14 04:42:27.547: INFO: Waiting for pod pod-040b17df-40c2-42c9-b3eb-3ac5c7e7f26c to disappear + Jan 14 04:42:27.550: INFO: Pod pod-040b17df-40c2-42c9-b3eb-3ac5c7e7f26c no longer exists + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 + Jan 14 04:42:27.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 + STEP: Destroying namespace "emptydir-5676" for this suite. 01/14/23 04:42:27.555 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:89 +[BeforeEach] [sig-storage] ConfigMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:42:27.562 +Jan 14 04:42:27.562: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename configmap 01/14/23 04:42:27.562 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:42:27.575 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:42:27.577 +[BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:89 +STEP: Creating configMap with name configmap-test-volume-map-c9a2abb8-69c5-47ea-8aeb-ab426d233715 01/14/23 04:42:27.579 +STEP: Creating a pod to test consume configMaps 01/14/23 04:42:27.584 +Jan 14 04:42:27.598: INFO: Waiting up to 5m0s for pod "pod-configmaps-9a029959-702b-45ec-b9c2-90d95d9eeecd" in namespace "configmap-6166" to be "Succeeded or Failed" +Jan 14 04:42:27.601: INFO: Pod "pod-configmaps-9a029959-702b-45ec-b9c2-90d95d9eeecd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.039548ms +Jan 14 04:42:29.606: INFO: Pod "pod-configmaps-9a029959-702b-45ec-b9c2-90d95d9eeecd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008309458s +Jan 14 04:42:31.605: INFO: Pod "pod-configmaps-9a029959-702b-45ec-b9c2-90d95d9eeecd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.007611961s +STEP: Saw pod success 01/14/23 04:42:31.605 +Jan 14 04:42:31.606: INFO: Pod "pod-configmaps-9a029959-702b-45ec-b9c2-90d95d9eeecd" satisfied condition "Succeeded or Failed" +Jan 14 04:42:31.609: INFO: Trying to get logs from node 10.0.1.106 pod pod-configmaps-9a029959-702b-45ec-b9c2-90d95d9eeecd container agnhost-container: +STEP: delete the pod 01/14/23 04:42:31.615 +Jan 14 04:42:31.626: INFO: Waiting for pod pod-configmaps-9a029959-702b-45ec-b9c2-90d95d9eeecd to disappear +Jan 14 04:42:31.629: INFO: Pod pod-configmaps-9a029959-702b-45ec-b9c2-90d95d9eeecd no longer exists +[AfterEach] [sig-storage] ConfigMap + test/e2e/framework/node/init/init.go:32 +Jan 14 04:42:31.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] ConfigMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] ConfigMap + tear down framework | framework.go:193 +STEP: Destroying namespace "configmap-6166" for this suite. 01/14/23 04:42:31.634 +------------------------------ +• [4.079 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:89 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] ConfigMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:42:27.562 + Jan 14 04:42:27.562: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename configmap 01/14/23 04:42:27.562 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:42:27.575 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:42:27.577 + [BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:89 + STEP: Creating configMap with name configmap-test-volume-map-c9a2abb8-69c5-47ea-8aeb-ab426d233715 01/14/23 04:42:27.579 + STEP: Creating a pod to test consume configMaps 01/14/23 04:42:27.584 + Jan 14 04:42:27.598: INFO: Waiting up to 5m0s for pod "pod-configmaps-9a029959-702b-45ec-b9c2-90d95d9eeecd" in namespace "configmap-6166" to be "Succeeded or Failed" + Jan 14 04:42:27.601: INFO: Pod "pod-configmaps-9a029959-702b-45ec-b9c2-90d95d9eeecd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.039548ms + Jan 14 04:42:29.606: INFO: Pod "pod-configmaps-9a029959-702b-45ec-b9c2-90d95d9eeecd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008309458s + Jan 14 04:42:31.605: INFO: Pod "pod-configmaps-9a029959-702b-45ec-b9c2-90d95d9eeecd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.007611961s + STEP: Saw pod success 01/14/23 04:42:31.605 + Jan 14 04:42:31.606: INFO: Pod "pod-configmaps-9a029959-702b-45ec-b9c2-90d95d9eeecd" satisfied condition "Succeeded or Failed" + Jan 14 04:42:31.609: INFO: Trying to get logs from node 10.0.1.106 pod pod-configmaps-9a029959-702b-45ec-b9c2-90d95d9eeecd container agnhost-container: + STEP: delete the pod 01/14/23 04:42:31.615 + Jan 14 04:42:31.626: INFO: Waiting for pod pod-configmaps-9a029959-702b-45ec-b9c2-90d95d9eeecd to disappear + Jan 14 04:42:31.629: INFO: Pod pod-configmaps-9a029959-702b-45ec-b9c2-90d95d9eeecd no longer exists + [AfterEach] [sig-storage] ConfigMap + test/e2e/framework/node/init/init.go:32 + Jan 14 04:42:31.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] ConfigMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] ConfigMap + tear down framework | framework.go:193 + STEP: Destroying namespace "configmap-6166" for this suite. 01/14/23 04:42:31.634 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:217 +[BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:42:31.641 +Jan 14 04:42:31.641: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename emptydir 01/14/23 04:42:31.642 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:42:31.659 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:42:31.661 +[BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 +[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:217 +STEP: Creating a pod to test emptydir 0777 on node default medium 01/14/23 04:42:31.663 +Jan 14 04:42:31.675: INFO: Waiting up to 5m0s for pod "pod-4e246d66-d712-423a-9e46-f4970f952e63" in namespace "emptydir-1695" to be "Succeeded or Failed" +Jan 14 04:42:31.679: INFO: Pod "pod-4e246d66-d712-423a-9e46-f4970f952e63": Phase="Pending", Reason="", readiness=false. Elapsed: 3.071152ms +Jan 14 04:42:33.683: INFO: Pod "pod-4e246d66-d712-423a-9e46-f4970f952e63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007635535s +Jan 14 04:42:35.683: INFO: Pod "pod-4e246d66-d712-423a-9e46-f4970f952e63": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.007609578s +STEP: Saw pod success 01/14/23 04:42:35.683 +Jan 14 04:42:35.683: INFO: Pod "pod-4e246d66-d712-423a-9e46-f4970f952e63" satisfied condition "Succeeded or Failed" +Jan 14 04:42:35.686: INFO: Trying to get logs from node 10.0.1.106 pod pod-4e246d66-d712-423a-9e46-f4970f952e63 container test-container: +STEP: delete the pod 01/14/23 04:42:35.692 +Jan 14 04:42:35.703: INFO: Waiting for pod pod-4e246d66-d712-423a-9e46-f4970f952e63 to disappear +Jan 14 04:42:35.706: INFO: Pod pod-4e246d66-d712-423a-9e46-f4970f952e63 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 +Jan 14 04:42:35.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 +STEP: Destroying namespace "emptydir-1695" for this suite. 01/14/23 04:42:35.711 +------------------------------ +• [4.075 seconds] +[sig-storage] EmptyDir volumes +test/e2e/common/storage/framework.go:23 + should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:217 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] EmptyDir volumes + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:42:31.641 + Jan 14 04:42:31.641: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename emptydir 01/14/23 04:42:31.642 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:42:31.659 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:42:31.661 + [BeforeEach] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:31 + [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/empty_dir.go:217 + STEP: Creating a pod to test emptydir 0777 on node default medium 01/14/23 04:42:31.663 + Jan 14 04:42:31.675: INFO: Waiting up to 5m0s for pod "pod-4e246d66-d712-423a-9e46-f4970f952e63" in namespace "emptydir-1695" to be "Succeeded or Failed" + Jan 14 04:42:31.679: INFO: Pod "pod-4e246d66-d712-423a-9e46-f4970f952e63": Phase="Pending", Reason="", readiness=false. Elapsed: 3.071152ms + Jan 14 04:42:33.683: INFO: Pod "pod-4e246d66-d712-423a-9e46-f4970f952e63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007635535s + Jan 14 04:42:35.683: INFO: Pod "pod-4e246d66-d712-423a-9e46-f4970f952e63": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.007609578s + STEP: Saw pod success 01/14/23 04:42:35.683 + Jan 14 04:42:35.683: INFO: Pod "pod-4e246d66-d712-423a-9e46-f4970f952e63" satisfied condition "Succeeded or Failed" + Jan 14 04:42:35.686: INFO: Trying to get logs from node 10.0.1.106 pod pod-4e246d66-d712-423a-9e46-f4970f952e63 container test-container: + STEP: delete the pod 01/14/23 04:42:35.692 + Jan 14 04:42:35.703: INFO: Waiting for pod pod-4e246d66-d712-423a-9e46-f4970f952e63 to disappear + Jan 14 04:42:35.706: INFO: Pod pod-4e246d66-d712-423a-9e46-f4970f952e63 no longer exists + [AfterEach] [sig-storage] EmptyDir volumes + test/e2e/framework/node/init/init.go:32 + Jan 14 04:42:35.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] EmptyDir volumes + tear down framework | framework.go:193 + STEP: Destroying namespace "emptydir-1695" for this suite. 01/14/23 04:42:35.711 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide pod UID as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:267 +[BeforeEach] [sig-node] Downward API + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:42:35.717 +Jan 14 04:42:35.717: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename downward-api 01/14/23 04:42:35.717 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:42:35.732 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:42:35.734 +[BeforeEach] [sig-node] Downward API + test/e2e/framework/metrics/init/init.go:31 +[It] should provide pod UID as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:267 +STEP: Creating a pod to test downward api env vars 01/14/23 04:42:35.736 +Jan 14 04:42:35.744: INFO: Waiting up to 5m0s for pod "downward-api-74d84d3b-3c37-482d-8c84-4d025fbf8872" in namespace "downward-api-822" to be "Succeeded or Failed" +Jan 14 04:42:35.748: INFO: Pod "downward-api-74d84d3b-3c37-482d-8c84-4d025fbf8872": Phase="Pending", Reason="", readiness=false. Elapsed: 3.327656ms +Jan 14 04:42:37.752: INFO: Pod "downward-api-74d84d3b-3c37-482d-8c84-4d025fbf8872": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007174344s +Jan 14 04:42:39.753: INFO: Pod "downward-api-74d84d3b-3c37-482d-8c84-4d025fbf8872": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008650464s +STEP: Saw pod success 01/14/23 04:42:39.753 +Jan 14 04:42:39.753: INFO: Pod "downward-api-74d84d3b-3c37-482d-8c84-4d025fbf8872" satisfied condition "Succeeded or Failed" +Jan 14 04:42:39.756: INFO: Trying to get logs from node 10.0.1.106 pod downward-api-74d84d3b-3c37-482d-8c84-4d025fbf8872 container dapi-container: +STEP: delete the pod 01/14/23 04:42:39.762 +Jan 14 04:42:39.776: INFO: Waiting for pod downward-api-74d84d3b-3c37-482d-8c84-4d025fbf8872 to disappear +Jan 14 04:42:39.779: INFO: Pod downward-api-74d84d3b-3c37-482d-8c84-4d025fbf8872 no longer exists +[AfterEach] [sig-node] Downward API + test/e2e/framework/node/init/init.go:32 +Jan 14 04:42:39.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Downward API + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Downward API + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Downward API + tear down framework | framework.go:193 +STEP: Destroying namespace "downward-api-822" for this suite. 01/14/23 04:42:39.784 +------------------------------ +• [4.074 seconds] +[sig-node] Downward API +test/e2e/common/node/framework.go:23 + should provide pod UID as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:267 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Downward API + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:42:35.717 + Jan 14 04:42:35.717: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename downward-api 01/14/23 04:42:35.717 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:42:35.732 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:42:35.734 + [BeforeEach] [sig-node] Downward API + test/e2e/framework/metrics/init/init.go:31 + [It] should provide pod UID as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:267 + STEP: Creating a pod to test downward api env vars 01/14/23 04:42:35.736 + Jan 14 04:42:35.744: INFO: Waiting up to 5m0s for pod "downward-api-74d84d3b-3c37-482d-8c84-4d025fbf8872" in namespace "downward-api-822" to be "Succeeded or Failed" + Jan 14 04:42:35.748: INFO: Pod "downward-api-74d84d3b-3c37-482d-8c84-4d025fbf8872": Phase="Pending", Reason="", readiness=false. Elapsed: 3.327656ms + Jan 14 04:42:37.752: INFO: Pod "downward-api-74d84d3b-3c37-482d-8c84-4d025fbf8872": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007174344s + Jan 14 04:42:39.753: INFO: Pod "downward-api-74d84d3b-3c37-482d-8c84-4d025fbf8872": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008650464s + STEP: Saw pod success 01/14/23 04:42:39.753 + Jan 14 04:42:39.753: INFO: Pod "downward-api-74d84d3b-3c37-482d-8c84-4d025fbf8872" satisfied condition "Succeeded or Failed" + Jan 14 04:42:39.756: INFO: Trying to get logs from node 10.0.1.106 pod downward-api-74d84d3b-3c37-482d-8c84-4d025fbf8872 container dapi-container: + STEP: delete the pod 01/14/23 04:42:39.762 + Jan 14 04:42:39.776: INFO: Waiting for pod downward-api-74d84d3b-3c37-482d-8c84-4d025fbf8872 to disappear + Jan 14 04:42:39.779: INFO: Pod downward-api-74d84d3b-3c37-482d-8c84-4d025fbf8872 no longer exists + [AfterEach] [sig-node] Downward API + test/e2e/framework/node/init/init.go:32 + Jan 14 04:42:39.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Downward API + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Downward API + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Downward API + tear down framework | framework.go:193 + STEP: Destroying namespace "downward-api-822" for this suite. 01/14/23 04:42:39.784 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should be able to update and delete ResourceQuota. [Conformance] + test/e2e/apimachinery/resource_quota.go:884 +[BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:42:39.791 +Jan 14 04:42:39.792: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename resourcequota 01/14/23 04:42:39.792 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:42:39.805 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:42:39.807 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 +[It] should be able to update and delete ResourceQuota. [Conformance] + test/e2e/apimachinery/resource_quota.go:884 +STEP: Creating a ResourceQuota 01/14/23 04:42:39.809 +STEP: Getting a ResourceQuota 01/14/23 04:42:39.814 +STEP: Updating a ResourceQuota 01/14/23 04:42:39.819 +STEP: Verifying a ResourceQuota was modified 01/14/23 04:42:39.823 +STEP: Deleting a ResourceQuota 01/14/23 04:42:39.826 +STEP: Verifying the deleted ResourceQuota 01/14/23 04:42:39.835 +[AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/node/init/init.go:32 +Jan 14 04:42:39.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + tear down framework | framework.go:193 +STEP: Destroying namespace "resourcequota-9366" for this suite. 01/14/23 04:42:39.842 +------------------------------ +• [0.056 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should be able to update and delete ResourceQuota. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:884 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:42:39.791 + Jan 14 04:42:39.792: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename resourcequota 01/14/23 04:42:39.792 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:42:39.805 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:42:39.807 + [BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 + [It] should be able to update and delete ResourceQuota. [Conformance] + test/e2e/apimachinery/resource_quota.go:884 + STEP: Creating a ResourceQuota 01/14/23 04:42:39.809 + STEP: Getting a ResourceQuota 01/14/23 04:42:39.814 + STEP: Updating a ResourceQuota 01/14/23 04:42:39.819 + STEP: Verifying a ResourceQuota was modified 01/14/23 04:42:39.823 + STEP: Deleting a ResourceQuota 01/14/23 04:42:39.826 + STEP: Verifying the deleted ResourceQuota 01/14/23 04:42:39.835 + [AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/node/init/init.go:32 + Jan 14 04:42:39.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + tear down framework | framework.go:193 + STEP: Destroying namespace "resourcequota-9366" for this suite. 01/14/23 04:42:39.842 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should mutate custom resource with pruning [Conformance] + test/e2e/apimachinery/webhook.go:341 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:42:39.849 +Jan 14 04:42:39.850: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename webhook 01/14/23 04:42:39.85 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:42:39.864 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:42:39.866 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 01/14/23 04:42:39.879 +STEP: Create role binding to let webhook read extension-apiserver-authentication 01/14/23 04:42:40.404 +STEP: Deploying the webhook pod 01/14/23 04:42:40.411 +STEP: Wait for the deployment to be ready 01/14/23 04:42:40.422 +Jan 14 04:42:40.430: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 01/14/23 04:42:42.44 +STEP: Verifying the service has paired with the endpoint 01/14/23 04:42:42.45 +Jan 14 04:42:43.450: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should mutate custom resource with pruning [Conformance] + test/e2e/apimachinery/webhook.go:341 +Jan 14 04:42:43.454: INFO: >>> kubeConfig: 
/tmp/kubeconfig-1841037317 +STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2265-crds.webhook.example.com via the AdmissionRegistration API 01/14/23 04:42:43.965 +STEP: Creating a custom resource that should be mutated by the webhook 01/14/23 04:42:43.98 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Jan 14 04:42:46.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "webhook-9144" for this suite. 01/14/23 04:42:46.599 +STEP: Destroying namespace "webhook-9144-markers" for this suite. 01/14/23 04:42:46.605 +------------------------------ +• [SLOW TEST] [6.764 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should mutate custom resource with pruning [Conformance] + test/e2e/apimachinery/webhook.go:341 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:42:39.849 + Jan 14 04:42:39.850: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename webhook 01/14/23 04:42:39.85 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:42:39.864 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:42:39.866 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 01/14/23 04:42:39.879 + STEP: Create role binding to let webhook read extension-apiserver-authentication 01/14/23 04:42:40.404 + STEP: Deploying the webhook pod 01/14/23 04:42:40.411 + STEP: Wait for the deployment to be ready 01/14/23 04:42:40.422 + Jan 14 04:42:40.430: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 01/14/23 04:42:42.44 + STEP: Verifying the service has paired with the endpoint 01/14/23 04:42:42.45 + Jan 14 04:42:43.450: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should mutate custom resource with pruning [Conformance] + test/e2e/apimachinery/webhook.go:341 + Jan 14 04:42:43.454: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2265-crds.webhook.example.com via the AdmissionRegistration API 01/14/23 04:42:43.965 + STEP: Creating a custom resource that should be mutated by the webhook 01/14/23 04:42:43.98 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Jan 14 04:42:46.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + 
test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "webhook-9144" for this suite. 01/14/23 04:42:46.599 + STEP: Destroying namespace "webhook-9144-markers" for this suite. 01/14/23 04:42:46.605 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] LimitRange + should list, patch and delete a LimitRange by collection [Conformance] + test/e2e/scheduling/limit_range.go:239 +[BeforeEach] [sig-scheduling] LimitRange + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:42:46.615 +Jan 14 04:42:46.615: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename limitrange 01/14/23 04:42:46.616 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:42:46.628 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:42:46.63 +[BeforeEach] [sig-scheduling] LimitRange + test/e2e/framework/metrics/init/init.go:31 +[It] should list, patch and delete a LimitRange by collection [Conformance] + test/e2e/scheduling/limit_range.go:239 +STEP: Creating LimitRange "e2e-limitrange-clt8d" in namespace "limitrange-762" 01/14/23 04:42:46.632 +STEP: Creating another limitRange in another namespace 01/14/23 04:42:46.639 +Jan 14 04:42:46.651: INFO: Namespace "e2e-limitrange-clt8d-3670" created +Jan 14 04:42:46.651: INFO: Creating LimitRange "e2e-limitrange-clt8d" in namespace "e2e-limitrange-clt8d-3670" +STEP: Listing all LimitRanges with label "e2e-test=e2e-limitrange-clt8d" 01/14/23 04:42:46.656 +Jan 14 04:42:46.659: INFO: Found 2 limitRanges +STEP: Patching LimitRange "e2e-limitrange-clt8d" in "limitrange-762" namespace 01/14/23 04:42:46.659 +Jan 14 04:42:46.665: INFO: LimitRange "e2e-limitrange-clt8d" has been patched +STEP: Delete LimitRange "e2e-limitrange-clt8d" by Collection with labelSelector: "e2e-limitrange-clt8d=patched" 01/14/23 04:42:46.665 +STEP: Confirm that the limitRange "e2e-limitrange-clt8d" has been deleted 01/14/23 04:42:46.672 +Jan 14 04:42:46.672: INFO: Requesting list of LimitRange to confirm quantity +Jan 14 04:42:46.675: INFO: Found 0 LimitRange with label "e2e-limitrange-clt8d=patched" +Jan 14 04:42:46.675: INFO: LimitRange "e2e-limitrange-clt8d" has been deleted. +STEP: Confirm that a single LimitRange still exists with label "e2e-test=e2e-limitrange-clt8d" 01/14/23 04:42:46.675 +Jan 14 04:42:46.678: INFO: Found 1 limitRange +[AfterEach] [sig-scheduling] LimitRange + test/e2e/framework/node/init/init.go:32 +Jan 14 04:42:46.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-scheduling] LimitRange + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-scheduling] LimitRange + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-scheduling] LimitRange + tear down framework | framework.go:193 +STEP: Destroying namespace "limitrange-762" for this suite. 01/14/23 04:42:46.685 +STEP: Destroying namespace "e2e-limitrange-clt8d-3670" for this suite. 
01/14/23 04:42:46.691 +------------------------------ +• [0.081 seconds] +[sig-scheduling] LimitRange +test/e2e/scheduling/framework.go:40 + should list, patch and delete a LimitRange by collection [Conformance] + test/e2e/scheduling/limit_range.go:239 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-scheduling] LimitRange + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:42:46.615 + Jan 14 04:42:46.615: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename limitrange 01/14/23 04:42:46.616 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:42:46.628 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:42:46.63 + [BeforeEach] [sig-scheduling] LimitRange + test/e2e/framework/metrics/init/init.go:31 + [It] should list, patch and delete a LimitRange by collection [Conformance] + test/e2e/scheduling/limit_range.go:239 + STEP: Creating LimitRange "e2e-limitrange-clt8d" in namespace "limitrange-762" 01/14/23 04:42:46.632 + STEP: Creating another limitRange in another namespace 01/14/23 04:42:46.639 + Jan 14 04:42:46.651: INFO: Namespace "e2e-limitrange-clt8d-3670" created + Jan 14 04:42:46.651: INFO: Creating LimitRange "e2e-limitrange-clt8d" in namespace "e2e-limitrange-clt8d-3670" + STEP: Listing all LimitRanges with label "e2e-test=e2e-limitrange-clt8d" 01/14/23 04:42:46.656 + Jan 14 04:42:46.659: INFO: Found 2 limitRanges + STEP: Patching LimitRange "e2e-limitrange-clt8d" in "limitrange-762" namespace 01/14/23 04:42:46.659 + Jan 14 04:42:46.665: INFO: LimitRange "e2e-limitrange-clt8d" has been patched + STEP: Delete LimitRange "e2e-limitrange-clt8d" by Collection with labelSelector: "e2e-limitrange-clt8d=patched" 01/14/23 04:42:46.665 + STEP: Confirm that the limitRange "e2e-limitrange-clt8d" has been deleted 01/14/23 04:42:46.672 + Jan 14 04:42:46.672: INFO: Requesting list of LimitRange to confirm quantity + Jan 14 04:42:46.675: INFO: Found 0 LimitRange with label "e2e-limitrange-clt8d=patched" + Jan 14 04:42:46.675: INFO: LimitRange "e2e-limitrange-clt8d" has been deleted. + STEP: Confirm that a single LimitRange still exists with label "e2e-test=e2e-limitrange-clt8d" 01/14/23 04:42:46.675 + Jan 14 04:42:46.678: INFO: Found 1 limitRange + [AfterEach] [sig-scheduling] LimitRange + test/e2e/framework/node/init/init.go:32 + Jan 14 04:42:46.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-scheduling] LimitRange + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-scheduling] LimitRange + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-scheduling] LimitRange + tear down framework | framework.go:193 + STEP: Destroying namespace "limitrange-762" for this suite. 01/14/23 04:42:46.685 + STEP: Destroying namespace "e2e-limitrange-clt8d-3670" for this suite. 
01/14/23 04:42:46.691 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should run and stop simple daemon [Conformance] + test/e2e/apps/daemon_set.go:166 +[BeforeEach] [sig-apps] Daemon set [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:42:46.697 +Jan 14 04:42:46.697: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename daemonsets 01/14/23 04:42:46.698 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:42:46.71 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:42:46.712 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:146 +[It] should run and stop simple daemon [Conformance] + test/e2e/apps/daemon_set.go:166 +STEP: Creating simple DaemonSet "daemon-set" 01/14/23 04:42:46.735 +STEP: Check that daemon pods launch on every node of the cluster. 01/14/23 04:42:46.742 +Jan 14 04:42:46.747: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:42:46.748: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:42:46.748: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:42:46.751: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jan 14 04:42:46.751: INFO: Node 10.0.1.106 is running 0 daemon pod, expected 1 +Jan 14 04:42:47.757: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:42:47.757: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:42:47.757: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:42:47.760: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Jan 14 04:42:47.760: INFO: Node 10.0.1.106 is running 0 daemon pod, expected 1 +Jan 14 04:42:48.756: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:42:48.756: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:42:48.756: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:42:48.760: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 +Jan 14 04:42:48.760: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set +STEP: Stop a daemon pod, check that the daemon pod is revived. 
01/14/23 04:42:48.763 +Jan 14 04:42:48.778: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:42:48.778: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:42:48.778: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:42:48.781: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Jan 14 04:42:48.781: INFO: Node 10.0.1.212 is running 0 daemon pod, expected 1 +Jan 14 04:42:49.787: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:42:49.787: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:42:49.787: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:42:49.791: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Jan 14 04:42:49.791: INFO: Node 10.0.1.212 is running 0 daemon pod, expected 1 +Jan 14 04:42:50.787: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:42:50.787: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:42:50.787: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:42:50.791: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Jan 14 04:42:50.791: INFO: Node 10.0.1.212 is running 0 daemon pod, expected 1 +Jan 14 04:42:51.787: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:42:51.787: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:42:51.787: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:42:51.790: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Jan 14 04:42:51.790: INFO: Node 10.0.1.212 is running 0 daemon pod, expected 1 +Jan 14 04:42:52.787: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:42:52.787: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:42:52.787: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 
04:42:52.790: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 +Jan 14 04:42:52.790: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:111 +STEP: Deleting DaemonSet "daemon-set" 01/14/23 04:42:52.793 +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5848, will wait for the garbage collector to delete the pods 01/14/23 04:42:52.793 +Jan 14 04:42:52.852: INFO: Deleting DaemonSet.extensions daemon-set took: 6.423677ms +Jan 14 04:42:52.953: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.541944ms +Jan 14 04:42:55.357: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jan 14 04:42:55.357: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +Jan 14 04:42:55.361: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"450583"},"items":null} + +Jan 14 04:42:55.364: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450583"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/node/init/init.go:32 +Jan 14 04:42:55.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "daemonsets-5848" for this suite. 01/14/23 04:42:55.383 +------------------------------ +• [SLOW TEST] [8.691 seconds] +[sig-apps] Daemon set [Serial] +test/e2e/apps/framework.go:23 + should run and stop simple daemon [Conformance] + test/e2e/apps/daemon_set.go:166 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Daemon set [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:42:46.697 + Jan 14 04:42:46.697: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename daemonsets 01/14/23 04:42:46.698 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:42:46.71 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:42:46.712 + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:146 + [It] should run and stop simple daemon [Conformance] + test/e2e/apps/daemon_set.go:166 + STEP: Creating simple DaemonSet "daemon-set" 01/14/23 04:42:46.735 + STEP: Check that daemon pods launch on every node of the cluster. 
01/14/23 04:42:46.742 + Jan 14 04:42:46.747: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:42:46.748: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:42:46.748: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:42:46.751: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jan 14 04:42:46.751: INFO: Node 10.0.1.106 is running 0 daemon pod, expected 1 + Jan 14 04:42:47.757: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:42:47.757: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:42:47.757: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:42:47.760: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 + Jan 14 04:42:47.760: INFO: Node 10.0.1.106 is running 0 daemon pod, expected 1 + Jan 14 04:42:48.756: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:42:48.756: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:42:48.756: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:42:48.760: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 + Jan 14 04:42:48.760: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set + STEP: Stop a daemon pod, check that the daemon pod is revived. 
01/14/23 04:42:48.763 + Jan 14 04:42:48.778: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:42:48.778: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:42:48.778: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:42:48.781: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Jan 14 04:42:48.781: INFO: Node 10.0.1.212 is running 0 daemon pod, expected 1 + Jan 14 04:42:49.787: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:42:49.787: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:42:49.787: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:42:49.791: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Jan 14 04:42:49.791: INFO: Node 10.0.1.212 is running 0 daemon pod, expected 1 + Jan 14 04:42:50.787: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:42:50.787: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:42:50.787: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:42:50.791: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Jan 14 04:42:50.791: INFO: Node 10.0.1.212 is running 0 daemon pod, expected 1 + Jan 14 04:42:51.787: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:42:51.787: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:42:51.787: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:42:51.790: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Jan 14 04:42:51.790: INFO: Node 10.0.1.212 is running 0 daemon pod, expected 1 + Jan 14 04:42:52.787: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:42:52.787: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:42:52.787: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this 
node + Jan 14 04:42:52.790: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 + Jan 14 04:42:52.790: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:111 + STEP: Deleting DaemonSet "daemon-set" 01/14/23 04:42:52.793 + STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5848, will wait for the garbage collector to delete the pods 01/14/23 04:42:52.793 + Jan 14 04:42:52.852: INFO: Deleting DaemonSet.extensions daemon-set took: 6.423677ms + Jan 14 04:42:52.953: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.541944ms + Jan 14 04:42:55.357: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jan 14 04:42:55.357: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set + Jan 14 04:42:55.361: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"450583"},"items":null} + + Jan 14 04:42:55.364: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450583"},"items":null} + + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/node/init/init.go:32 + Jan 14 04:42:55.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "daemonsets-5848" for this suite. 01/14/23 04:42:55.383 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with downward pod [Conformance] + test/e2e/storage/subpath.go:92 +[BeforeEach] [sig-storage] Subpath + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:42:55.389 +Jan 14 04:42:55.389: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename subpath 01/14/23 04:42:55.39 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:42:55.403 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:42:55.406 +[BeforeEach] [sig-storage] Subpath + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 +STEP: Setting up data 01/14/23 04:42:55.408 +[It] should support subpaths with downward pod [Conformance] + test/e2e/storage/subpath.go:92 +STEP: Creating pod pod-subpath-test-downwardapi-6snl 01/14/23 04:42:55.416 +STEP: Creating a pod to test atomic-volume-subpath 01/14/23 04:42:55.416 +Jan 14 04:42:55.423: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-6snl" in namespace "subpath-7881" to be "Succeeded or Failed" +Jan 14 04:42:55.426: INFO: Pod "pod-subpath-test-downwardapi-6snl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.812938ms +Jan 14 04:42:57.430: INFO: Pod "pod-subpath-test-downwardapi-6snl": Phase="Running", Reason="", readiness=true. Elapsed: 2.006891239s +Jan 14 04:42:59.431: INFO: Pod "pod-subpath-test-downwardapi-6snl": Phase="Running", Reason="", readiness=true. Elapsed: 4.007590809s +Jan 14 04:43:01.432: INFO: Pod "pod-subpath-test-downwardapi-6snl": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.008134466s +Jan 14 04:43:03.431: INFO: Pod "pod-subpath-test-downwardapi-6snl": Phase="Running", Reason="", readiness=true. Elapsed: 8.007938741s +Jan 14 04:43:05.431: INFO: Pod "pod-subpath-test-downwardapi-6snl": Phase="Running", Reason="", readiness=true. Elapsed: 10.007725823s +Jan 14 04:43:07.432: INFO: Pod "pod-subpath-test-downwardapi-6snl": Phase="Running", Reason="", readiness=true. Elapsed: 12.008666841s +Jan 14 04:43:09.432: INFO: Pod "pod-subpath-test-downwardapi-6snl": Phase="Running", Reason="", readiness=true. Elapsed: 14.008574356s +Jan 14 04:43:11.431: INFO: Pod "pod-subpath-test-downwardapi-6snl": Phase="Running", Reason="", readiness=true. Elapsed: 16.00805201s +Jan 14 04:43:13.431: INFO: Pod "pod-subpath-test-downwardapi-6snl": Phase="Running", Reason="", readiness=true. Elapsed: 18.00777372s +Jan 14 04:43:15.431: INFO: Pod "pod-subpath-test-downwardapi-6snl": Phase="Running", Reason="", readiness=true. Elapsed: 20.007421115s +Jan 14 04:43:17.431: INFO: Pod "pod-subpath-test-downwardapi-6snl": Phase="Running", Reason="", readiness=false. Elapsed: 22.007188383s +Jan 14 04:43:19.431: INFO: Pod "pod-subpath-test-downwardapi-6snl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.008022118s +STEP: Saw pod success 01/14/23 04:43:19.431 +Jan 14 04:43:19.432: INFO: Pod "pod-subpath-test-downwardapi-6snl" satisfied condition "Succeeded or Failed" +Jan 14 04:43:19.435: INFO: Trying to get logs from node 10.0.1.106 pod pod-subpath-test-downwardapi-6snl container test-container-subpath-downwardapi-6snl: +STEP: delete the pod 01/14/23 04:43:19.444 +Jan 14 04:43:19.457: INFO: Waiting for pod pod-subpath-test-downwardapi-6snl to disappear +Jan 14 04:43:19.460: INFO: Pod pod-subpath-test-downwardapi-6snl no longer exists +STEP: Deleting pod pod-subpath-test-downwardapi-6snl 01/14/23 04:43:19.46 +Jan 14 04:43:19.460: INFO: Deleting pod "pod-subpath-test-downwardapi-6snl" in namespace "subpath-7881" +[AfterEach] [sig-storage] Subpath + test/e2e/framework/node/init/init.go:32 +Jan 14 04:43:19.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Subpath + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Subpath + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Subpath + tear down framework | framework.go:193 +STEP: Destroying namespace "subpath-7881" for this suite. 
01/14/23 04:43:19.467 +------------------------------ +• [SLOW TEST] [24.083 seconds] +[sig-storage] Subpath +test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + test/e2e/storage/subpath.go:36 + should support subpaths with downward pod [Conformance] + test/e2e/storage/subpath.go:92 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Subpath + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:42:55.389 + Jan 14 04:42:55.389: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename subpath 01/14/23 04:42:55.39 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:42:55.403 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:42:55.406 + [BeforeEach] [sig-storage] Subpath + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 + STEP: Setting up data 01/14/23 04:42:55.408 + [It] should support subpaths with downward pod [Conformance] + test/e2e/storage/subpath.go:92 + STEP: Creating pod pod-subpath-test-downwardapi-6snl 01/14/23 04:42:55.416 + STEP: Creating a pod to test atomic-volume-subpath 01/14/23 04:42:55.416 + Jan 14 04:42:55.423: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-6snl" in namespace "subpath-7881" to be "Succeeded or Failed" + Jan 14 04:42:55.426: INFO: Pod "pod-subpath-test-downwardapi-6snl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.812938ms + Jan 14 04:42:57.430: INFO: Pod "pod-subpath-test-downwardapi-6snl": Phase="Running", Reason="", readiness=true. Elapsed: 2.006891239s + Jan 14 04:42:59.431: INFO: Pod "pod-subpath-test-downwardapi-6snl": Phase="Running", Reason="", readiness=true. Elapsed: 4.007590809s + Jan 14 04:43:01.432: INFO: Pod "pod-subpath-test-downwardapi-6snl": Phase="Running", Reason="", readiness=true. Elapsed: 6.008134466s + Jan 14 04:43:03.431: INFO: Pod "pod-subpath-test-downwardapi-6snl": Phase="Running", Reason="", readiness=true. Elapsed: 8.007938741s + Jan 14 04:43:05.431: INFO: Pod "pod-subpath-test-downwardapi-6snl": Phase="Running", Reason="", readiness=true. Elapsed: 10.007725823s + Jan 14 04:43:07.432: INFO: Pod "pod-subpath-test-downwardapi-6snl": Phase="Running", Reason="", readiness=true. Elapsed: 12.008666841s + Jan 14 04:43:09.432: INFO: Pod "pod-subpath-test-downwardapi-6snl": Phase="Running", Reason="", readiness=true. Elapsed: 14.008574356s + Jan 14 04:43:11.431: INFO: Pod "pod-subpath-test-downwardapi-6snl": Phase="Running", Reason="", readiness=true. Elapsed: 16.00805201s + Jan 14 04:43:13.431: INFO: Pod "pod-subpath-test-downwardapi-6snl": Phase="Running", Reason="", readiness=true. Elapsed: 18.00777372s + Jan 14 04:43:15.431: INFO: Pod "pod-subpath-test-downwardapi-6snl": Phase="Running", Reason="", readiness=true. Elapsed: 20.007421115s + Jan 14 04:43:17.431: INFO: Pod "pod-subpath-test-downwardapi-6snl": Phase="Running", Reason="", readiness=false. Elapsed: 22.007188383s + Jan 14 04:43:19.431: INFO: Pod "pod-subpath-test-downwardapi-6snl": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.008022118s + STEP: Saw pod success 01/14/23 04:43:19.431 + Jan 14 04:43:19.432: INFO: Pod "pod-subpath-test-downwardapi-6snl" satisfied condition "Succeeded or Failed" + Jan 14 04:43:19.435: INFO: Trying to get logs from node 10.0.1.106 pod pod-subpath-test-downwardapi-6snl container test-container-subpath-downwardapi-6snl: + STEP: delete the pod 01/14/23 04:43:19.444 + Jan 14 04:43:19.457: INFO: Waiting for pod pod-subpath-test-downwardapi-6snl to disappear + Jan 14 04:43:19.460: INFO: Pod pod-subpath-test-downwardapi-6snl no longer exists + STEP: Deleting pod pod-subpath-test-downwardapi-6snl 01/14/23 04:43:19.46 + Jan 14 04:43:19.460: INFO: Deleting pod "pod-subpath-test-downwardapi-6snl" in namespace "subpath-7881" + [AfterEach] [sig-storage] Subpath + test/e2e/framework/node/init/init.go:32 + Jan 14 04:43:19.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Subpath + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Subpath + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Subpath + tear down framework | framework.go:193 + STEP: Destroying namespace "subpath-7881" for this suite. 01/14/23 04:43:19.467 + << End Captured GinkgoWriter Output +------------------------------ +SS +------------------------------ +[sig-apps] ControllerRevision [Serial] + should manage the lifecycle of a ControllerRevision [Conformance] + test/e2e/apps/controller_revision.go:124 +[BeforeEach] [sig-apps] ControllerRevision [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:43:19.473 +Jan 14 04:43:19.473: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename controllerrevisions 01/14/23 04:43:19.474 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:43:19.485 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:43:19.487 +[BeforeEach] [sig-apps] ControllerRevision [Serial] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] ControllerRevision [Serial] + test/e2e/apps/controller_revision.go:93 +[It] should manage the lifecycle of a ControllerRevision [Conformance] + test/e2e/apps/controller_revision.go:124 +STEP: Creating DaemonSet "e2e-rl5lp-daemon-set" 01/14/23 04:43:19.507 +STEP: Check that daemon pods launch on every node of the cluster. 
01/14/23 04:43:19.513 +Jan 14 04:43:19.517: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:43:19.517: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:43:19.517: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:43:19.520: INFO: Number of nodes with available pods controlled by daemonset e2e-rl5lp-daemon-set: 0 +Jan 14 04:43:19.520: INFO: Node 10.0.1.106 is running 0 daemon pod, expected 1 +Jan 14 04:43:20.527: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:43:20.527: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:43:20.527: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:43:20.532: INFO: Number of nodes with available pods controlled by daemonset e2e-rl5lp-daemon-set: 1 +Jan 14 04:43:20.532: INFO: Node 10.0.1.106 is running 0 daemon pod, expected 1 +Jan 14 04:43:21.526: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:43:21.526: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:43:21.526: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:43:21.529: INFO: Number of nodes with available pods controlled by daemonset e2e-rl5lp-daemon-set: 3 +Jan 14 04:43:21.529: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset e2e-rl5lp-daemon-set +STEP: Confirm DaemonSet "e2e-rl5lp-daemon-set" successfully created with "daemonset-name=e2e-rl5lp-daemon-set" label 01/14/23 04:43:21.532 +STEP: Listing all ControllerRevisions with label "daemonset-name=e2e-rl5lp-daemon-set" 01/14/23 04:43:21.538 +Jan 14 04:43:21.542: INFO: Located ControllerRevision: "e2e-rl5lp-daemon-set-569b4f9794" +STEP: Patching ControllerRevision "e2e-rl5lp-daemon-set-569b4f9794" 01/14/23 04:43:21.544 +Jan 14 04:43:21.550: INFO: e2e-rl5lp-daemon-set-569b4f9794 has been patched +STEP: Create a new ControllerRevision 01/14/23 04:43:21.55 +Jan 14 04:43:21.556: INFO: Created ControllerRevision: e2e-rl5lp-daemon-set-6d95d89dd8 +STEP: Confirm that there are two ControllerRevisions 01/14/23 04:43:21.556 +Jan 14 04:43:21.556: INFO: Requesting list of ControllerRevisions to confirm quantity +Jan 14 04:43:21.559: INFO: Found 2 ControllerRevisions +STEP: Deleting ControllerRevision "e2e-rl5lp-daemon-set-569b4f9794" 01/14/23 04:43:21.559 +STEP: Confirm that there is only one ControllerRevision 01/14/23 04:43:21.565 +Jan 14 04:43:21.565: INFO: Requesting list of ControllerRevisions to confirm quantity +Jan 14 04:43:21.567: INFO: Found 1 ControllerRevisions +STEP: Updating ControllerRevision 
"e2e-rl5lp-daemon-set-6d95d89dd8" 01/14/23 04:43:21.57 +Jan 14 04:43:21.578: INFO: e2e-rl5lp-daemon-set-6d95d89dd8 has been updated +STEP: Generate another ControllerRevision by patching the Daemonset 01/14/23 04:43:21.578 +W0114 04:43:21.587692 25 warnings.go:70] unknown field "updateStrategy" +STEP: Confirm that there are two ControllerRevisions 01/14/23 04:43:21.587 +Jan 14 04:43:21.587: INFO: Requesting list of ControllerRevisions to confirm quantity +Jan 14 04:43:22.590: INFO: Requesting list of ControllerRevisions to confirm quantity +Jan 14 04:43:22.594: INFO: Found 2 ControllerRevisions +STEP: Removing a ControllerRevision via 'DeleteCollection' with labelSelector: "e2e-rl5lp-daemon-set-6d95d89dd8=updated" 01/14/23 04:43:22.594 +STEP: Confirm that there is only one ControllerRevision 01/14/23 04:43:22.602 +Jan 14 04:43:22.602: INFO: Requesting list of ControllerRevisions to confirm quantity +Jan 14 04:43:22.605: INFO: Found 1 ControllerRevisions +Jan 14 04:43:22.607: INFO: ControllerRevision "e2e-rl5lp-daemon-set-687b775f77" has revision 3 +[AfterEach] [sig-apps] ControllerRevision [Serial] + test/e2e/apps/controller_revision.go:58 +STEP: Deleting DaemonSet "e2e-rl5lp-daemon-set" 01/14/23 04:43:22.61 +STEP: deleting DaemonSet.extensions e2e-rl5lp-daemon-set in namespace controllerrevisions-1242, will wait for the garbage collector to delete the pods 01/14/23 04:43:22.61 +Jan 14 04:43:22.670: INFO: Deleting DaemonSet.extensions e2e-rl5lp-daemon-set took: 6.407236ms +Jan 14 04:43:22.770: INFO: Terminating DaemonSet.extensions e2e-rl5lp-daemon-set pods took: 100.399956ms +Jan 14 04:43:23.874: INFO: Number of nodes with available pods controlled by daemonset e2e-rl5lp-daemon-set: 0 +Jan 14 04:43:23.874: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset e2e-rl5lp-daemon-set +Jan 14 04:43:23.877: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"450833"},"items":null} + +Jan 14 04:43:23.879: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450833"},"items":null} + +[AfterEach] [sig-apps] ControllerRevision [Serial] + test/e2e/framework/node/init/init.go:32 +Jan 14 04:43:23.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] ControllerRevision [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] ControllerRevision [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] ControllerRevision [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "controllerrevisions-1242" for this suite. 
01/14/23 04:43:23.898 +------------------------------ +• [4.431 seconds] +[sig-apps] ControllerRevision [Serial] +test/e2e/apps/framework.go:23 + should manage the lifecycle of a ControllerRevision [Conformance] + test/e2e/apps/controller_revision.go:124 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] ControllerRevision [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:43:19.473 + Jan 14 04:43:19.473: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename controllerrevisions 01/14/23 04:43:19.474 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:43:19.485 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:43:19.487 + [BeforeEach] [sig-apps] ControllerRevision [Serial] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] ControllerRevision [Serial] + test/e2e/apps/controller_revision.go:93 + [It] should manage the lifecycle of a ControllerRevision [Conformance] + test/e2e/apps/controller_revision.go:124 + STEP: Creating DaemonSet "e2e-rl5lp-daemon-set" 01/14/23 04:43:19.507 + STEP: Check that daemon pods launch on every node of the cluster. 01/14/23 04:43:19.513 + Jan 14 04:43:19.517: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:43:19.517: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:43:19.517: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:43:19.520: INFO: Number of nodes with available pods controlled by daemonset e2e-rl5lp-daemon-set: 0 + Jan 14 04:43:19.520: INFO: Node 10.0.1.106 is running 0 daemon pod, expected 1 + Jan 14 04:43:20.527: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:43:20.527: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:43:20.527: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:43:20.532: INFO: Number of nodes with available pods controlled by daemonset e2e-rl5lp-daemon-set: 1 + Jan 14 04:43:20.532: INFO: Node 10.0.1.106 is running 0 daemon pod, expected 1 + Jan 14 04:43:21.526: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:43:21.526: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:43:21.526: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:43:21.529: INFO: Number of nodes with available pods controlled by daemonset e2e-rl5lp-daemon-set: 3 + Jan 14 04:43:21.529: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset 
e2e-rl5lp-daemon-set + STEP: Confirm DaemonSet "e2e-rl5lp-daemon-set" successfully created with "daemonset-name=e2e-rl5lp-daemon-set" label 01/14/23 04:43:21.532 + STEP: Listing all ControllerRevisions with label "daemonset-name=e2e-rl5lp-daemon-set" 01/14/23 04:43:21.538 + Jan 14 04:43:21.542: INFO: Located ControllerRevision: "e2e-rl5lp-daemon-set-569b4f9794" + STEP: Patching ControllerRevision "e2e-rl5lp-daemon-set-569b4f9794" 01/14/23 04:43:21.544 + Jan 14 04:43:21.550: INFO: e2e-rl5lp-daemon-set-569b4f9794 has been patched + STEP: Create a new ControllerRevision 01/14/23 04:43:21.55 + Jan 14 04:43:21.556: INFO: Created ControllerRevision: e2e-rl5lp-daemon-set-6d95d89dd8 + STEP: Confirm that there are two ControllerRevisions 01/14/23 04:43:21.556 + Jan 14 04:43:21.556: INFO: Requesting list of ControllerRevisions to confirm quantity + Jan 14 04:43:21.559: INFO: Found 2 ControllerRevisions + STEP: Deleting ControllerRevision "e2e-rl5lp-daemon-set-569b4f9794" 01/14/23 04:43:21.559 + STEP: Confirm that there is only one ControllerRevision 01/14/23 04:43:21.565 + Jan 14 04:43:21.565: INFO: Requesting list of ControllerRevisions to confirm quantity + Jan 14 04:43:21.567: INFO: Found 1 ControllerRevisions + STEP: Updating ControllerRevision "e2e-rl5lp-daemon-set-6d95d89dd8" 01/14/23 04:43:21.57 + Jan 14 04:43:21.578: INFO: e2e-rl5lp-daemon-set-6d95d89dd8 has been updated + STEP: Generate another ControllerRevision by patching the Daemonset 01/14/23 04:43:21.578 + W0114 04:43:21.587692 25 warnings.go:70] unknown field "updateStrategy" + STEP: Confirm that there are two ControllerRevisions 01/14/23 04:43:21.587 + Jan 14 04:43:21.587: INFO: Requesting list of ControllerRevisions to confirm quantity + Jan 14 04:43:22.590: INFO: Requesting list of ControllerRevisions to confirm quantity + Jan 14 04:43:22.594: INFO: Found 2 ControllerRevisions + STEP: Removing a ControllerRevision via 'DeleteCollection' with labelSelector: "e2e-rl5lp-daemon-set-6d95d89dd8=updated" 01/14/23 04:43:22.594 + STEP: Confirm that there is only one ControllerRevision 01/14/23 04:43:22.602 + Jan 14 04:43:22.602: INFO: Requesting list of ControllerRevisions to confirm quantity + Jan 14 04:43:22.605: INFO: Found 1 ControllerRevisions + Jan 14 04:43:22.607: INFO: ControllerRevision "e2e-rl5lp-daemon-set-687b775f77" has revision 3 + [AfterEach] [sig-apps] ControllerRevision [Serial] + test/e2e/apps/controller_revision.go:58 + STEP: Deleting DaemonSet "e2e-rl5lp-daemon-set" 01/14/23 04:43:22.61 + STEP: deleting DaemonSet.extensions e2e-rl5lp-daemon-set in namespace controllerrevisions-1242, will wait for the garbage collector to delete the pods 01/14/23 04:43:22.61 + Jan 14 04:43:22.670: INFO: Deleting DaemonSet.extensions e2e-rl5lp-daemon-set took: 6.407236ms + Jan 14 04:43:22.770: INFO: Terminating DaemonSet.extensions e2e-rl5lp-daemon-set pods took: 100.399956ms + Jan 14 04:43:23.874: INFO: Number of nodes with available pods controlled by daemonset e2e-rl5lp-daemon-set: 0 + Jan 14 04:43:23.874: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset e2e-rl5lp-daemon-set + Jan 14 04:43:23.877: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"450833"},"items":null} + + Jan 14 04:43:23.879: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450833"},"items":null} + + [AfterEach] [sig-apps] ControllerRevision [Serial] + test/e2e/framework/node/init/init.go:32 + Jan 14 04:43:23.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready + [DeferCleanup (Each)] [sig-apps] ControllerRevision [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] ControllerRevision [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] ControllerRevision [Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "controllerrevisions-1242" for this suite. 01/14/23 04:43:23.898 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/network/service.go:2213 +[BeforeEach] [sig-network] Services + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:43:23.904 +Jan 14 04:43:23.904: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename services 01/14/23 04:43:23.905 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:43:23.916 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:43:23.918 +[BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/network/service.go:2213 +STEP: creating service in namespace services-6172 01/14/23 04:43:23.92 +STEP: creating service affinity-clusterip-transition in namespace services-6172 01/14/23 04:43:23.92 +STEP: creating replication controller affinity-clusterip-transition in namespace services-6172 01/14/23 04:43:23.931 +I0114 04:43:23.945605 25 runners.go:193] Created replication controller with name: affinity-clusterip-transition, namespace: services-6172, replica count: 3 +I0114 04:43:26.997838 25 runners.go:193] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Jan 14 04:43:27.004: INFO: Creating new exec pod +Jan 14 04:43:27.011: INFO: Waiting up to 5m0s for pod "execpod-affinitygtfvd" in namespace "services-6172" to be "running" +Jan 14 04:43:27.014: INFO: Pod "execpod-affinitygtfvd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.896007ms +Jan 14 04:43:29.017: INFO: Pod "execpod-affinitygtfvd": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.006412445s +Jan 14 04:43:29.018: INFO: Pod "execpod-affinitygtfvd" satisfied condition "running" +Jan 14 04:43:30.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-6172 exec execpod-affinitygtfvd -- /bin/sh -x -c nc -v -z -w 2 affinity-clusterip-transition 80' +Jan 14 04:43:30.135: INFO: stderr: "+ nc -v -z -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n" +Jan 14 04:43:30.135: INFO: stdout: "" +Jan 14 04:43:30.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-6172 exec execpod-affinitygtfvd -- /bin/sh -x -c nc -v -z -w 2 10.55.252.137 80' +Jan 14 04:43:30.243: INFO: stderr: "+ nc -v -z -w 2 10.55.252.137 80\nConnection to 10.55.252.137 80 port [tcp/http] succeeded!\n" +Jan 14 04:43:30.243: INFO: stdout: "" +Jan 14 04:43:30.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-6172 exec execpod-affinitygtfvd -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.55.252.137:80/ ; done' +Jan 14 04:43:30.418: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n" +Jan 14 04:43:30.418: INFO: stdout: "\naffinity-clusterip-transition-lgw4r\naffinity-clusterip-transition-lgw4r\naffinity-clusterip-transition-lgw4r\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-dgcvb\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-lgw4r\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-dgcvb\naffinity-clusterip-transition-lgw4r\naffinity-clusterip-transition-lgw4r\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-lgw4r" +Jan 14 04:43:30.418: INFO: Received response from host: affinity-clusterip-transition-lgw4r +Jan 14 04:43:30.418: INFO: Received response from host: affinity-clusterip-transition-lgw4r +Jan 14 04:43:30.418: INFO: Received response from host: affinity-clusterip-transition-lgw4r +Jan 14 04:43:30.418: INFO: Received response from host: affinity-clusterip-transition-p7cvv +Jan 14 04:43:30.418: INFO: Received response from host: affinity-clusterip-transition-p7cvv +Jan 14 04:43:30.418: INFO: Received response from host: affinity-clusterip-transition-dgcvb +Jan 14 04:43:30.418: INFO: Received response from host: 
affinity-clusterip-transition-p7cvv +Jan 14 04:43:30.418: INFO: Received response from host: affinity-clusterip-transition-lgw4r +Jan 14 04:43:30.418: INFO: Received response from host: affinity-clusterip-transition-p7cvv +Jan 14 04:43:30.418: INFO: Received response from host: affinity-clusterip-transition-p7cvv +Jan 14 04:43:30.418: INFO: Received response from host: affinity-clusterip-transition-dgcvb +Jan 14 04:43:30.418: INFO: Received response from host: affinity-clusterip-transition-lgw4r +Jan 14 04:43:30.418: INFO: Received response from host: affinity-clusterip-transition-lgw4r +Jan 14 04:43:30.418: INFO: Received response from host: affinity-clusterip-transition-p7cvv +Jan 14 04:43:30.418: INFO: Received response from host: affinity-clusterip-transition-p7cvv +Jan 14 04:43:30.418: INFO: Received response from host: affinity-clusterip-transition-lgw4r +Jan 14 04:43:30.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-6172 exec execpod-affinitygtfvd -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.55.252.137:80/ ; done' +Jan 14 04:43:30.590: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n" +Jan 14 04:43:30.590: INFO: stdout: "\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-p7cvv" +Jan 14 04:43:30.590: INFO: Received response from host: affinity-clusterip-transition-p7cvv +Jan 14 04:43:30.590: INFO: Received response from host: affinity-clusterip-transition-p7cvv +Jan 14 04:43:30.590: INFO: Received response from host: affinity-clusterip-transition-p7cvv +Jan 14 04:43:30.590: INFO: Received response from host: affinity-clusterip-transition-p7cvv +Jan 14 04:43:30.590: INFO: Received response from host: affinity-clusterip-transition-p7cvv +Jan 14 04:43:30.590: INFO: Received response from host: affinity-clusterip-transition-p7cvv +Jan 14 04:43:30.590: INFO: Received response from host: affinity-clusterip-transition-p7cvv +Jan 14 
04:43:30.590: INFO: Received response from host: affinity-clusterip-transition-p7cvv +Jan 14 04:43:30.590: INFO: Received response from host: affinity-clusterip-transition-p7cvv +Jan 14 04:43:30.590: INFO: Received response from host: affinity-clusterip-transition-p7cvv +Jan 14 04:43:30.590: INFO: Received response from host: affinity-clusterip-transition-p7cvv +Jan 14 04:43:30.590: INFO: Received response from host: affinity-clusterip-transition-p7cvv +Jan 14 04:43:30.590: INFO: Received response from host: affinity-clusterip-transition-p7cvv +Jan 14 04:43:30.590: INFO: Received response from host: affinity-clusterip-transition-p7cvv +Jan 14 04:43:30.590: INFO: Received response from host: affinity-clusterip-transition-p7cvv +Jan 14 04:43:30.590: INFO: Received response from host: affinity-clusterip-transition-p7cvv +Jan 14 04:43:30.590: INFO: Cleaning up the exec pod +STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-6172, will wait for the garbage collector to delete the pods 01/14/23 04:43:30.602 +Jan 14 04:43:30.663: INFO: Deleting ReplicationController affinity-clusterip-transition took: 7.027365ms +Jan 14 04:43:30.764: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 101.136598ms +[AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 +Jan 14 04:43:32.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 +STEP: Destroying namespace "services-6172" for this suite. 01/14/23 04:43:32.687 +------------------------------ +• [SLOW TEST] [8.788 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/network/service.go:2213 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:43:23.904 + Jan 14 04:43:23.904: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename services 01/14/23 04:43:23.905 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:43:23.916 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:43:23.918 + [BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] + test/e2e/network/service.go:2213 + STEP: creating service in namespace services-6172 01/14/23 04:43:23.92 + STEP: creating service affinity-clusterip-transition in namespace services-6172 01/14/23 04:43:23.92 + STEP: creating replication controller affinity-clusterip-transition in namespace services-6172 01/14/23 04:43:23.931 + I0114 04:43:23.945605 25 runners.go:193] Created replication controller with name: affinity-clusterip-transition, namespace: services-6172, replica count: 3 + I0114 04:43:26.997838 25 runners.go:193] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + Jan 14 04:43:27.004: INFO: Creating new exec 
pod + Jan 14 04:43:27.011: INFO: Waiting up to 5m0s for pod "execpod-affinitygtfvd" in namespace "services-6172" to be "running" + Jan 14 04:43:27.014: INFO: Pod "execpod-affinitygtfvd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.896007ms + Jan 14 04:43:29.017: INFO: Pod "execpod-affinitygtfvd": Phase="Running", Reason="", readiness=true. Elapsed: 2.006412445s + Jan 14 04:43:29.018: INFO: Pod "execpod-affinitygtfvd" satisfied condition "running" + Jan 14 04:43:30.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-6172 exec execpod-affinitygtfvd -- /bin/sh -x -c nc -v -z -w 2 affinity-clusterip-transition 80' + Jan 14 04:43:30.135: INFO: stderr: "+ nc -v -z -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n" + Jan 14 04:43:30.135: INFO: stdout: "" + Jan 14 04:43:30.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-6172 exec execpod-affinitygtfvd -- /bin/sh -x -c nc -v -z -w 2 10.55.252.137 80' + Jan 14 04:43:30.243: INFO: stderr: "+ nc -v -z -w 2 10.55.252.137 80\nConnection to 10.55.252.137 80 port [tcp/http] succeeded!\n" + Jan 14 04:43:30.243: INFO: stdout: "" + Jan 14 04:43:30.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-6172 exec execpod-affinitygtfvd -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.55.252.137:80/ ; done' + Jan 14 04:43:30.418: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n" + Jan 14 04:43:30.418: INFO: stdout: "\naffinity-clusterip-transition-lgw4r\naffinity-clusterip-transition-lgw4r\naffinity-clusterip-transition-lgw4r\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-dgcvb\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-lgw4r\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-dgcvb\naffinity-clusterip-transition-lgw4r\naffinity-clusterip-transition-lgw4r\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-lgw4r" + Jan 14 04:43:30.418: INFO: Received response from host: affinity-clusterip-transition-lgw4r + Jan 14 04:43:30.418: INFO: Received response from host: affinity-clusterip-transition-lgw4r + Jan 14 04:43:30.418: INFO: Received response from host: 
affinity-clusterip-transition-lgw4r + Jan 14 04:43:30.418: INFO: Received response from host: affinity-clusterip-transition-p7cvv + Jan 14 04:43:30.418: INFO: Received response from host: affinity-clusterip-transition-p7cvv + Jan 14 04:43:30.418: INFO: Received response from host: affinity-clusterip-transition-dgcvb + Jan 14 04:43:30.418: INFO: Received response from host: affinity-clusterip-transition-p7cvv + Jan 14 04:43:30.418: INFO: Received response from host: affinity-clusterip-transition-lgw4r + Jan 14 04:43:30.418: INFO: Received response from host: affinity-clusterip-transition-p7cvv + Jan 14 04:43:30.418: INFO: Received response from host: affinity-clusterip-transition-p7cvv + Jan 14 04:43:30.418: INFO: Received response from host: affinity-clusterip-transition-dgcvb + Jan 14 04:43:30.418: INFO: Received response from host: affinity-clusterip-transition-lgw4r + Jan 14 04:43:30.418: INFO: Received response from host: affinity-clusterip-transition-lgw4r + Jan 14 04:43:30.418: INFO: Received response from host: affinity-clusterip-transition-p7cvv + Jan 14 04:43:30.418: INFO: Received response from host: affinity-clusterip-transition-p7cvv + Jan 14 04:43:30.418: INFO: Received response from host: affinity-clusterip-transition-lgw4r + Jan 14 04:43:30.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-6172 exec execpod-affinitygtfvd -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.55.252.137:80/ ; done' + Jan 14 04:43:30.590: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.55.252.137:80/\n" + Jan 14 04:43:30.590: INFO: stdout: "\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-p7cvv\naffinity-clusterip-transition-p7cvv" + Jan 14 04:43:30.590: INFO: Received response from host: affinity-clusterip-transition-p7cvv + Jan 14 04:43:30.590: INFO: Received response from host: affinity-clusterip-transition-p7cvv + Jan 14 04:43:30.590: INFO: Received response from host: 
affinity-clusterip-transition-p7cvv + Jan 14 04:43:30.590: INFO: Received response from host: affinity-clusterip-transition-p7cvv + Jan 14 04:43:30.590: INFO: Received response from host: affinity-clusterip-transition-p7cvv + Jan 14 04:43:30.590: INFO: Received response from host: affinity-clusterip-transition-p7cvv + Jan 14 04:43:30.590: INFO: Received response from host: affinity-clusterip-transition-p7cvv + Jan 14 04:43:30.590: INFO: Received response from host: affinity-clusterip-transition-p7cvv + Jan 14 04:43:30.590: INFO: Received response from host: affinity-clusterip-transition-p7cvv + Jan 14 04:43:30.590: INFO: Received response from host: affinity-clusterip-transition-p7cvv + Jan 14 04:43:30.590: INFO: Received response from host: affinity-clusterip-transition-p7cvv + Jan 14 04:43:30.590: INFO: Received response from host: affinity-clusterip-transition-p7cvv + Jan 14 04:43:30.590: INFO: Received response from host: affinity-clusterip-transition-p7cvv + Jan 14 04:43:30.590: INFO: Received response from host: affinity-clusterip-transition-p7cvv + Jan 14 04:43:30.590: INFO: Received response from host: affinity-clusterip-transition-p7cvv + Jan 14 04:43:30.590: INFO: Received response from host: affinity-clusterip-transition-p7cvv + Jan 14 04:43:30.590: INFO: Cleaning up the exec pod + STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-6172, will wait for the garbage collector to delete the pods 01/14/23 04:43:30.602 + Jan 14 04:43:30.663: INFO: Deleting ReplicationController affinity-clusterip-transition took: 7.027365ms + Jan 14 04:43:30.764: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 101.136598ms + [AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 + Jan 14 04:43:32.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 + STEP: Destroying namespace "services-6172" for this suite. 
01/14/23 04:43:32.687 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should guarantee kube-root-ca.crt exist in any namespace [Conformance] + test/e2e/auth/service_accounts.go:742 +[BeforeEach] [sig-auth] ServiceAccounts + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:43:32.693 +Jan 14 04:43:32.693: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename svcaccounts 01/14/23 04:43:32.693 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:43:32.708 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:43:32.71 +[BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:31 +[It] should guarantee kube-root-ca.crt exist in any namespace [Conformance] + test/e2e/auth/service_accounts.go:742 +Jan 14 04:43:32.715: INFO: Got root ca configmap in namespace "svcaccounts-3702" +Jan 14 04:43:32.722: INFO: Deleted root ca configmap in namespace "svcaccounts-3702" +STEP: waiting for a new root ca configmap created 01/14/23 04:43:33.223 +Jan 14 04:43:33.227: INFO: Recreated root ca configmap in namespace "svcaccounts-3702" +Jan 14 04:43:33.232: INFO: Updated root ca configmap in namespace "svcaccounts-3702" +STEP: waiting for the root ca configmap reconciled 01/14/23 04:43:33.733 +Jan 14 04:43:33.736: INFO: Reconciled root ca configmap in namespace "svcaccounts-3702" +[AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/node/init/init.go:32 +Jan 14 04:43:33.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-auth] ServiceAccounts + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-auth] ServiceAccounts + tear down framework | framework.go:193 +STEP: Destroying namespace "svcaccounts-3702" for this suite. 
01/14/23 04:43:33.741 +------------------------------ +• [1.056 seconds] +[sig-auth] ServiceAccounts +test/e2e/auth/framework.go:23 + should guarantee kube-root-ca.crt exist in any namespace [Conformance] + test/e2e/auth/service_accounts.go:742 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-auth] ServiceAccounts + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:43:32.693 + Jan 14 04:43:32.693: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename svcaccounts 01/14/23 04:43:32.693 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:43:32.708 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:43:32.71 + [BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:31 + [It] should guarantee kube-root-ca.crt exist in any namespace [Conformance] + test/e2e/auth/service_accounts.go:742 + Jan 14 04:43:32.715: INFO: Got root ca configmap in namespace "svcaccounts-3702" + Jan 14 04:43:32.722: INFO: Deleted root ca configmap in namespace "svcaccounts-3702" + STEP: waiting for a new root ca configmap created 01/14/23 04:43:33.223 + Jan 14 04:43:33.227: INFO: Recreated root ca configmap in namespace "svcaccounts-3702" + Jan 14 04:43:33.232: INFO: Updated root ca configmap in namespace "svcaccounts-3702" + STEP: waiting for the root ca configmap reconciled 01/14/23 04:43:33.733 + Jan 14 04:43:33.736: INFO: Reconciled root ca configmap in namespace "svcaccounts-3702" + [AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/node/init/init.go:32 + Jan 14 04:43:33.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-auth] ServiceAccounts + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-auth] ServiceAccounts + tear down framework | framework.go:193 + STEP: Destroying namespace "svcaccounts-3702" for this suite. 
01/14/23 04:43:33.741 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-node] Sysctls [LinuxOnly] [NodeConformance] + should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] + test/e2e/common/node/sysctl.go:123 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/common/node/sysctl.go:37 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:43:33.749 +Jan 14 04:43:33.749: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename sysctl 01/14/23 04:43:33.75 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:43:33.763 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:43:33.765 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/common/node/sysctl.go:67 +[It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] + test/e2e/common/node/sysctl.go:123 +STEP: Creating a pod with one valid and two invalid sysctls 01/14/23 04:43:33.767 +[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/framework/node/init/init.go:32 +Jan 14 04:43:33.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + tear down framework | framework.go:193 +STEP: Destroying namespace "sysctl-4047" for this suite. 
01/14/23 04:43:33.776 +------------------------------ +• [0.034 seconds] +[sig-node] Sysctls [LinuxOnly] [NodeConformance] +test/e2e/common/node/framework.go:23 + should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] + test/e2e/common/node/sysctl.go:123 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/common/node/sysctl.go:37 + [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:43:33.749 + Jan 14 04:43:33.749: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename sysctl 01/14/23 04:43:33.75 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:43:33.763 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:43:33.765 + [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/common/node/sysctl.go:67 + [It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] + test/e2e/common/node/sysctl.go:123 + STEP: Creating a pod with one valid and two invalid sysctls 01/14/23 04:43:33.767 + [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/framework/node/init/init.go:32 + Jan 14 04:43:33.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Sysctls [LinuxOnly] [NodeConformance] + tear down framework | framework.go:193 + STEP: Destroying namespace "sysctl-4047" for this suite. 01/14/23 04:43:33.776 + << End Captured GinkgoWriter Output +------------------------------ +S +------------------------------ +[sig-node] Security Context When creating a pod with privileged + should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:528 +[BeforeEach] [sig-node] Security Context + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:43:33.783 +Jan 14 04:43:33.784: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename security-context-test 01/14/23 04:43:33.784 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:43:33.795 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:43:33.797 +[BeforeEach] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Security Context + test/e2e/common/node/security_context.go:50 +[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:528 +Jan 14 04:43:33.808: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-17f9906d-0abd-468c-8632-71277fad957d" in namespace "security-context-test-3822" to be "Succeeded or Failed" +Jan 14 04:43:33.811: INFO: Pod "busybox-privileged-false-17f9906d-0abd-468c-8632-71277fad957d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.622271ms +Jan 14 04:43:35.815: INFO: Pod "busybox-privileged-false-17f9906d-0abd-468c-8632-71277fad957d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006565851s +Jan 14 04:43:37.817: INFO: Pod "busybox-privileged-false-17f9906d-0abd-468c-8632-71277fad957d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009115813s +Jan 14 04:43:37.817: INFO: Pod "busybox-privileged-false-17f9906d-0abd-468c-8632-71277fad957d" satisfied condition "Succeeded or Failed" +Jan 14 04:43:37.823: INFO: Got logs for pod "busybox-privileged-false-17f9906d-0abd-468c-8632-71277fad957d": "ip: RTNETLINK answers: Operation not permitted\n" +[AfterEach] [sig-node] Security Context + test/e2e/framework/node/init/init.go:32 +Jan 14 04:43:37.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Security Context + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Security Context + tear down framework | framework.go:193 +STEP: Destroying namespace "security-context-test-3822" for this suite. 01/14/23 04:43:37.828 +------------------------------ +• [4.051 seconds] +[sig-node] Security Context +test/e2e/common/node/framework.go:23 + When creating a pod with privileged + test/e2e/common/node/security_context.go:491 + should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:528 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Security Context + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:43:33.783 + Jan 14 04:43:33.784: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename security-context-test 01/14/23 04:43:33.784 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:43:33.795 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:43:33.797 + [BeforeEach] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Security Context + test/e2e/common/node/security_context.go:50 + [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/node/security_context.go:528 + Jan 14 04:43:33.808: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-17f9906d-0abd-468c-8632-71277fad957d" in namespace "security-context-test-3822" to be "Succeeded or Failed" + Jan 14 04:43:33.811: INFO: Pod "busybox-privileged-false-17f9906d-0abd-468c-8632-71277fad957d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.622271ms + Jan 14 04:43:35.815: INFO: Pod "busybox-privileged-false-17f9906d-0abd-468c-8632-71277fad957d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006565851s + Jan 14 04:43:37.817: INFO: Pod "busybox-privileged-false-17f9906d-0abd-468c-8632-71277fad957d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009115813s + Jan 14 04:43:37.817: INFO: Pod "busybox-privileged-false-17f9906d-0abd-468c-8632-71277fad957d" satisfied condition "Succeeded or Failed" + Jan 14 04:43:37.823: INFO: Got logs for pod "busybox-privileged-false-17f9906d-0abd-468c-8632-71277fad957d": "ip: RTNETLINK answers: Operation not permitted\n" + [AfterEach] [sig-node] Security Context + test/e2e/framework/node/init/init.go:32 + Jan 14 04:43:37.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Security Context + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Security Context + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Security Context + tear down framework | framework.go:193 + STEP: Destroying namespace "security-context-test-3822" for this suite. 01/14/23 04:43:37.828 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Job + should apply changes to a job status [Conformance] + test/e2e/apps/job.go:636 +[BeforeEach] [sig-apps] Job + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:43:37.838 +Jan 14 04:43:37.838: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename job 01/14/23 04:43:37.839 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:43:37.85 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:43:37.852 +[BeforeEach] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:31 +[It] should apply changes to a job status [Conformance] + test/e2e/apps/job.go:636 +STEP: Creating a job 01/14/23 04:43:37.854 +STEP: Ensure pods equal to parallelism count is attached to the job 01/14/23 04:43:37.859 +STEP: patching /status 01/14/23 04:43:39.863 +STEP: updating /status 01/14/23 04:43:39.872 +STEP: get /status 01/14/23 04:43:39.879 +[AfterEach] [sig-apps] Job + test/e2e/framework/node/init/init.go:32 +Jan 14 04:43:39.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] Job + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] Job + tear down framework | framework.go:193 +STEP: Destroying namespace "job-5730" for this suite. 
01/14/23 04:43:39.887 +------------------------------ +• [2.054 seconds] +[sig-apps] Job +test/e2e/apps/framework.go:23 + should apply changes to a job status [Conformance] + test/e2e/apps/job.go:636 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Job + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:43:37.838 + Jan 14 04:43:37.838: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename job 01/14/23 04:43:37.839 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:43:37.85 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:43:37.852 + [BeforeEach] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:31 + [It] should apply changes to a job status [Conformance] + test/e2e/apps/job.go:636 + STEP: Creating a job 01/14/23 04:43:37.854 + STEP: Ensure pods equal to parallelism count is attached to the job 01/14/23 04:43:37.859 + STEP: patching /status 01/14/23 04:43:39.863 + STEP: updating /status 01/14/23 04:43:39.872 + STEP: get /status 01/14/23 04:43:39.879 + [AfterEach] [sig-apps] Job + test/e2e/framework/node/init/init.go:32 + Jan 14 04:43:39.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] Job + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] Job + tear down framework | framework.go:193 + STEP: Destroying namespace "job-5730" for this suite. 01/14/23 04:43:39.887 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] CSIStorageCapacity + should support CSIStorageCapacities API operations [Conformance] + test/e2e/storage/csistoragecapacity.go:49 +[BeforeEach] [sig-storage] CSIStorageCapacity + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:43:39.894 +Jan 14 04:43:39.894: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename csistoragecapacity 01/14/23 04:43:39.895 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:43:39.907 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:43:39.909 +[BeforeEach] [sig-storage] CSIStorageCapacity + test/e2e/framework/metrics/init/init.go:31 +[It] should support CSIStorageCapacities API operations [Conformance] + test/e2e/storage/csistoragecapacity.go:49 +STEP: getting /apis 01/14/23 04:43:39.911 +STEP: getting /apis/storage.k8s.io 01/14/23 04:43:39.913 +STEP: getting /apis/storage.k8s.io/v1 01/14/23 04:43:39.914 +STEP: creating 01/14/23 04:43:39.915 +STEP: watching 01/14/23 04:43:39.927 +Jan 14 04:43:39.928: INFO: starting watch +STEP: getting 01/14/23 04:43:39.934 +STEP: listing in namespace 01/14/23 04:43:39.936 +STEP: listing across namespaces 01/14/23 04:43:39.94 +STEP: patching 01/14/23 04:43:39.943 +STEP: updating 01/14/23 04:43:39.948 +Jan 14 04:43:39.952: INFO: waiting for watch events with expected annotations in namespace +Jan 14 04:43:39.952: INFO: waiting for watch events with expected annotations across namespace +STEP: deleting 01/14/23 04:43:39.953 +STEP: deleting a collection 01/14/23 04:43:39.963 +[AfterEach] [sig-storage] CSIStorageCapacity + test/e2e/framework/node/init/init.go:32 +Jan 14 04:43:39.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
+[DeferCleanup (Each)] [sig-storage] CSIStorageCapacity + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] CSIStorageCapacity + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] CSIStorageCapacity + tear down framework | framework.go:193 +STEP: Destroying namespace "csistoragecapacity-2568" for this suite. 01/14/23 04:43:39.98 +------------------------------ +• [0.091 seconds] +[sig-storage] CSIStorageCapacity +test/e2e/storage/utils/framework.go:23 + should support CSIStorageCapacities API operations [Conformance] + test/e2e/storage/csistoragecapacity.go:49 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] CSIStorageCapacity + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:43:39.894 + Jan 14 04:43:39.894: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename csistoragecapacity 01/14/23 04:43:39.895 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:43:39.907 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:43:39.909 + [BeforeEach] [sig-storage] CSIStorageCapacity + test/e2e/framework/metrics/init/init.go:31 + [It] should support CSIStorageCapacities API operations [Conformance] + test/e2e/storage/csistoragecapacity.go:49 + STEP: getting /apis 01/14/23 04:43:39.911 + STEP: getting /apis/storage.k8s.io 01/14/23 04:43:39.913 + STEP: getting /apis/storage.k8s.io/v1 01/14/23 04:43:39.914 + STEP: creating 01/14/23 04:43:39.915 + STEP: watching 01/14/23 04:43:39.927 + Jan 14 04:43:39.928: INFO: starting watch + STEP: getting 01/14/23 04:43:39.934 + STEP: listing in namespace 01/14/23 04:43:39.936 + STEP: listing across namespaces 01/14/23 04:43:39.94 + STEP: patching 01/14/23 04:43:39.943 + STEP: updating 01/14/23 04:43:39.948 + Jan 14 04:43:39.952: INFO: waiting for watch events with expected annotations in namespace + Jan 14 04:43:39.952: INFO: waiting for watch events with expected annotations across namespace + STEP: deleting 01/14/23 04:43:39.953 + STEP: deleting a collection 01/14/23 04:43:39.963 + [AfterEach] [sig-storage] CSIStorageCapacity + test/e2e/framework/node/init/init.go:32 + Jan 14 04:43:39.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] CSIStorageCapacity + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] CSIStorageCapacity + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] CSIStorageCapacity + tear down framework | framework.go:193 + STEP: Destroying namespace "csistoragecapacity-2568" for this suite. 
01/14/23 04:43:39.98 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + test/e2e/apps/statefulset.go:697 +[BeforeEach] [sig-apps] StatefulSet + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:43:39.986 +Jan 14 04:43:39.986: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename statefulset 01/14/23 04:43:39.987 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:43:39.999 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:43:40.001 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:98 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:113 +STEP: Creating service test in namespace statefulset-6874 01/14/23 04:43:40.003 +[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + test/e2e/apps/statefulset.go:697 +STEP: Creating stateful set ss in namespace statefulset-6874 01/14/23 04:43:40.007 +STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6874 01/14/23 04:43:40.014 +Jan 14 04:43:40.017: INFO: Found 0 stateful pods, waiting for 1 +Jan 14 04:43:50.022: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod 01/14/23 04:43:50.022 +Jan 14 04:43:50.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=statefulset-6874 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Jan 14 04:43:50.139: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Jan 14 04:43:50.139: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Jan 14 04:43:50.139: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Jan 14 04:43:50.143: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true +Jan 14 04:44:00.147: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Jan 14 04:44:00.147: INFO: Waiting for statefulset status.replicas updated to 0 +Jan 14 04:44:00.173: INFO: POD NODE PHASE GRACE CONDITIONS +Jan 14 04:44:00.173: INFO: ss-0 10.0.1.99 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:43:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:43:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:43:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:43:40 +0000 UTC }] +Jan 14 04:44:00.173: INFO: +Jan 14 04:44:00.173: INFO: StatefulSet ss has not reached scale 3, at 1 +Jan 14 04:44:01.178: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.98511738s +Jan 14 04:44:02.183: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.980108436s +Jan 14 04:44:03.188: INFO: Verifying statefulset ss 
doesn't scale past 3 for another 6.975249646s +Jan 14 04:44:04.192: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.970524531s +Jan 14 04:44:05.197: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.965919106s +Jan 14 04:44:06.201: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.961439697s +Jan 14 04:44:07.206: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.957448291s +Jan 14 04:44:08.210: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.952584328s +Jan 14 04:44:09.215: INFO: Verifying statefulset ss doesn't scale past 3 for another 948.128814ms +STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6874 01/14/23 04:44:10.215 +Jan 14 04:44:10.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=statefulset-6874 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Jan 14 04:44:10.330: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Jan 14 04:44:10.330: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Jan 14 04:44:10.330: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Jan 14 04:44:10.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=statefulset-6874 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Jan 14 04:44:10.439: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" +Jan 14 04:44:10.439: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Jan 14 04:44:10.439: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Jan 14 04:44:10.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=statefulset-6874 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Jan 14 04:44:10.558: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" +Jan 14 04:44:10.558: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Jan 14 04:44:10.558: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +Jan 14 04:44:10.562: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +Jan 14 04:44:10.562: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true +Jan 14 04:44:10.562: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Scale down will not halt with unhealthy stateful pod 01/14/23 04:44:10.562 +Jan 14 04:44:10.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=statefulset-6874 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Jan 14 04:44:10.675: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Jan 14 04:44:10.675: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Jan 14 04:44:10.675: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || 
true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Jan 14 04:44:10.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=statefulset-6874 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Jan 14 04:44:10.782: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Jan 14 04:44:10.782: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Jan 14 04:44:10.782: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Jan 14 04:44:10.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=statefulset-6874 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Jan 14 04:44:10.893: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Jan 14 04:44:10.893: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Jan 14 04:44:10.893: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Jan 14 04:44:10.893: INFO: Waiting for statefulset status.replicas updated to 0 +Jan 14 04:44:10.897: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 +Jan 14 04:44:20.905: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Jan 14 04:44:20.905: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false +Jan 14 04:44:20.905: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false +Jan 14 04:44:20.916: INFO: POD NODE PHASE GRACE CONDITIONS +Jan 14 04:44:20.916: INFO: ss-0 10.0.1.99 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:43:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:44:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:44:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:43:40 +0000 UTC }] +Jan 14 04:44:20.916: INFO: ss-1 10.0.1.106 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:44:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:44:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:44:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:44:00 +0000 UTC }] +Jan 14 04:44:20.916: INFO: ss-2 10.0.1.212 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:44:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:44:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:44:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:44:00 +0000 UTC }] +Jan 14 04:44:20.916: INFO: +Jan 14 04:44:20.916: INFO: StatefulSet ss has not reached scale 0, at 3 +Jan 14 04:44:21.921: INFO: POD NODE PHASE GRACE CONDITIONS +Jan 14 04:44:21.921: INFO: ss-1 10.0.1.106 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 
UTC 2023-01-14 04:44:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:44:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:44:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:44:00 +0000 UTC }] +Jan 14 04:44:21.921: INFO: +Jan 14 04:44:21.921: INFO: StatefulSet ss has not reached scale 0, at 1 +Jan 14 04:44:22.924: INFO: Verifying statefulset ss doesn't scale past 0 for another 7.99250133s +Jan 14 04:44:23.929: INFO: Verifying statefulset ss doesn't scale past 0 for another 6.988245183s +Jan 14 04:44:24.933: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.983798695s +Jan 14 04:44:25.937: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.979835372s +Jan 14 04:44:26.941: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.975757043s +Jan 14 04:44:27.945: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.971814644s +Jan 14 04:44:28.949: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.967861562s +Jan 14 04:44:29.953: INFO: Verifying statefulset ss doesn't scale past 0 for another 963.836987ms +STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-6874 01/14/23 04:44:30.953 +Jan 14 04:44:30.957: INFO: Scaling statefulset ss to 0 +Jan 14 04:44:30.967: INFO: Waiting for statefulset status.replicas updated to 0 +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:124 +Jan 14 04:44:30.969: INFO: Deleting all statefulset in ns statefulset-6874 +Jan 14 04:44:30.972: INFO: Scaling statefulset ss to 0 +Jan 14 04:44:30.980: INFO: Waiting for statefulset status.replicas updated to 0 +Jan 14 04:44:30.983: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + test/e2e/framework/node/init/init.go:32 +Jan 14 04:44:30.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] StatefulSet + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] StatefulSet + tear down framework | framework.go:193 +STEP: Destroying namespace "statefulset-6874" for this suite. 
01/14/23 04:44:31 +------------------------------ +• [SLOW TEST] [51.020 seconds] +[sig-apps] StatefulSet +test/e2e/apps/framework.go:23 + Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:103 + Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + test/e2e/apps/statefulset.go:697 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] StatefulSet + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:43:39.986 + Jan 14 04:43:39.986: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename statefulset 01/14/23 04:43:39.987 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:43:39.999 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:43:40.001 + [BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:98 + [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:113 + STEP: Creating service test in namespace statefulset-6874 01/14/23 04:43:40.003 + [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] + test/e2e/apps/statefulset.go:697 + STEP: Creating stateful set ss in namespace statefulset-6874 01/14/23 04:43:40.007 + STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6874 01/14/23 04:43:40.014 + Jan 14 04:43:40.017: INFO: Found 0 stateful pods, waiting for 1 + Jan 14 04:43:50.022: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true + STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod 01/14/23 04:43:50.022 + Jan 14 04:43:50.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=statefulset-6874 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' + Jan 14 04:43:50.139: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" + Jan 14 04:43:50.139: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" + Jan 14 04:43:50.139: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + + Jan 14 04:43:50.143: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true + Jan 14 04:44:00.147: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false + Jan 14 04:44:00.147: INFO: Waiting for statefulset status.replicas updated to 0 + Jan 14 04:44:00.173: INFO: POD NODE PHASE GRACE CONDITIONS + Jan 14 04:44:00.173: INFO: ss-0 10.0.1.99 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:43:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:43:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:43:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:43:40 +0000 UTC }] + Jan 14 04:44:00.173: INFO: + Jan 14 04:44:00.173: INFO: StatefulSet ss has not reached scale 3, at 1 + Jan 14 04:44:01.178: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.98511738s + Jan 14 04:44:02.183: INFO: Verifying statefulset ss doesn't scale past 3 for 
another 7.980108436s + Jan 14 04:44:03.188: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.975249646s + Jan 14 04:44:04.192: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.970524531s + Jan 14 04:44:05.197: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.965919106s + Jan 14 04:44:06.201: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.961439697s + Jan 14 04:44:07.206: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.957448291s + Jan 14 04:44:08.210: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.952584328s + Jan 14 04:44:09.215: INFO: Verifying statefulset ss doesn't scale past 3 for another 948.128814ms + STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6874 01/14/23 04:44:10.215 + Jan 14 04:44:10.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=statefulset-6874 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' + Jan 14 04:44:10.330: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" + Jan 14 04:44:10.330: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" + Jan 14 04:44:10.330: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + + Jan 14 04:44:10.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=statefulset-6874 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' + Jan 14 04:44:10.439: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" + Jan 14 04:44:10.439: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" + Jan 14 04:44:10.439: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + + Jan 14 04:44:10.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=statefulset-6874 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' + Jan 14 04:44:10.558: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" + Jan 14 04:44:10.558: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" + Jan 14 04:44:10.558: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + + Jan 14 04:44:10.562: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true + Jan 14 04:44:10.562: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true + Jan 14 04:44:10.562: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true + STEP: Scale down will not halt with unhealthy stateful pod 01/14/23 04:44:10.562 + Jan 14 04:44:10.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=statefulset-6874 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' + Jan 14 04:44:10.675: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" + Jan 14 04:44:10.675: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> 
'/tmp/index.html'\n" + Jan 14 04:44:10.675: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + + Jan 14 04:44:10.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=statefulset-6874 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' + Jan 14 04:44:10.782: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" + Jan 14 04:44:10.782: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" + Jan 14 04:44:10.782: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + + Jan 14 04:44:10.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=statefulset-6874 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' + Jan 14 04:44:10.893: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" + Jan 14 04:44:10.893: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" + Jan 14 04:44:10.893: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + + Jan 14 04:44:10.893: INFO: Waiting for statefulset status.replicas updated to 0 + Jan 14 04:44:10.897: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 + Jan 14 04:44:20.905: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false + Jan 14 04:44:20.905: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false + Jan 14 04:44:20.905: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false + Jan 14 04:44:20.916: INFO: POD NODE PHASE GRACE CONDITIONS + Jan 14 04:44:20.916: INFO: ss-0 10.0.1.99 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:43:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:44:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:44:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:43:40 +0000 UTC }] + Jan 14 04:44:20.916: INFO: ss-1 10.0.1.106 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:44:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:44:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:44:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:44:00 +0000 UTC }] + Jan 14 04:44:20.916: INFO: ss-2 10.0.1.212 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:44:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:44:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:44:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:44:00 +0000 UTC }] + Jan 14 04:44:20.916: INFO: + Jan 14 04:44:20.916: INFO: StatefulSet ss has not reached scale 0, at 3 + Jan 14 04:44:21.921: INFO: 
POD NODE PHASE GRACE CONDITIONS + Jan 14 04:44:21.921: INFO: ss-1 10.0.1.106 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:44:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:44:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:44:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-14 04:44:00 +0000 UTC }] + Jan 14 04:44:21.921: INFO: + Jan 14 04:44:21.921: INFO: StatefulSet ss has not reached scale 0, at 1 + Jan 14 04:44:22.924: INFO: Verifying statefulset ss doesn't scale past 0 for another 7.99250133s + Jan 14 04:44:23.929: INFO: Verifying statefulset ss doesn't scale past 0 for another 6.988245183s + Jan 14 04:44:24.933: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.983798695s + Jan 14 04:44:25.937: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.979835372s + Jan 14 04:44:26.941: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.975757043s + Jan 14 04:44:27.945: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.971814644s + Jan 14 04:44:28.949: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.967861562s + Jan 14 04:44:29.953: INFO: Verifying statefulset ss doesn't scale past 0 for another 963.836987ms + STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-6874 01/14/23 04:44:30.953 + Jan 14 04:44:30.957: INFO: Scaling statefulset ss to 0 + Jan 14 04:44:30.967: INFO: Waiting for statefulset status.replicas updated to 0 + [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:124 + Jan 14 04:44:30.969: INFO: Deleting all statefulset in ns statefulset-6874 + Jan 14 04:44:30.972: INFO: Scaling statefulset ss to 0 + Jan 14 04:44:30.980: INFO: Waiting for statefulset status.replicas updated to 0 + Jan 14 04:44:30.983: INFO: Deleting statefulset ss + [AfterEach] [sig-apps] StatefulSet + test/e2e/framework/node/init/init.go:32 + Jan 14 04:44:30.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] StatefulSet + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] StatefulSet + tear down framework | framework.go:193 + STEP: Destroying namespace "statefulset-6874" for this suite. 
01/14/23 04:44:31 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Secrets + should be consumable from pods in env vars [NodeConformance] [Conformance] + test/e2e/common/node/secrets.go:46 +[BeforeEach] [sig-node] Secrets + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:44:31.007 +Jan 14 04:44:31.007: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename secrets 01/14/23 04:44:31.008 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:44:31.019 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:44:31.021 +[BeforeEach] [sig-node] Secrets + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in env vars [NodeConformance] [Conformance] + test/e2e/common/node/secrets.go:46 +STEP: Creating secret with name secret-test-b91d746d-99a9-46f7-ac13-66f1a238d057 01/14/23 04:44:31.023 +STEP: Creating a pod to test consume secrets 01/14/23 04:44:31.028 +Jan 14 04:44:31.037: INFO: Waiting up to 5m0s for pod "pod-secrets-6dbad291-72fd-4e34-a8ba-7112ff3320b6" in namespace "secrets-3045" to be "Succeeded or Failed" +Jan 14 04:44:31.040: INFO: Pod "pod-secrets-6dbad291-72fd-4e34-a8ba-7112ff3320b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.687483ms +Jan 14 04:44:33.045: INFO: Pod "pod-secrets-6dbad291-72fd-4e34-a8ba-7112ff3320b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007628242s +Jan 14 04:44:35.044: INFO: Pod "pod-secrets-6dbad291-72fd-4e34-a8ba-7112ff3320b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006677538s +STEP: Saw pod success 01/14/23 04:44:35.044 +Jan 14 04:44:35.044: INFO: Pod "pod-secrets-6dbad291-72fd-4e34-a8ba-7112ff3320b6" satisfied condition "Succeeded or Failed" +Jan 14 04:44:35.047: INFO: Trying to get logs from node 10.0.1.106 pod pod-secrets-6dbad291-72fd-4e34-a8ba-7112ff3320b6 container secret-env-test: +STEP: delete the pod 01/14/23 04:44:35.055 +Jan 14 04:44:35.067: INFO: Waiting for pod pod-secrets-6dbad291-72fd-4e34-a8ba-7112ff3320b6 to disappear +Jan 14 04:44:35.070: INFO: Pod pod-secrets-6dbad291-72fd-4e34-a8ba-7112ff3320b6 no longer exists +[AfterEach] [sig-node] Secrets + test/e2e/framework/node/init/init.go:32 +Jan 14 04:44:35.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Secrets + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Secrets + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Secrets + tear down framework | framework.go:193 +STEP: Destroying namespace "secrets-3045" for this suite. 
01/14/23 04:44:35.075 +------------------------------ +• [4.074 seconds] +[sig-node] Secrets +test/e2e/common/node/framework.go:23 + should be consumable from pods in env vars [NodeConformance] [Conformance] + test/e2e/common/node/secrets.go:46 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Secrets + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:44:31.007 + Jan 14 04:44:31.007: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename secrets 01/14/23 04:44:31.008 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:44:31.019 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:44:31.021 + [BeforeEach] [sig-node] Secrets + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in env vars [NodeConformance] [Conformance] + test/e2e/common/node/secrets.go:46 + STEP: Creating secret with name secret-test-b91d746d-99a9-46f7-ac13-66f1a238d057 01/14/23 04:44:31.023 + STEP: Creating a pod to test consume secrets 01/14/23 04:44:31.028 + Jan 14 04:44:31.037: INFO: Waiting up to 5m0s for pod "pod-secrets-6dbad291-72fd-4e34-a8ba-7112ff3320b6" in namespace "secrets-3045" to be "Succeeded or Failed" + Jan 14 04:44:31.040: INFO: Pod "pod-secrets-6dbad291-72fd-4e34-a8ba-7112ff3320b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.687483ms + Jan 14 04:44:33.045: INFO: Pod "pod-secrets-6dbad291-72fd-4e34-a8ba-7112ff3320b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007628242s + Jan 14 04:44:35.044: INFO: Pod "pod-secrets-6dbad291-72fd-4e34-a8ba-7112ff3320b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.006677538s + STEP: Saw pod success 01/14/23 04:44:35.044 + Jan 14 04:44:35.044: INFO: Pod "pod-secrets-6dbad291-72fd-4e34-a8ba-7112ff3320b6" satisfied condition "Succeeded or Failed" + Jan 14 04:44:35.047: INFO: Trying to get logs from node 10.0.1.106 pod pod-secrets-6dbad291-72fd-4e34-a8ba-7112ff3320b6 container secret-env-test: + STEP: delete the pod 01/14/23 04:44:35.055 + Jan 14 04:44:35.067: INFO: Waiting for pod pod-secrets-6dbad291-72fd-4e34-a8ba-7112ff3320b6 to disappear + Jan 14 04:44:35.070: INFO: Pod pod-secrets-6dbad291-72fd-4e34-a8ba-7112ff3320b6 no longer exists + [AfterEach] [sig-node] Secrets + test/e2e/framework/node/init/init.go:32 + Jan 14 04:44:35.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Secrets + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Secrets + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Secrets + tear down framework | framework.go:193 + STEP: Destroying namespace "secrets-3045" for this suite. 
01/14/23 04:44:35.075 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPreemption [Serial] + validates basic preemption works [Conformance] + test/e2e/scheduling/preemption.go:129 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:44:35.089 +Jan 14 04:44:35.089: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename sched-preemption 01/14/23 04:44:35.089 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:44:35.104 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:44:35.106 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:96 +Jan 14 04:44:35.121: INFO: Waiting up to 1m0s for all nodes to be ready +Jan 14 04:45:35.160: INFO: Waiting for terminating namespaces to be deleted... +[It] validates basic preemption works [Conformance] + test/e2e/scheduling/preemption.go:129 +STEP: Create pods that use 4/5 of node resources. 01/14/23 04:45:35.163 +Jan 14 04:45:35.189: INFO: Created pod: pod0-0-sched-preemption-low-priority +Jan 14 04:45:35.197: INFO: Created pod: pod0-1-sched-preemption-medium-priority +Jan 14 04:45:35.214: INFO: Created pod: pod1-0-sched-preemption-medium-priority +Jan 14 04:45:35.222: INFO: Created pod: pod1-1-sched-preemption-medium-priority +Jan 14 04:45:35.240: INFO: Created pod: pod2-0-sched-preemption-medium-priority +Jan 14 04:45:35.247: INFO: Created pod: pod2-1-sched-preemption-medium-priority +STEP: Wait for pods to be scheduled. 01/14/23 04:45:35.247 +Jan 14 04:45:35.247: INFO: Waiting up to 5m0s for pod "pod0-0-sched-preemption-low-priority" in namespace "sched-preemption-4671" to be "running" +Jan 14 04:45:35.250: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 3.025395ms +Jan 14 04:45:37.255: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007286056s +Jan 14 04:45:39.255: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007785683s +Jan 14 04:45:41.255: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 6.007748576s +Jan 14 04:45:43.255: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 8.008157552s +Jan 14 04:45:45.255: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Running", Reason="", readiness=true. Elapsed: 10.007344213s +Jan 14 04:45:45.255: INFO: Pod "pod0-0-sched-preemption-low-priority" satisfied condition "running" +Jan 14 04:45:45.255: INFO: Waiting up to 5m0s for pod "pod0-1-sched-preemption-medium-priority" in namespace "sched-preemption-4671" to be "running" +Jan 14 04:45:45.258: INFO: Pod "pod0-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. 
Elapsed: 3.221738ms +Jan 14 04:45:45.258: INFO: Pod "pod0-1-sched-preemption-medium-priority" satisfied condition "running" +Jan 14 04:45:45.258: INFO: Waiting up to 5m0s for pod "pod1-0-sched-preemption-medium-priority" in namespace "sched-preemption-4671" to be "running" +Jan 14 04:45:45.261: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.851683ms +Jan 14 04:45:45.261: INFO: Pod "pod1-0-sched-preemption-medium-priority" satisfied condition "running" +Jan 14 04:45:45.261: INFO: Waiting up to 5m0s for pod "pod1-1-sched-preemption-medium-priority" in namespace "sched-preemption-4671" to be "running" +Jan 14 04:45:45.263: INFO: Pod "pod1-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.588301ms +Jan 14 04:45:45.263: INFO: Pod "pod1-1-sched-preemption-medium-priority" satisfied condition "running" +Jan 14 04:45:45.263: INFO: Waiting up to 5m0s for pod "pod2-0-sched-preemption-medium-priority" in namespace "sched-preemption-4671" to be "running" +Jan 14 04:45:45.266: INFO: Pod "pod2-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.573928ms +Jan 14 04:45:45.266: INFO: Pod "pod2-0-sched-preemption-medium-priority" satisfied condition "running" +Jan 14 04:45:45.266: INFO: Waiting up to 5m0s for pod "pod2-1-sched-preemption-medium-priority" in namespace "sched-preemption-4671" to be "running" +Jan 14 04:45:45.269: INFO: Pod "pod2-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.729013ms +Jan 14 04:45:45.269: INFO: Pod "pod2-1-sched-preemption-medium-priority" satisfied condition "running" +STEP: Run a high priority pod that has same requirements as that of lower priority pod 01/14/23 04:45:45.269 +Jan 14 04:45:45.276: INFO: Waiting up to 2m0s for pod "preemptor-pod" in namespace "sched-preemption-4671" to be "running" +Jan 14 04:45:45.279: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.808689ms +Jan 14 04:45:47.283: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007565895s +Jan 14 04:45:49.284: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008003879s +Jan 14 04:45:51.284: INFO: Pod "preemptor-pod": Phase="Running", Reason="", readiness=true. Elapsed: 6.008144431s +Jan 14 04:45:51.284: INFO: Pod "preemptor-pod" satisfied condition "running" +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/node/init/init.go:32 +Jan 14 04:45:51.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:84 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "sched-preemption-4671" for this suite. 
01/14/23 04:45:51.345 +------------------------------ +• [SLOW TEST] [76.261 seconds] +[sig-scheduling] SchedulerPreemption [Serial] +test/e2e/scheduling/framework.go:40 + validates basic preemption works [Conformance] + test/e2e/scheduling/preemption.go:129 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:44:35.089 + Jan 14 04:44:35.089: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename sched-preemption 01/14/23 04:44:35.089 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:44:35.104 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:44:35.106 + [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:96 + Jan 14 04:44:35.121: INFO: Waiting up to 1m0s for all nodes to be ready + Jan 14 04:45:35.160: INFO: Waiting for terminating namespaces to be deleted... + [It] validates basic preemption works [Conformance] + test/e2e/scheduling/preemption.go:129 + STEP: Create pods that use 4/5 of node resources. 01/14/23 04:45:35.163 + Jan 14 04:45:35.189: INFO: Created pod: pod0-0-sched-preemption-low-priority + Jan 14 04:45:35.197: INFO: Created pod: pod0-1-sched-preemption-medium-priority + Jan 14 04:45:35.214: INFO: Created pod: pod1-0-sched-preemption-medium-priority + Jan 14 04:45:35.222: INFO: Created pod: pod1-1-sched-preemption-medium-priority + Jan 14 04:45:35.240: INFO: Created pod: pod2-0-sched-preemption-medium-priority + Jan 14 04:45:35.247: INFO: Created pod: pod2-1-sched-preemption-medium-priority + STEP: Wait for pods to be scheduled. 01/14/23 04:45:35.247 + Jan 14 04:45:35.247: INFO: Waiting up to 5m0s for pod "pod0-0-sched-preemption-low-priority" in namespace "sched-preemption-4671" to be "running" + Jan 14 04:45:35.250: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 3.025395ms + Jan 14 04:45:37.255: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007286056s + Jan 14 04:45:39.255: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007785683s + Jan 14 04:45:41.255: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 6.007748576s + Jan 14 04:45:43.255: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Pending", Reason="", readiness=false. Elapsed: 8.008157552s + Jan 14 04:45:45.255: INFO: Pod "pod0-0-sched-preemption-low-priority": Phase="Running", Reason="", readiness=true. Elapsed: 10.007344213s + Jan 14 04:45:45.255: INFO: Pod "pod0-0-sched-preemption-low-priority" satisfied condition "running" + Jan 14 04:45:45.255: INFO: Waiting up to 5m0s for pod "pod0-1-sched-preemption-medium-priority" in namespace "sched-preemption-4671" to be "running" + Jan 14 04:45:45.258: INFO: Pod "pod0-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. 
Elapsed: 3.221738ms + Jan 14 04:45:45.258: INFO: Pod "pod0-1-sched-preemption-medium-priority" satisfied condition "running" + Jan 14 04:45:45.258: INFO: Waiting up to 5m0s for pod "pod1-0-sched-preemption-medium-priority" in namespace "sched-preemption-4671" to be "running" + Jan 14 04:45:45.261: INFO: Pod "pod1-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.851683ms + Jan 14 04:45:45.261: INFO: Pod "pod1-0-sched-preemption-medium-priority" satisfied condition "running" + Jan 14 04:45:45.261: INFO: Waiting up to 5m0s for pod "pod1-1-sched-preemption-medium-priority" in namespace "sched-preemption-4671" to be "running" + Jan 14 04:45:45.263: INFO: Pod "pod1-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.588301ms + Jan 14 04:45:45.263: INFO: Pod "pod1-1-sched-preemption-medium-priority" satisfied condition "running" + Jan 14 04:45:45.263: INFO: Waiting up to 5m0s for pod "pod2-0-sched-preemption-medium-priority" in namespace "sched-preemption-4671" to be "running" + Jan 14 04:45:45.266: INFO: Pod "pod2-0-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.573928ms + Jan 14 04:45:45.266: INFO: Pod "pod2-0-sched-preemption-medium-priority" satisfied condition "running" + Jan 14 04:45:45.266: INFO: Waiting up to 5m0s for pod "pod2-1-sched-preemption-medium-priority" in namespace "sched-preemption-4671" to be "running" + Jan 14 04:45:45.269: INFO: Pod "pod2-1-sched-preemption-medium-priority": Phase="Running", Reason="", readiness=true. Elapsed: 2.729013ms + Jan 14 04:45:45.269: INFO: Pod "pod2-1-sched-preemption-medium-priority" satisfied condition "running" + STEP: Run a high priority pod that has same requirements as that of lower priority pod 01/14/23 04:45:45.269 + Jan 14 04:45:45.276: INFO: Waiting up to 2m0s for pod "preemptor-pod" in namespace "sched-preemption-4671" to be "running" + Jan 14 04:45:45.279: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.808689ms + Jan 14 04:45:47.283: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007565895s + Jan 14 04:45:49.284: INFO: Pod "preemptor-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008003879s + Jan 14 04:45:51.284: INFO: Pod "preemptor-pod": Phase="Running", Reason="", readiness=true. Elapsed: 6.008144431s + Jan 14 04:45:51.284: INFO: Pod "preemptor-pod" satisfied condition "running" + [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/node/init/init.go:32 + Jan 14 04:45:51.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/scheduling/preemption.go:84 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPreemption [Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "sched-preemption-4671" for this suite. 
01/14/23 04:45:51.345 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSS +------------------------------ +[sig-apps] DisruptionController + should update/patch PodDisruptionBudget status [Conformance] + test/e2e/apps/disruption.go:164 +[BeforeEach] [sig-apps] DisruptionController + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:45:51.35 +Jan 14 04:45:51.351: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename disruption 01/14/23 04:45:51.351 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:45:51.363 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:45:51.365 +[BeforeEach] [sig-apps] DisruptionController + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] DisruptionController + test/e2e/apps/disruption.go:72 +[It] should update/patch PodDisruptionBudget status [Conformance] + test/e2e/apps/disruption.go:164 +STEP: Waiting for the pdb to be processed 01/14/23 04:45:51.374 +STEP: Updating PodDisruptionBudget status 01/14/23 04:45:53.382 +STEP: Waiting for all pods to be running 01/14/23 04:45:53.39 +Jan 14 04:45:53.394: INFO: running pods: 0 < 1 +STEP: locating a running pod 01/14/23 04:45:55.398 +STEP: Waiting for the pdb to be processed 01/14/23 04:45:55.413 +STEP: Patching PodDisruptionBudget status 01/14/23 04:45:55.419 +STEP: Waiting for the pdb to be processed 01/14/23 04:45:55.429 +[AfterEach] [sig-apps] DisruptionController + test/e2e/framework/node/init/init.go:32 +Jan 14 04:45:55.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] DisruptionController + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] DisruptionController + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] DisruptionController + tear down framework | framework.go:193 +STEP: Destroying namespace "disruption-3890" for this suite. 
01/14/23 04:45:55.438 +------------------------------ +• [4.094 seconds] +[sig-apps] DisruptionController +test/e2e/apps/framework.go:23 + should update/patch PodDisruptionBudget status [Conformance] + test/e2e/apps/disruption.go:164 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] DisruptionController + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:45:51.35 + Jan 14 04:45:51.351: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename disruption 01/14/23 04:45:51.351 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:45:51.363 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:45:51.365 + [BeforeEach] [sig-apps] DisruptionController + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] DisruptionController + test/e2e/apps/disruption.go:72 + [It] should update/patch PodDisruptionBudget status [Conformance] + test/e2e/apps/disruption.go:164 + STEP: Waiting for the pdb to be processed 01/14/23 04:45:51.374 + STEP: Updating PodDisruptionBudget status 01/14/23 04:45:53.382 + STEP: Waiting for all pods to be running 01/14/23 04:45:53.39 + Jan 14 04:45:53.394: INFO: running pods: 0 < 1 + STEP: locating a running pod 01/14/23 04:45:55.398 + STEP: Waiting for the pdb to be processed 01/14/23 04:45:55.413 + STEP: Patching PodDisruptionBudget status 01/14/23 04:45:55.419 + STEP: Waiting for the pdb to be processed 01/14/23 04:45:55.429 + [AfterEach] [sig-apps] DisruptionController + test/e2e/framework/node/init/init.go:32 + Jan 14 04:45:55.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] DisruptionController + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] DisruptionController + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] DisruptionController + tear down framework | framework.go:193 + STEP: Destroying namespace "disruption-3890" for this suite. 
01/14/23 04:45:55.438 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should orphan pods created by rc if delete options say so [Conformance] + test/e2e/apimachinery/garbage_collector.go:370 +[BeforeEach] [sig-api-machinery] Garbage collector + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:45:55.445 +Jan 14 04:45:55.445: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename gc 01/14/23 04:45:55.446 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:45:55.458 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:45:55.46 +[BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:31 +[It] should orphan pods created by rc if delete options say so [Conformance] + test/e2e/apimachinery/garbage_collector.go:370 +STEP: create the rc 01/14/23 04:45:55.467 +STEP: delete the rc 01/14/23 04:46:00.476 +STEP: wait for the rc to be deleted 01/14/23 04:46:00.482 +STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods 01/14/23 04:46:05.486 +STEP: Gathering metrics 01/14/23 04:46:35.498 +Jan 14 04:46:35.524: INFO: Waiting up to 5m0s for pod "kube-controller-manager-10.0.1.231" in namespace "kube-system" to be "running and ready" +Jan 14 04:46:35.527: INFO: Pod "kube-controller-manager-10.0.1.231": Phase="Running", Reason="", readiness=true. Elapsed: 3.282512ms +Jan 14 04:46:35.527: INFO: The phase of Pod kube-controller-manager-10.0.1.231 is Running (Ready = true) +Jan 14 04:46:35.527: INFO: Pod "kube-controller-manager-10.0.1.231" satisfied condition "running and ready" +Jan 14 04:46:35.579: INFO: For apiserver_request_total: +For apiserver_request_latency_seconds: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: +For garbage_collector_attempt_to_delete_work_duration: +For garbage_collector_attempt_to_orphan_queue_latency: +For garbage_collector_attempt_to_orphan_work_duration: +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: +For garbage_collector_graph_changes_work_duration: +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: +For namespace_queue_latency_sum: +For namespace_queue_latency_count: +For namespace_retries: +For namespace_work_duration: +For namespace_work_duration_sum: +For namespace_work_duration_count: +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +Jan 14 04:46:35.579: INFO: Deleting pod "simpletest.rc-2brr5" in namespace "gc-3809" +Jan 14 04:46:35.593: INFO: Deleting pod "simpletest.rc-2qpbr" in namespace "gc-3809" +Jan 14 04:46:35.607: INFO: Deleting pod "simpletest.rc-2sq6l" in namespace "gc-3809" +Jan 14 04:46:35.618: INFO: Deleting pod "simpletest.rc-2vr5z" in namespace "gc-3809" +Jan 14 04:46:35.631: INFO: Deleting pod "simpletest.rc-2znll" in namespace "gc-3809" +Jan 14 04:46:35.641: INFO: Deleting pod "simpletest.rc-48kjf" in namespace "gc-3809" +Jan 14 04:46:35.650: INFO: Deleting pod "simpletest.rc-4b77n" in namespace "gc-3809" +Jan 14 04:46:35.663: INFO: Deleting pod "simpletest.rc-4nv85" in namespace "gc-3809" +Jan 14 04:46:35.675: INFO: Deleting pod "simpletest.rc-4pt8l" in namespace "gc-3809" +Jan 14 04:46:35.684: INFO: 
Deleting pod "simpletest.rc-6f9fl" in namespace "gc-3809" +Jan 14 04:46:35.694: INFO: Deleting pod "simpletest.rc-6skwz" in namespace "gc-3809" +Jan 14 04:46:35.706: INFO: Deleting pod "simpletest.rc-6w95t" in namespace "gc-3809" +Jan 14 04:46:35.719: INFO: Deleting pod "simpletest.rc-6x8qh" in namespace "gc-3809" +Jan 14 04:46:35.730: INFO: Deleting pod "simpletest.rc-7dsxg" in namespace "gc-3809" +Jan 14 04:46:35.742: INFO: Deleting pod "simpletest.rc-7f5w7" in namespace "gc-3809" +Jan 14 04:46:35.753: INFO: Deleting pod "simpletest.rc-88tw4" in namespace "gc-3809" +Jan 14 04:46:35.763: INFO: Deleting pod "simpletest.rc-89mnw" in namespace "gc-3809" +Jan 14 04:46:35.772: INFO: Deleting pod "simpletest.rc-8dnjm" in namespace "gc-3809" +Jan 14 04:46:35.786: INFO: Deleting pod "simpletest.rc-8zb2z" in namespace "gc-3809" +Jan 14 04:46:35.798: INFO: Deleting pod "simpletest.rc-95jtj" in namespace "gc-3809" +Jan 14 04:46:35.809: INFO: Deleting pod "simpletest.rc-964k5" in namespace "gc-3809" +Jan 14 04:46:35.820: INFO: Deleting pod "simpletest.rc-9bl88" in namespace "gc-3809" +Jan 14 04:46:35.844: INFO: Deleting pod "simpletest.rc-9qpr8" in namespace "gc-3809" +Jan 14 04:46:35.856: INFO: Deleting pod "simpletest.rc-9qv6c" in namespace "gc-3809" +Jan 14 04:46:35.869: INFO: Deleting pod "simpletest.rc-9r855" in namespace "gc-3809" +Jan 14 04:46:35.879: INFO: Deleting pod "simpletest.rc-b997z" in namespace "gc-3809" +Jan 14 04:46:35.890: INFO: Deleting pod "simpletest.rc-bghwb" in namespace "gc-3809" +Jan 14 04:46:35.902: INFO: Deleting pod "simpletest.rc-bhs48" in namespace "gc-3809" +Jan 14 04:46:35.912: INFO: Deleting pod "simpletest.rc-bmtl2" in namespace "gc-3809" +Jan 14 04:46:35.923: INFO: Deleting pod "simpletest.rc-bqgf4" in namespace "gc-3809" +Jan 14 04:46:35.934: INFO: Deleting pod "simpletest.rc-bsgh8" in namespace "gc-3809" +Jan 14 04:46:35.946: INFO: Deleting pod "simpletest.rc-bsxbk" in namespace "gc-3809" +Jan 14 04:46:35.957: INFO: Deleting pod "simpletest.rc-bt8hc" in namespace "gc-3809" +Jan 14 04:46:35.968: INFO: Deleting pod "simpletest.rc-bwljc" in namespace "gc-3809" +Jan 14 04:46:35.979: INFO: Deleting pod "simpletest.rc-d2ncw" in namespace "gc-3809" +Jan 14 04:46:35.991: INFO: Deleting pod "simpletest.rc-d46ls" in namespace "gc-3809" +Jan 14 04:46:36.002: INFO: Deleting pod "simpletest.rc-f7x9m" in namespace "gc-3809" +Jan 14 04:46:36.017: INFO: Deleting pod "simpletest.rc-fbxtk" in namespace "gc-3809" +Jan 14 04:46:36.035: INFO: Deleting pod "simpletest.rc-fgpkz" in namespace "gc-3809" +Jan 14 04:46:36.047: INFO: Deleting pod "simpletest.rc-g564f" in namespace "gc-3809" +Jan 14 04:46:36.058: INFO: Deleting pod "simpletest.rc-g92vc" in namespace "gc-3809" +Jan 14 04:46:36.069: INFO: Deleting pod "simpletest.rc-gd4w8" in namespace "gc-3809" +Jan 14 04:46:36.081: INFO: Deleting pod "simpletest.rc-ghq78" in namespace "gc-3809" +Jan 14 04:46:36.093: INFO: Deleting pod "simpletest.rc-gmfb5" in namespace "gc-3809" +Jan 14 04:46:36.104: INFO: Deleting pod "simpletest.rc-gpvfj" in namespace "gc-3809" +Jan 14 04:46:36.116: INFO: Deleting pod "simpletest.rc-h7vpt" in namespace "gc-3809" +Jan 14 04:46:36.129: INFO: Deleting pod "simpletest.rc-hfw82" in namespace "gc-3809" +Jan 14 04:46:36.152: INFO: Deleting pod "simpletest.rc-hjthw" in namespace "gc-3809" +Jan 14 04:46:36.161: INFO: Deleting pod "simpletest.rc-hpzc6" in namespace "gc-3809" +Jan 14 04:46:36.174: INFO: Deleting pod "simpletest.rc-j8rsw" in namespace "gc-3809" +Jan 14 04:46:36.187: INFO: Deleting pod 
"simpletest.rc-j9qcp" in namespace "gc-3809" +Jan 14 04:46:36.203: INFO: Deleting pod "simpletest.rc-jk7zs" in namespace "gc-3809" +Jan 14 04:46:36.218: INFO: Deleting pod "simpletest.rc-jzkg6" in namespace "gc-3809" +Jan 14 04:46:36.232: INFO: Deleting pod "simpletest.rc-k7dsl" in namespace "gc-3809" +Jan 14 04:46:36.244: INFO: Deleting pod "simpletest.rc-kdrv9" in namespace "gc-3809" +Jan 14 04:46:36.255: INFO: Deleting pod "simpletest.rc-kmck4" in namespace "gc-3809" +Jan 14 04:46:36.266: INFO: Deleting pod "simpletest.rc-kmhtv" in namespace "gc-3809" +Jan 14 04:46:36.275: INFO: Deleting pod "simpletest.rc-knmsd" in namespace "gc-3809" +Jan 14 04:46:36.287: INFO: Deleting pod "simpletest.rc-l4b49" in namespace "gc-3809" +Jan 14 04:46:36.295: INFO: Deleting pod "simpletest.rc-l5bqz" in namespace "gc-3809" +Jan 14 04:46:36.304: INFO: Deleting pod "simpletest.rc-lh9cq" in namespace "gc-3809" +Jan 14 04:46:36.315: INFO: Deleting pod "simpletest.rc-lv284" in namespace "gc-3809" +Jan 14 04:46:36.327: INFO: Deleting pod "simpletest.rc-m8hqb" in namespace "gc-3809" +Jan 14 04:46:36.350: INFO: Deleting pod "simpletest.rc-nhbkj" in namespace "gc-3809" +Jan 14 04:46:36.398: INFO: Deleting pod "simpletest.rc-pl46h" in namespace "gc-3809" +Jan 14 04:46:36.450: INFO: Deleting pod "simpletest.rc-prndw" in namespace "gc-3809" +Jan 14 04:46:36.499: INFO: Deleting pod "simpletest.rc-q5ncz" in namespace "gc-3809" +Jan 14 04:46:36.548: INFO: Deleting pod "simpletest.rc-q7krt" in namespace "gc-3809" +Jan 14 04:46:36.600: INFO: Deleting pod "simpletest.rc-qdsm2" in namespace "gc-3809" +Jan 14 04:46:36.649: INFO: Deleting pod "simpletest.rc-qn6p8" in namespace "gc-3809" +Jan 14 04:46:36.698: INFO: Deleting pod "simpletest.rc-qtj8n" in namespace "gc-3809" +Jan 14 04:46:36.750: INFO: Deleting pod "simpletest.rc-r9l46" in namespace "gc-3809" +Jan 14 04:46:36.799: INFO: Deleting pod "simpletest.rc-r9p9w" in namespace "gc-3809" +Jan 14 04:46:36.849: INFO: Deleting pod "simpletest.rc-rkwns" in namespace "gc-3809" +Jan 14 04:46:36.899: INFO: Deleting pod "simpletest.rc-rr9b9" in namespace "gc-3809" +Jan 14 04:46:36.949: INFO: Deleting pod "simpletest.rc-rvs4d" in namespace "gc-3809" +Jan 14 04:46:37.000: INFO: Deleting pod "simpletest.rc-swgs2" in namespace "gc-3809" +Jan 14 04:46:37.048: INFO: Deleting pod "simpletest.rc-t52wm" in namespace "gc-3809" +Jan 14 04:46:37.098: INFO: Deleting pod "simpletest.rc-tcxg9" in namespace "gc-3809" +Jan 14 04:46:37.149: INFO: Deleting pod "simpletest.rc-tgw6n" in namespace "gc-3809" +Jan 14 04:46:37.201: INFO: Deleting pod "simpletest.rc-tr8f9" in namespace "gc-3809" +Jan 14 04:46:37.249: INFO: Deleting pod "simpletest.rc-ts9rb" in namespace "gc-3809" +Jan 14 04:46:37.299: INFO: Deleting pod "simpletest.rc-wq9mz" in namespace "gc-3809" +Jan 14 04:46:37.349: INFO: Deleting pod "simpletest.rc-wqzzh" in namespace "gc-3809" +Jan 14 04:46:37.399: INFO: Deleting pod "simpletest.rc-xh7kr" in namespace "gc-3809" +Jan 14 04:46:37.457: INFO: Deleting pod "simpletest.rc-xsrkh" in namespace "gc-3809" +Jan 14 04:46:37.500: INFO: Deleting pod "simpletest.rc-z4crs" in namespace "gc-3809" +Jan 14 04:46:37.547: INFO: Deleting pod "simpletest.rc-z8xb9" in namespace "gc-3809" +Jan 14 04:46:37.606: INFO: Deleting pod "simpletest.rc-zj8xw" in namespace "gc-3809" +Jan 14 04:46:37.650: INFO: Deleting pod "simpletest.rc-zls2c" in namespace "gc-3809" +Jan 14 04:46:37.702: INFO: Deleting pod "simpletest.rc-zx795" in namespace "gc-3809" +[AfterEach] [sig-api-machinery] Garbage collector + 
test/e2e/framework/node/init/init.go:32 +Jan 14 04:46:37.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] Garbage collector + tear down framework | framework.go:193 +STEP: Destroying namespace "gc-3809" for this suite. 01/14/23 04:46:37.796 +------------------------------ +• [SLOW TEST] [42.398 seconds] +[sig-api-machinery] Garbage collector +test/e2e/apimachinery/framework.go:23 + should orphan pods created by rc if delete options say so [Conformance] + test/e2e/apimachinery/garbage_collector.go:370 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Garbage collector + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:45:55.445 + Jan 14 04:45:55.445: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename gc 01/14/23 04:45:55.446 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:45:55.458 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:45:55.46 + [BeforeEach] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:31 + [It] should orphan pods created by rc if delete options say so [Conformance] + test/e2e/apimachinery/garbage_collector.go:370 + STEP: create the rc 01/14/23 04:45:55.467 + STEP: delete the rc 01/14/23 04:46:00.476 + STEP: wait for the rc to be deleted 01/14/23 04:46:00.482 + STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods 01/14/23 04:46:05.486 + STEP: Gathering metrics 01/14/23 04:46:35.498 + Jan 14 04:46:35.524: INFO: Waiting up to 5m0s for pod "kube-controller-manager-10.0.1.231" in namespace "kube-system" to be "running and ready" + Jan 14 04:46:35.527: INFO: Pod "kube-controller-manager-10.0.1.231": Phase="Running", Reason="", readiness=true. 
Elapsed: 3.282512ms + Jan 14 04:46:35.527: INFO: The phase of Pod kube-controller-manager-10.0.1.231 is Running (Ready = true) + Jan 14 04:46:35.527: INFO: Pod "kube-controller-manager-10.0.1.231" satisfied condition "running and ready" + Jan 14 04:46:35.579: INFO: For apiserver_request_total: + For apiserver_request_latency_seconds: + For apiserver_init_events_total: + For garbage_collector_attempt_to_delete_queue_latency: + For garbage_collector_attempt_to_delete_work_duration: + For garbage_collector_attempt_to_orphan_queue_latency: + For garbage_collector_attempt_to_orphan_work_duration: + For garbage_collector_dirty_processing_latency_microseconds: + For garbage_collector_event_processing_latency_microseconds: + For garbage_collector_graph_changes_queue_latency: + For garbage_collector_graph_changes_work_duration: + For garbage_collector_orphan_processing_latency_microseconds: + For namespace_queue_latency: + For namespace_queue_latency_sum: + For namespace_queue_latency_count: + For namespace_retries: + For namespace_work_duration: + For namespace_work_duration_sum: + For namespace_work_duration_count: + For function_duration_seconds: + For errors_total: + For evicted_pods_total: + + Jan 14 04:46:35.579: INFO: Deleting pod "simpletest.rc-2brr5" in namespace "gc-3809" + Jan 14 04:46:35.593: INFO: Deleting pod "simpletest.rc-2qpbr" in namespace "gc-3809" + Jan 14 04:46:35.607: INFO: Deleting pod "simpletest.rc-2sq6l" in namespace "gc-3809" + Jan 14 04:46:35.618: INFO: Deleting pod "simpletest.rc-2vr5z" in namespace "gc-3809" + Jan 14 04:46:35.631: INFO: Deleting pod "simpletest.rc-2znll" in namespace "gc-3809" + Jan 14 04:46:35.641: INFO: Deleting pod "simpletest.rc-48kjf" in namespace "gc-3809" + Jan 14 04:46:35.650: INFO: Deleting pod "simpletest.rc-4b77n" in namespace "gc-3809" + Jan 14 04:46:35.663: INFO: Deleting pod "simpletest.rc-4nv85" in namespace "gc-3809" + Jan 14 04:46:35.675: INFO: Deleting pod "simpletest.rc-4pt8l" in namespace "gc-3809" + Jan 14 04:46:35.684: INFO: Deleting pod "simpletest.rc-6f9fl" in namespace "gc-3809" + Jan 14 04:46:35.694: INFO: Deleting pod "simpletest.rc-6skwz" in namespace "gc-3809" + Jan 14 04:46:35.706: INFO: Deleting pod "simpletest.rc-6w95t" in namespace "gc-3809" + Jan 14 04:46:35.719: INFO: Deleting pod "simpletest.rc-6x8qh" in namespace "gc-3809" + Jan 14 04:46:35.730: INFO: Deleting pod "simpletest.rc-7dsxg" in namespace "gc-3809" + Jan 14 04:46:35.742: INFO: Deleting pod "simpletest.rc-7f5w7" in namespace "gc-3809" + Jan 14 04:46:35.753: INFO: Deleting pod "simpletest.rc-88tw4" in namespace "gc-3809" + Jan 14 04:46:35.763: INFO: Deleting pod "simpletest.rc-89mnw" in namespace "gc-3809" + Jan 14 04:46:35.772: INFO: Deleting pod "simpletest.rc-8dnjm" in namespace "gc-3809" + Jan 14 04:46:35.786: INFO: Deleting pod "simpletest.rc-8zb2z" in namespace "gc-3809" + Jan 14 04:46:35.798: INFO: Deleting pod "simpletest.rc-95jtj" in namespace "gc-3809" + Jan 14 04:46:35.809: INFO: Deleting pod "simpletest.rc-964k5" in namespace "gc-3809" + Jan 14 04:46:35.820: INFO: Deleting pod "simpletest.rc-9bl88" in namespace "gc-3809" + Jan 14 04:46:35.844: INFO: Deleting pod "simpletest.rc-9qpr8" in namespace "gc-3809" + Jan 14 04:46:35.856: INFO: Deleting pod "simpletest.rc-9qv6c" in namespace "gc-3809" + Jan 14 04:46:35.869: INFO: Deleting pod "simpletest.rc-9r855" in namespace "gc-3809" + Jan 14 04:46:35.879: INFO: Deleting pod "simpletest.rc-b997z" in namespace "gc-3809" + Jan 14 04:46:35.890: INFO: Deleting pod "simpletest.rc-bghwb" in namespace 
"gc-3809" + Jan 14 04:46:35.902: INFO: Deleting pod "simpletest.rc-bhs48" in namespace "gc-3809" + Jan 14 04:46:35.912: INFO: Deleting pod "simpletest.rc-bmtl2" in namespace "gc-3809" + Jan 14 04:46:35.923: INFO: Deleting pod "simpletest.rc-bqgf4" in namespace "gc-3809" + Jan 14 04:46:35.934: INFO: Deleting pod "simpletest.rc-bsgh8" in namespace "gc-3809" + Jan 14 04:46:35.946: INFO: Deleting pod "simpletest.rc-bsxbk" in namespace "gc-3809" + Jan 14 04:46:35.957: INFO: Deleting pod "simpletest.rc-bt8hc" in namespace "gc-3809" + Jan 14 04:46:35.968: INFO: Deleting pod "simpletest.rc-bwljc" in namespace "gc-3809" + Jan 14 04:46:35.979: INFO: Deleting pod "simpletest.rc-d2ncw" in namespace "gc-3809" + Jan 14 04:46:35.991: INFO: Deleting pod "simpletest.rc-d46ls" in namespace "gc-3809" + Jan 14 04:46:36.002: INFO: Deleting pod "simpletest.rc-f7x9m" in namespace "gc-3809" + Jan 14 04:46:36.017: INFO: Deleting pod "simpletest.rc-fbxtk" in namespace "gc-3809" + Jan 14 04:46:36.035: INFO: Deleting pod "simpletest.rc-fgpkz" in namespace "gc-3809" + Jan 14 04:46:36.047: INFO: Deleting pod "simpletest.rc-g564f" in namespace "gc-3809" + Jan 14 04:46:36.058: INFO: Deleting pod "simpletest.rc-g92vc" in namespace "gc-3809" + Jan 14 04:46:36.069: INFO: Deleting pod "simpletest.rc-gd4w8" in namespace "gc-3809" + Jan 14 04:46:36.081: INFO: Deleting pod "simpletest.rc-ghq78" in namespace "gc-3809" + Jan 14 04:46:36.093: INFO: Deleting pod "simpletest.rc-gmfb5" in namespace "gc-3809" + Jan 14 04:46:36.104: INFO: Deleting pod "simpletest.rc-gpvfj" in namespace "gc-3809" + Jan 14 04:46:36.116: INFO: Deleting pod "simpletest.rc-h7vpt" in namespace "gc-3809" + Jan 14 04:46:36.129: INFO: Deleting pod "simpletest.rc-hfw82" in namespace "gc-3809" + Jan 14 04:46:36.152: INFO: Deleting pod "simpletest.rc-hjthw" in namespace "gc-3809" + Jan 14 04:46:36.161: INFO: Deleting pod "simpletest.rc-hpzc6" in namespace "gc-3809" + Jan 14 04:46:36.174: INFO: Deleting pod "simpletest.rc-j8rsw" in namespace "gc-3809" + Jan 14 04:46:36.187: INFO: Deleting pod "simpletest.rc-j9qcp" in namespace "gc-3809" + Jan 14 04:46:36.203: INFO: Deleting pod "simpletest.rc-jk7zs" in namespace "gc-3809" + Jan 14 04:46:36.218: INFO: Deleting pod "simpletest.rc-jzkg6" in namespace "gc-3809" + Jan 14 04:46:36.232: INFO: Deleting pod "simpletest.rc-k7dsl" in namespace "gc-3809" + Jan 14 04:46:36.244: INFO: Deleting pod "simpletest.rc-kdrv9" in namespace "gc-3809" + Jan 14 04:46:36.255: INFO: Deleting pod "simpletest.rc-kmck4" in namespace "gc-3809" + Jan 14 04:46:36.266: INFO: Deleting pod "simpletest.rc-kmhtv" in namespace "gc-3809" + Jan 14 04:46:36.275: INFO: Deleting pod "simpletest.rc-knmsd" in namespace "gc-3809" + Jan 14 04:46:36.287: INFO: Deleting pod "simpletest.rc-l4b49" in namespace "gc-3809" + Jan 14 04:46:36.295: INFO: Deleting pod "simpletest.rc-l5bqz" in namespace "gc-3809" + Jan 14 04:46:36.304: INFO: Deleting pod "simpletest.rc-lh9cq" in namespace "gc-3809" + Jan 14 04:46:36.315: INFO: Deleting pod "simpletest.rc-lv284" in namespace "gc-3809" + Jan 14 04:46:36.327: INFO: Deleting pod "simpletest.rc-m8hqb" in namespace "gc-3809" + Jan 14 04:46:36.350: INFO: Deleting pod "simpletest.rc-nhbkj" in namespace "gc-3809" + Jan 14 04:46:36.398: INFO: Deleting pod "simpletest.rc-pl46h" in namespace "gc-3809" + Jan 14 04:46:36.450: INFO: Deleting pod "simpletest.rc-prndw" in namespace "gc-3809" + Jan 14 04:46:36.499: INFO: Deleting pod "simpletest.rc-q5ncz" in namespace "gc-3809" + Jan 14 04:46:36.548: INFO: Deleting pod "simpletest.rc-q7krt" 
in namespace "gc-3809" + Jan 14 04:46:36.600: INFO: Deleting pod "simpletest.rc-qdsm2" in namespace "gc-3809" + Jan 14 04:46:36.649: INFO: Deleting pod "simpletest.rc-qn6p8" in namespace "gc-3809" + Jan 14 04:46:36.698: INFO: Deleting pod "simpletest.rc-qtj8n" in namespace "gc-3809" + Jan 14 04:46:36.750: INFO: Deleting pod "simpletest.rc-r9l46" in namespace "gc-3809" + Jan 14 04:46:36.799: INFO: Deleting pod "simpletest.rc-r9p9w" in namespace "gc-3809" + Jan 14 04:46:36.849: INFO: Deleting pod "simpletest.rc-rkwns" in namespace "gc-3809" + Jan 14 04:46:36.899: INFO: Deleting pod "simpletest.rc-rr9b9" in namespace "gc-3809" + Jan 14 04:46:36.949: INFO: Deleting pod "simpletest.rc-rvs4d" in namespace "gc-3809" + Jan 14 04:46:37.000: INFO: Deleting pod "simpletest.rc-swgs2" in namespace "gc-3809" + Jan 14 04:46:37.048: INFO: Deleting pod "simpletest.rc-t52wm" in namespace "gc-3809" + Jan 14 04:46:37.098: INFO: Deleting pod "simpletest.rc-tcxg9" in namespace "gc-3809" + Jan 14 04:46:37.149: INFO: Deleting pod "simpletest.rc-tgw6n" in namespace "gc-3809" + Jan 14 04:46:37.201: INFO: Deleting pod "simpletest.rc-tr8f9" in namespace "gc-3809" + Jan 14 04:46:37.249: INFO: Deleting pod "simpletest.rc-ts9rb" in namespace "gc-3809" + Jan 14 04:46:37.299: INFO: Deleting pod "simpletest.rc-wq9mz" in namespace "gc-3809" + Jan 14 04:46:37.349: INFO: Deleting pod "simpletest.rc-wqzzh" in namespace "gc-3809" + Jan 14 04:46:37.399: INFO: Deleting pod "simpletest.rc-xh7kr" in namespace "gc-3809" + Jan 14 04:46:37.457: INFO: Deleting pod "simpletest.rc-xsrkh" in namespace "gc-3809" + Jan 14 04:46:37.500: INFO: Deleting pod "simpletest.rc-z4crs" in namespace "gc-3809" + Jan 14 04:46:37.547: INFO: Deleting pod "simpletest.rc-z8xb9" in namespace "gc-3809" + Jan 14 04:46:37.606: INFO: Deleting pod "simpletest.rc-zj8xw" in namespace "gc-3809" + Jan 14 04:46:37.650: INFO: Deleting pod "simpletest.rc-zls2c" in namespace "gc-3809" + Jan 14 04:46:37.702: INFO: Deleting pod "simpletest.rc-zx795" in namespace "gc-3809" + [AfterEach] [sig-api-machinery] Garbage collector + test/e2e/framework/node/init/init.go:32 + Jan 14 04:46:37.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] Garbage collector + tear down framework | framework.go:193 + STEP: Destroying namespace "gc-3809" for this suite. 
01/14/23 04:46:37.796 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-apps] CronJob + should not schedule jobs when suspended [Slow] [Conformance] + test/e2e/apps/cronjob.go:96 +[BeforeEach] [sig-apps] CronJob + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:46:37.844 +Jan 14 04:46:37.844: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename cronjob 01/14/23 04:46:37.845 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:46:37.859 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:46:37.861 +[BeforeEach] [sig-apps] CronJob + test/e2e/framework/metrics/init/init.go:31 +[It] should not schedule jobs when suspended [Slow] [Conformance] + test/e2e/apps/cronjob.go:96 +STEP: Creating a suspended cronjob 01/14/23 04:46:37.863 +STEP: Ensuring no jobs are scheduled 01/14/23 04:46:37.868 +STEP: Ensuring no job exists by listing jobs explicitly 01/14/23 04:51:37.875 +STEP: Removing cronjob 01/14/23 04:51:37.88 +[AfterEach] [sig-apps] CronJob + test/e2e/framework/node/init/init.go:32 +Jan 14 04:51:37.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] CronJob + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] CronJob + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] CronJob + tear down framework | framework.go:193 +STEP: Destroying namespace "cronjob-9508" for this suite. 01/14/23 04:51:37.891 +------------------------------ +• [SLOW TEST] [300.052 seconds] +[sig-apps] CronJob +test/e2e/apps/framework.go:23 + should not schedule jobs when suspended [Slow] [Conformance] + test/e2e/apps/cronjob.go:96 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] CronJob + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:46:37.844 + Jan 14 04:46:37.844: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename cronjob 01/14/23 04:46:37.845 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:46:37.859 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:46:37.861 + [BeforeEach] [sig-apps] CronJob + test/e2e/framework/metrics/init/init.go:31 + [It] should not schedule jobs when suspended [Slow] [Conformance] + test/e2e/apps/cronjob.go:96 + STEP: Creating a suspended cronjob 01/14/23 04:46:37.863 + STEP: Ensuring no jobs are scheduled 01/14/23 04:46:37.868 + STEP: Ensuring no job exists by listing jobs explicitly 01/14/23 04:51:37.875 + STEP: Removing cronjob 01/14/23 04:51:37.88 + [AfterEach] [sig-apps] CronJob + test/e2e/framework/node/init/init.go:32 + Jan 14 04:51:37.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] CronJob + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] CronJob + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] CronJob + tear down framework | framework.go:193 + STEP: Destroying namespace "cronjob-9508" for this suite. 
01/14/23 04:51:37.891 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-network] DNS + should support configurable pod DNS nameservers [Conformance] + test/e2e/network/dns.go:411 +[BeforeEach] [sig-network] DNS + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:51:37.896 +Jan 14 04:51:37.897: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename dns 01/14/23 04:51:37.897 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:51:37.91 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:51:37.912 +[BeforeEach] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:31 +[It] should support configurable pod DNS nameservers [Conformance] + test/e2e/network/dns.go:411 +STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 01/14/23 04:51:37.914 +Jan 14 04:51:37.928: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-8963 a976fa46-2980-4a3e-953f-9e6e4e604633 455498 0 2023-01-14 04:51:37 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2023-01-14 04:51:37 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-kzmmq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kzmmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:
nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jan 14 04:51:37.928: INFO: Waiting up to 5m0s for pod "test-dns-nameservers" in namespace "dns-8963" to be "running and ready" +Jan 14 04:51:37.931: INFO: Pod "test-dns-nameservers": Phase="Pending", Reason="", readiness=false. Elapsed: 2.75362ms +Jan 14 04:51:37.931: INFO: The phase of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) +Jan 14 04:51:39.936: INFO: Pod "test-dns-nameservers": Phase="Running", Reason="", readiness=true. Elapsed: 2.007685327s +Jan 14 04:51:39.936: INFO: The phase of Pod test-dns-nameservers is Running (Ready = true) +Jan 14 04:51:39.936: INFO: Pod "test-dns-nameservers" satisfied condition "running and ready" +STEP: Verifying customized DNS suffix list is configured on pod... 01/14/23 04:51:39.936 +Jan 14 04:51:39.936: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-8963 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jan 14 04:51:39.936: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +Jan 14 04:51:39.937: INFO: ExecWithOptions: Clientset creation +Jan 14 04:51:39.937: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/dns-8963/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-suffix&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) +STEP: Verifying customized DNS server is configured on pod... 
01/14/23 04:51:39.994 +Jan 14 04:51:39.994: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-8963 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jan 14 04:51:39.994: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +Jan 14 04:51:39.995: INFO: ExecWithOptions: Clientset creation +Jan 14 04:51:39.995: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/dns-8963/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-server-list&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) +Jan 14 04:51:40.052: INFO: Deleting pod test-dns-nameservers... +[AfterEach] [sig-network] DNS + test/e2e/framework/node/init/init.go:32 +Jan 14 04:51:40.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] DNS + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] DNS + tear down framework | framework.go:193 +STEP: Destroying namespace "dns-8963" for this suite. 01/14/23 04:51:40.07 +------------------------------ +• [2.180 seconds] +[sig-network] DNS +test/e2e/network/common/framework.go:23 + should support configurable pod DNS nameservers [Conformance] + test/e2e/network/dns.go:411 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] DNS + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:51:37.896 + Jan 14 04:51:37.897: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename dns 01/14/23 04:51:37.897 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:51:37.91 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:51:37.912 + [BeforeEach] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:31 + [It] should support configurable pod DNS nameservers [Conformance] + test/e2e/network/dns.go:411 + STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
01/14/23 04:51:37.914 + Jan 14 04:51:37.928: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-8963 a976fa46-2980-4a3e-953f-9e6e4e604633 455498 0 2023-01-14 04:51:37 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2023-01-14 04:51:37 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-kzmmq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kzmmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},Automoun
tServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jan 14 04:51:37.928: INFO: Waiting up to 5m0s for pod "test-dns-nameservers" in namespace "dns-8963" to be "running and ready" + Jan 14 04:51:37.931: INFO: Pod "test-dns-nameservers": Phase="Pending", Reason="", readiness=false. Elapsed: 2.75362ms + Jan 14 04:51:37.931: INFO: The phase of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) + Jan 14 04:51:39.936: INFO: Pod "test-dns-nameservers": Phase="Running", Reason="", readiness=true. Elapsed: 2.007685327s + Jan 14 04:51:39.936: INFO: The phase of Pod test-dns-nameservers is Running (Ready = true) + Jan 14 04:51:39.936: INFO: Pod "test-dns-nameservers" satisfied condition "running and ready" + STEP: Verifying customized DNS suffix list is configured on pod... 01/14/23 04:51:39.936 + Jan 14 04:51:39.936: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-8963 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jan 14 04:51:39.936: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + Jan 14 04:51:39.937: INFO: ExecWithOptions: Clientset creation + Jan 14 04:51:39.937: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/dns-8963/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-suffix&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) + STEP: Verifying customized DNS server is configured on pod... 01/14/23 04:51:39.994 + Jan 14 04:51:39.994: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-8963 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jan 14 04:51:39.994: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + Jan 14 04:51:39.995: INFO: ExecWithOptions: Clientset creation + Jan 14 04:51:39.995: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/dns-8963/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-server-list&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true) + Jan 14 04:51:40.052: INFO: Deleting pod test-dns-nameservers... 
+ [AfterEach] [sig-network] DNS + test/e2e/framework/node/init/init.go:32 + Jan 14 04:51:40.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] DNS + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] DNS + tear down framework | framework.go:193 + STEP: Destroying namespace "dns-8963" for this suite. 01/14/23 04:51:40.07 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:119 +[BeforeEach] [sig-storage] Projected secret + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:51:40.077 +Jan 14 04:51:40.077: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename projected 01/14/23 04:51:40.077 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:51:40.091 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:51:40.093 +[BeforeEach] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:119 +STEP: Creating secret with name projected-secret-test-95735fa9-06df-45c0-ae61-db50cb2303bb 01/14/23 04:51:40.095 +STEP: Creating a pod to test consume secrets 01/14/23 04:51:40.1 +Jan 14 04:51:40.109: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1ad4180d-139b-484b-b039-0dfd59fae56f" in namespace "projected-9557" to be "Succeeded or Failed" +Jan 14 04:51:40.112: INFO: Pod "pod-projected-secrets-1ad4180d-139b-484b-b039-0dfd59fae56f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.736963ms +Jan 14 04:51:42.117: INFO: Pod "pod-projected-secrets-1ad4180d-139b-484b-b039-0dfd59fae56f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007786019s +Jan 14 04:51:44.117: INFO: Pod "pod-projected-secrets-1ad4180d-139b-484b-b039-0dfd59fae56f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007870117s +STEP: Saw pod success 01/14/23 04:51:44.117 +Jan 14 04:51:44.117: INFO: Pod "pod-projected-secrets-1ad4180d-139b-484b-b039-0dfd59fae56f" satisfied condition "Succeeded or Failed" +Jan 14 04:51:44.120: INFO: Trying to get logs from node 10.0.1.106 pod pod-projected-secrets-1ad4180d-139b-484b-b039-0dfd59fae56f container secret-volume-test: +STEP: delete the pod 01/14/23 04:51:44.134 +Jan 14 04:51:44.149: INFO: Waiting for pod pod-projected-secrets-1ad4180d-139b-484b-b039-0dfd59fae56f to disappear +Jan 14 04:51:44.152: INFO: Pod pod-projected-secrets-1ad4180d-139b-484b-b039-0dfd59fae56f no longer exists +[AfterEach] [sig-storage] Projected secret + test/e2e/framework/node/init/init.go:32 +Jan 14 04:51:44.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected secret + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected secret + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-9557" for this suite. 
01/14/23 04:51:44.157 +------------------------------ +• [4.090 seconds] +[sig-storage] Projected secret +test/e2e/common/storage/framework.go:23 + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:119 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected secret + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:51:40.077 + Jan 14 04:51:40.077: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename projected 01/14/23 04:51:40.077 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:51:40.091 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:51:40.093 + [BeforeEach] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + test/e2e/common/storage/projected_secret.go:119 + STEP: Creating secret with name projected-secret-test-95735fa9-06df-45c0-ae61-db50cb2303bb 01/14/23 04:51:40.095 + STEP: Creating a pod to test consume secrets 01/14/23 04:51:40.1 + Jan 14 04:51:40.109: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1ad4180d-139b-484b-b039-0dfd59fae56f" in namespace "projected-9557" to be "Succeeded or Failed" + Jan 14 04:51:40.112: INFO: Pod "pod-projected-secrets-1ad4180d-139b-484b-b039-0dfd59fae56f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.736963ms + Jan 14 04:51:42.117: INFO: Pod "pod-projected-secrets-1ad4180d-139b-484b-b039-0dfd59fae56f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007786019s + Jan 14 04:51:44.117: INFO: Pod "pod-projected-secrets-1ad4180d-139b-484b-b039-0dfd59fae56f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007870117s + STEP: Saw pod success 01/14/23 04:51:44.117 + Jan 14 04:51:44.117: INFO: Pod "pod-projected-secrets-1ad4180d-139b-484b-b039-0dfd59fae56f" satisfied condition "Succeeded or Failed" + Jan 14 04:51:44.120: INFO: Trying to get logs from node 10.0.1.106 pod pod-projected-secrets-1ad4180d-139b-484b-b039-0dfd59fae56f container secret-volume-test: + STEP: delete the pod 01/14/23 04:51:44.134 + Jan 14 04:51:44.149: INFO: Waiting for pod pod-projected-secrets-1ad4180d-139b-484b-b039-0dfd59fae56f to disappear + Jan 14 04:51:44.152: INFO: Pod pod-projected-secrets-1ad4180d-139b-484b-b039-0dfd59fae56f no longer exists + [AfterEach] [sig-storage] Projected secret + test/e2e/framework/node/init/init.go:32 + Jan 14 04:51:44.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected secret + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected secret + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected secret + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-9557" for this suite. 
01/14/23 04:51:44.157 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-apps] Deployment + deployment should support proportional scaling [Conformance] + test/e2e/apps/deployment.go:160 +[BeforeEach] [sig-apps] Deployment + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:51:44.167 +Jan 14 04:51:44.167: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename deployment 01/14/23 04:51:44.168 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:51:44.188 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:51:44.191 +[BeforeEach] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 +[It] deployment should support proportional scaling [Conformance] + test/e2e/apps/deployment.go:160 +Jan 14 04:51:44.193: INFO: Creating deployment "webserver-deployment" +Jan 14 04:51:44.200: INFO: Waiting for observed generation 1 +Jan 14 04:51:46.222: INFO: Waiting for all required pods to come up +Jan 14 04:51:46.228: INFO: Pod name httpd: Found 10 pods out of 10 +STEP: ensuring each pod is running 01/14/23 04:51:46.228 +Jan 14 04:51:46.228: INFO: Waiting up to 5m0s for pod "webserver-deployment-7f5969cbc7-bx7s4" in namespace "deployment-2865" to be "running" +Jan 14 04:51:46.228: INFO: Waiting up to 5m0s for pod "webserver-deployment-7f5969cbc7-dcvm2" in namespace "deployment-2865" to be "running" +Jan 14 04:51:46.228: INFO: Waiting up to 5m0s for pod "webserver-deployment-7f5969cbc7-jh6pf" in namespace "deployment-2865" to be "running" +Jan 14 04:51:46.231: INFO: Pod "webserver-deployment-7f5969cbc7-dcvm2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.115283ms +Jan 14 04:51:46.232: INFO: Pod "webserver-deployment-7f5969cbc7-bx7s4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.91834ms +Jan 14 04:51:46.232: INFO: Pod "webserver-deployment-7f5969cbc7-jh6pf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.873515ms +Jan 14 04:51:48.236: INFO: Pod "webserver-deployment-7f5969cbc7-jh6pf": Phase="Running", Reason="", readiness=true. Elapsed: 2.007971046s +Jan 14 04:51:48.236: INFO: Pod "webserver-deployment-7f5969cbc7-jh6pf" satisfied condition "running" +Jan 14 04:51:48.236: INFO: Pod "webserver-deployment-7f5969cbc7-bx7s4": Phase="Running", Reason="", readiness=true. Elapsed: 2.008565622s +Jan 14 04:51:48.236: INFO: Pod "webserver-deployment-7f5969cbc7-bx7s4" satisfied condition "running" +Jan 14 04:51:48.236: INFO: Pod "webserver-deployment-7f5969cbc7-dcvm2": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008587881s +Jan 14 04:51:48.236: INFO: Pod "webserver-deployment-7f5969cbc7-dcvm2" satisfied condition "running" +Jan 14 04:51:48.237: INFO: Waiting for deployment "webserver-deployment" to complete +Jan 14 04:51:48.244: INFO: Updating deployment "webserver-deployment" with a non-existent image +Jan 14 04:51:48.254: INFO: Updating deployment webserver-deployment +Jan 14 04:51:48.254: INFO: Waiting for observed generation 2 +Jan 14 04:51:50.261: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 +Jan 14 04:51:50.264: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 +Jan 14 04:51:50.267: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas +Jan 14 04:51:50.275: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 +Jan 14 04:51:50.275: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 +Jan 14 04:51:50.277: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas +Jan 14 04:51:50.282: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas +Jan 14 04:51:50.282: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 +Jan 14 04:51:50.292: INFO: Updating deployment webserver-deployment +Jan 14 04:51:50.292: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas +Jan 14 04:51:50.299: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 +Jan 14 04:51:50.302: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 +[AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 +Jan 14 04:51:50.313: INFO: Deployment "webserver-deployment": +&Deployment{ObjectMeta:{webserver-deployment deployment-2865 1cb73e88-de82-493c-8850-ac8e2075caf8 455797 3 2023-01-14 04:51:44 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{kube-controller-manager Update apps/v1 2023-01-14 04:51:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status} {e2e.test Update apps/v1 2023-01-14 04:51:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005753518 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-01-14 04:51:46 +0000 UTC,LastTransitionTime:2023-01-14 04:51:46 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-d9f79cb5" is progressing.,LastUpdateTime:2023-01-14 04:51:48 +0000 UTC,LastTransitionTime:2023-01-14 04:51:44 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} + +Jan 14 04:51:50.328: INFO: New ReplicaSet "webserver-deployment-d9f79cb5" of Deployment "webserver-deployment": +&ReplicaSet{ObjectMeta:{webserver-deployment-d9f79cb5 deployment-2865 4e86ac37-61a5-4d10-898f-665dce57befc 455801 3 2023-01-14 04:51:48 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 1cb73e88-de82-493c-8850-ac8e2075caf8 0xc005753b37 0xc005753b38}] [] [{kube-controller-manager Update apps/v1 2023-01-14 04:51:48 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status} {kube-controller-manager Update apps/v1 2023-01-14 04:51:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1cb73e88-de82-493c-8850-ac8e2075caf8\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: d9f79cb5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[] [] [] []} {[] [] [{httpd 
webserver:404 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005753bd8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Jan 14 04:51:50.328: INFO: All old ReplicaSets of Deployment "webserver-deployment": +Jan 14 04:51:50.329: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-7f5969cbc7 deployment-2865 7f23b900-63d1-44fa-8d46-df5ce1f3e0bc 455798 3 2023-01-14 04:51:44 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 1cb73e88-de82-493c-8850-ac8e2075caf8 0xc005753a47 0xc005753a48}] [] [{kube-controller-manager Update apps/v1 2023-01-14 04:51:48 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status} {kube-controller-manager Update apps/v1 2023-01-14 04:51:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1cb73e88-de82-493c-8850-ac8e2075caf8\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 7f5969cbc7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005753ad8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} +Jan 14 04:51:50.344: INFO: Pod "webserver-deployment-7f5969cbc7-4mvcd" is available: +&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-4mvcd webserver-deployment-7f5969cbc7- deployment-2865 9be3aa23-f7b6-43f4-be63-6289a212d79b 455679 0 2023-01-14 04:51:44 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.1.79" + ], + "mac": "1e:8f:a9:c0:67:24", + "default": true, + "dns": {} +}]] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 7f23b900-63d1-44fa-8d46-df5ce1f3e0bc 0xc008207e57 0xc008207e58}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f23b900-63d1-44fa-8d46-df5ce1f3e0bc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {multus Update v1 2023-01-14 04:51:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}} status} {kubelet Update v1 2023-01-14 04:51:45 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.52.1.79\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-st98b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-st98b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.99,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},Reso
urceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.99,PodIP:10.52.1.79,StartTime:2023-01-14 04:51:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 04:51:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:containerd://5c8988b36c3cbf78a77ea543a95cf030a5564ddef485fdc6a8882564cdf2aad3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.52.1.79,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jan 14 04:51:50.344: INFO: Pod "webserver-deployment-7f5969cbc7-4qpfq" is not available: +&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-4qpfq webserver-deployment-7f5969cbc7- deployment-2865 db478ac4-cebd-4b81-bcab-42e7e0ad4aed 455819 0 2023-01-14 04:51:50 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 7f23b900-63d1-44fa-8d46-df5ce1f3e0bc 0xc000ea0150 0xc000ea0151}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f23b900-63d1-44fa-8d46-df5ce1f3e0bc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4dcqt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4dcqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]Po
dResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jan 14 04:51:50.344: INFO: Pod "webserver-deployment-7f5969cbc7-8sgvj" is not available: +&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-8sgvj webserver-deployment-7f5969cbc7- deployment-2865 c2820e2b-bb39-45b9-a06a-19dd35886911 455811 0 2023-01-14 04:51:50 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 7f23b900-63d1-44fa-8d46-df5ce1f3e0bc 0xc000ea10f0 0xc000ea10f1}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f23b900-63d1-44fa-8d46-df5ce1f3e0bc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-q45rn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q45rn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[
]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.212,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jan 14 04:51:50.345: INFO: Pod "webserver-deployment-7f5969cbc7-bhh5f" is not available: +&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-bhh5f webserver-deployment-7f5969cbc7- deployment-2865 324fbc58-2b90-41fe-b6fb-b94af8d3b1ec 455814 0 2023-01-14 04:51:50 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 7f23b900-63d1-44fa-8d46-df5ce1f3e0bc 0xc000ea1520 0xc000ea1521}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f23b900-63d1-44fa-8d46-df5ce1f3e0bc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4z6mf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4z6mf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]Po
dResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jan 14 04:51:50.345: INFO: Pod "webserver-deployment-7f5969cbc7-bx7s4" is available: +&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-bx7s4 webserver-deployment-7f5969cbc7- deployment-2865 7c40cb7a-8951-4d81-8941-86c4f1cd5762 455703 0 2023-01-14 04:51:44 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.0.234" + ], + "mac": "d2:35:10:8f:a1:b7", + "default": true, + "dns": {} +}]] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 7f23b900-63d1-44fa-8d46-df5ce1f3e0bc 0xc000ea19b0 0xc000ea19b1}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f23b900-63d1-44fa-8d46-df5ce1f3e0bc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {multus Update v1 2023-01-14 04:51:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}} status} {kubelet Update v1 2023-01-14 04:51:46 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.52.0.234\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-ljrxq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ljrxq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.212,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},Res
ourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.212,PodIP:10.52.0.234,StartTime:2023-01-14 04:51:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 04:51:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:containerd://a5ac9343092ac52d0b69737a3a94b749dc7d3481f4cf07070e91b229ac3c58e2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.52.0.234,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jan 14 04:51:50.345: INFO: Pod "webserver-deployment-7f5969cbc7-cckcj" is not available: +&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-cckcj webserver-deployment-7f5969cbc7- deployment-2865 22a06e94-47ac-4d86-9bd5-630ea82e074f 455810 0 2023-01-14 04:51:50 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 7f23b900-63d1-44fa-8d46-df5ce1f3e0bc 0xc004c82040 0xc004c82041}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f23b900-63d1-44fa-8d46-df5ce1f3e0bc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xhp2f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xhp2f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.106,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceC
laims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jan 14 04:51:50.345: INFO: Pod "webserver-deployment-7f5969cbc7-gjwg9" is available: +&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-gjwg9 webserver-deployment-7f5969cbc7- deployment-2865 5e7019b0-36a8-44eb-be6b-99c82958aa5d 455670 0 2023-01-14 04:51:44 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.1.54" + ], + "mac": "92:05:ec:22:b1:a9", + "default": true, + "dns": {} +}]] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 7f23b900-63d1-44fa-8d46-df5ce1f3e0bc 0xc004c821a0 0xc004c821a1}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f23b900-63d1-44fa-8d46-df5ce1f3e0bc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {multus Update v1 2023-01-14 04:51:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}} status} {kubelet Update v1 2023-01-14 04:51:45 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.52.1.54\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-s2c9t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s2c9t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.106,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},Res
ourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.106,PodIP:10.52.1.54,StartTime:2023-01-14 04:51:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 04:51:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:containerd://918b789023fc602e17839867629c3c0e9c89027d024a9fd5eac5d1fae2062dbe,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.52.1.54,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jan 14 04:51:50.345: INFO: Pod "webserver-deployment-7f5969cbc7-hhbrt" is not available: +&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-hhbrt webserver-deployment-7f5969cbc7- deployment-2865 c6c6a6b9-345d-4b8e-9e9b-cc0d0f2bb4f6 455815 0 2023-01-14 04:51:50 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 7f23b900-63d1-44fa-8d46-df5ce1f3e0bc 0xc004c823a0 0xc004c823a1}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f23b900-63d1-44fa-8d46-df5ce1f3e0bc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rnw87,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rnw87,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]Po
dResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jan 14 04:51:50.345: INFO: Pod "webserver-deployment-7f5969cbc7-hktw7" is available: +&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-hktw7 webserver-deployment-7f5969cbc7- deployment-2865 320406ba-c168-4e87-ac5f-e051ad5b888e 455686 0 2023-01-14 04:51:44 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.0.233" + ], + "mac": "5a:da:c0:85:7e:11", + "default": true, + "dns": {} +}]] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 7f23b900-63d1-44fa-8d46-df5ce1f3e0bc 0xc004c824e0 0xc004c824e1}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f23b900-63d1-44fa-8d46-df5ce1f3e0bc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {multus Update v1 2023-01-14 04:51:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}} status} {kubelet Update v1 2023-01-14 04:51:45 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.52.0.233\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-cgdfc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cgdfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.212,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},Res
ourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.212,PodIP:10.52.0.233,StartTime:2023-01-14 04:51:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 04:51:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:containerd://6c303e4eac6237cad1b250138d0e55e8cda2ffc32972aa12da7672f47b871107,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.52.0.233,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jan 14 04:51:50.346: INFO: Pod "webserver-deployment-7f5969cbc7-jh6pf" is available: +&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-jh6pf webserver-deployment-7f5969cbc7- deployment-2865 6eb6e9f3-9b0d-41a5-8dc1-7cc2bca206a4 455696 0 2023-01-14 04:51:44 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.0.232" + ], + "mac": "86:4f:a1:9f:9b:74", + "default": true, + "dns": {} +}]] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 7f23b900-63d1-44fa-8d46-df5ce1f3e0bc 0xc004c826f0 0xc004c826f1}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f23b900-63d1-44fa-8d46-df5ce1f3e0bc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {multus Update v1 2023-01-14 04:51:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}} status} {kubelet Update v1 2023-01-14 04:51:46 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.52.0.232\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9xjnd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9xjnd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.212,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},Res
ourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.212,PodIP:10.52.0.232,StartTime:2023-01-14 04:51:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 04:51:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:containerd://67802f648a498ac1322131700f833a09bfc7bc04bb684e78bfd36dcd079589e9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.52.0.232,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jan 14 04:51:50.346: INFO: Pod "webserver-deployment-7f5969cbc7-l5cf8" is available: +&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-l5cf8 webserver-deployment-7f5969cbc7- deployment-2865 54d6d46c-6ca4-4af4-9498-d788ebef64e0 455687 0 2023-01-14 04:51:44 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.1.55" + ], + "mac": "76:75:04:23:7a:37", + "default": true, + "dns": {} +}]] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 7f23b900-63d1-44fa-8d46-df5ce1f3e0bc 0xc004c828f0 0xc004c828f1}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f23b900-63d1-44fa-8d46-df5ce1f3e0bc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {multus Update v1 2023-01-14 04:51:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}} status} {kubelet Update v1 2023-01-14 04:51:45 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.52.1.55\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-b8mgb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b8mgb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.106,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},Res
ourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.106,PodIP:10.52.1.55,StartTime:2023-01-14 04:51:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 04:51:45 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:containerd://aa7306b2eff4a5d07205bcf97cc0eea2bd0dbe05811a7a411511b4dc08cf2fdd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.52.1.55,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jan 14 04:51:50.346: INFO: Pod "webserver-deployment-7f5969cbc7-lz2nz" is available: +&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-lz2nz webserver-deployment-7f5969cbc7- deployment-2865 e38138cd-1e3a-4f28-9e9b-abc1f0a56fd9 455682 0 2023-01-14 04:51:44 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.1.78" + ], + "mac": "aa:0e:83:a7:c5:97", + "default": true, + "dns": {} +}]] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 7f23b900-63d1-44fa-8d46-df5ce1f3e0bc 0xc004c82af0 0xc004c82af1}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f23b900-63d1-44fa-8d46-df5ce1f3e0bc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {multus Update v1 2023-01-14 04:51:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}} status} {kubelet Update v1 2023-01-14 04:51:45 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.52.1.78\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6mzq5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6mzq5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.99,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},Reso
urceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.99,PodIP:10.52.1.78,StartTime:2023-01-14 04:51:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 04:51:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:containerd://2f0e8940f9d775557ff63903e915eb681d666480a0c57f58f9ae1f3a0ad8bd84,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.52.1.78,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jan 14 04:51:50.346: INFO: Pod "webserver-deployment-7f5969cbc7-md4zf" is not available: +&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-md4zf webserver-deployment-7f5969cbc7- deployment-2865 29b3607c-d287-4c22-91ce-f6b389df215c 455813 0 2023-01-14 04:51:50 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 7f23b900-63d1-44fa-8d46-df5ce1f3e0bc 0xc004c82cf0 0xc004c82cf1}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f23b900-63d1-44fa-8d46-df5ce1f3e0bc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8nhpk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8nhpk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]Po
dResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jan 14 04:51:50.346: INFO: Pod "webserver-deployment-7f5969cbc7-n8p7b" is not available: +&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-n8p7b webserver-deployment-7f5969cbc7- deployment-2865 53a45cb8-6afe-45ed-ab9d-c461aa4de760 455803 0 2023-01-14 04:51:50 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 7f23b900-63d1-44fa-8d46-df5ce1f3e0bc 0xc004c82e30 0xc004c82e31}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f23b900-63d1-44fa-8d46-df5ce1f3e0bc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mmrnx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mmrnx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[
]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.106,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jan 14 04:51:50.346: INFO: Pod "webserver-deployment-7f5969cbc7-rbkv5" is available: +&Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-rbkv5 webserver-deployment-7f5969cbc7- deployment-2865 45ec3078-89a0-4670-a936-894e3c48096c 455681 0 2023-01-14 04:51:44 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.1.77" + ], + "mac": "4e:be:ff:9b:7f:cd", + "default": true, + "dns": {} +}]] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 7f23b900-63d1-44fa-8d46-df5ce1f3e0bc 0xc004c82f90 0xc004c82f91}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f23b900-63d1-44fa-8d46-df5ce1f3e0bc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {multus Update v1 2023-01-14 04:51:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}} status} {kubelet Update v1 2023-01-14 04:51:45 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.52.1.77\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-lq7j6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lq7j6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.99,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.
kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.99,PodIP:10.52.1.77,StartTime:2023-01-14 04:51:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 04:51:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:containerd://997e84fd7d904117ee1629cb03112d1ecf28c6f9e5830a270b7f6a9e0baf4769,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.52.1.77,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jan 14 04:51:50.347: INFO: Pod "webserver-deployment-d9f79cb5-2bdvg" is not available: +&Pod{ObjectMeta:{webserver-deployment-d9f79cb5-2bdvg webserver-deployment-d9f79cb5- deployment-2865 826215de-eeaf-468a-a2f8-7d3f488ff14d 455795 0 2023-01-14 04:51:48 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.1.59" + ], + "mac": "6e:75:cd:fb:fe:0b", + "default": true, + "dns": {} +}]] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 4e86ac37-61a5-4d10-898f-665dce57befc 0xc004c8317f 0xc004c83190}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e86ac37-61a5-4d10-898f-665dce57befc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {multus Update v1 2023-01-14 04:51:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}} status} {kubelet Update v1 2023-01-14 04:51:49 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-znl5j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-znl5j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.106,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,T
olerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.106,PodIP:,StartTime:2023-01-14 04:51:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jan 14 04:51:50.347: INFO: Pod "webserver-deployment-d9f79cb5-ddvpp" is not available: +&Pod{ObjectMeta:{webserver-deployment-d9f79cb5-ddvpp webserver-deployment-d9f79cb5- deployment-2865 fc3dad32-98f3-47c7-bffc-be11f183ce35 455777 0 2023-01-14 04:51:48 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.1.81" + ], + "mac": "e2:36:1f:1d:03:2a", + "default": true, + "dns": {} +}]] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 4e86ac37-61a5-4d10-898f-665dce57befc 0xc004c8337f 0xc004c83390}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e86ac37-61a5-4d10-898f-665dce57befc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 04:51:48 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:hostIP":{},"f:startTime":{}}} status} {multus Update v1 2023-01-14 04:51:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}},"f:status":{"f:containerStatuses":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dhps9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dhps9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.99,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.i
o/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.99,PodIP:,StartTime:2023-01-14 04:51:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:nil,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jan 14 04:51:50.347: INFO: Pod "webserver-deployment-d9f79cb5-dxh6s" is not available: +&Pod{ObjectMeta:{webserver-deployment-d9f79cb5-dxh6s webserver-deployment-d9f79cb5- deployment-2865 74061a4e-e852-47c7-9495-457b073a1e41 455818 0 2023-01-14 04:51:50 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 4e86ac37-61a5-4d10-898f-665dce57befc 0xc004c8357f 0xc004c83590}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e86ac37-61a5-4d10-898f-665dce57befc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qs8l9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qs8l9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStat
us{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jan 14 04:51:50.347: INFO: Pod "webserver-deployment-d9f79cb5-fqn2t" is not available: +&Pod{ObjectMeta:{webserver-deployment-d9f79cb5-fqn2t webserver-deployment-d9f79cb5- deployment-2865 62571e2f-fa54-40b5-83c0-419bfb239295 455812 0 2023-01-14 04:51:50 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 4e86ac37-61a5-4d10-898f-665dce57befc 0xc004c836cf 0xc004c836e0}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e86ac37-61a5-4d10-898f-665dce57befc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-sgsl9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sgsl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePe
riodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.212,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jan 14 04:51:50.347: INFO: Pod "webserver-deployment-d9f79cb5-h2bjh" is not available: +&Pod{ObjectMeta:{webserver-deployment-d9f79cb5-h2bjh webserver-deployment-d9f79cb5- deployment-2865 a861ec11-6313-42f0-974f-5e3ece0175e0 455816 0 2023-01-14 04:51:50 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 4e86ac37-61a5-4d10-898f-665dce57befc 0xc004c8384f 0xc004c83860}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e86ac37-61a5-4d10-898f-665dce57befc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-p5p5v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p5p5v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStat
us{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jan 14 04:51:50.348: INFO: Pod "webserver-deployment-d9f79cb5-p7fxb" is not available: +&Pod{ObjectMeta:{webserver-deployment-d9f79cb5-p7fxb webserver-deployment-d9f79cb5- deployment-2865 887caf4d-0190-477c-b5f7-9ec6eeb38729 455773 0 2023-01-14 04:51:48 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.1.80" + ], + "mac": "f6:28:52:cc:c4:8e", + "default": true, + "dns": {} +}]] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 4e86ac37-61a5-4d10-898f-665dce57befc 0xc004c8399f 0xc004c839b0}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e86ac37-61a5-4d10-898f-665dce57befc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 04:51:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:hostIP":{},"f:startTime":{}}} status} {multus Update v1 2023-01-14 04:51:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}},"f:status":{"f:containerStatuses":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2j9t5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2j9t5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.99,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},}
,Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.99,PodIP:,StartTime:2023-01-14 04:51:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:nil,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jan 14 04:51:50.348: INFO: Pod "webserver-deployment-d9f79cb5-vzd2g" is not available: +&Pod{ObjectMeta:{webserver-deployment-d9f79cb5-vzd2g webserver-deployment-d9f79cb5- deployment-2865 7c4b64f8-bc64-434b-899e-1f2287ec9843 455772 0 2023-01-14 04:51:48 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.0.235" + ], + "mac": "86:63:ad:e0:ad:32", + "default": true, + "dns": {} +}]] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 4e86ac37-61a5-4d10-898f-665dce57befc 0xc004c83b9f 0xc004c83bb0}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e86ac37-61a5-4d10-898f-665dce57befc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 04:51:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:hostIP":{},"f:startTime":{}}} status} {multus Update v1 2023-01-14 04:51:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}},"f:status":{"f:containerStatuses":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-tq956,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tq956,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.212,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},
},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.212,PodIP:,StartTime:2023-01-14 04:51:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:nil,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +Jan 14 04:51:50.348: INFO: Pod "webserver-deployment-d9f79cb5-wl628" is not available: +&Pod{ObjectMeta:{webserver-deployment-d9f79cb5-wl628 webserver-deployment-d9f79cb5- deployment-2865 eb7680cd-b7b6-40a6-b443-13ae8bf69ae0 455796 0 2023-01-14 04:51:48 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.1.58" + ], + "mac": "a6:cd:57:1c:8b:cf", + "default": true, + "dns": {} +}]] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 4e86ac37-61a5-4d10-898f-665dce57befc 0xc004c83daf 0xc004c83dc0}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e86ac37-61a5-4d10-898f-665dce57befc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {multus Update v1 2023-01-14 04:51:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}} status} {kubelet Update v1 2023-01-14 04:51:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vjsbg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vjsbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.106,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},
},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.106,PodIP:,StartTime:2023-01-14 04:51:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + test/e2e/framework/node/init/init.go:32 +Jan 14 04:51:50.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] Deployment + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] Deployment + tear down framework | framework.go:193 +STEP: Destroying namespace "deployment-2865" for this suite. 
01/14/23 04:51:50.37 +------------------------------ +• [SLOW TEST] [6.215 seconds] +[sig-apps] Deployment +test/e2e/apps/framework.go:23 + deployment should support proportional scaling [Conformance] + test/e2e/apps/deployment.go:160 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Deployment + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:51:44.167 + Jan 14 04:51:44.167: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename deployment 01/14/23 04:51:44.168 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:51:44.188 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:51:44.191 + [BeforeEach] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 + [It] deployment should support proportional scaling [Conformance] + test/e2e/apps/deployment.go:160 + Jan 14 04:51:44.193: INFO: Creating deployment "webserver-deployment" + Jan 14 04:51:44.200: INFO: Waiting for observed generation 1 + Jan 14 04:51:46.222: INFO: Waiting for all required pods to come up + Jan 14 04:51:46.228: INFO: Pod name httpd: Found 10 pods out of 10 + STEP: ensuring each pod is running 01/14/23 04:51:46.228 + Jan 14 04:51:46.228: INFO: Waiting up to 5m0s for pod "webserver-deployment-7f5969cbc7-bx7s4" in namespace "deployment-2865" to be "running" + Jan 14 04:51:46.228: INFO: Waiting up to 5m0s for pod "webserver-deployment-7f5969cbc7-dcvm2" in namespace "deployment-2865" to be "running" + Jan 14 04:51:46.228: INFO: Waiting up to 5m0s for pod "webserver-deployment-7f5969cbc7-jh6pf" in namespace "deployment-2865" to be "running" + Jan 14 04:51:46.231: INFO: Pod "webserver-deployment-7f5969cbc7-dcvm2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.115283ms + Jan 14 04:51:46.232: INFO: Pod "webserver-deployment-7f5969cbc7-bx7s4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.91834ms + Jan 14 04:51:46.232: INFO: Pod "webserver-deployment-7f5969cbc7-jh6pf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.873515ms + Jan 14 04:51:48.236: INFO: Pod "webserver-deployment-7f5969cbc7-jh6pf": Phase="Running", Reason="", readiness=true. Elapsed: 2.007971046s + Jan 14 04:51:48.236: INFO: Pod "webserver-deployment-7f5969cbc7-jh6pf" satisfied condition "running" + Jan 14 04:51:48.236: INFO: Pod "webserver-deployment-7f5969cbc7-bx7s4": Phase="Running", Reason="", readiness=true. Elapsed: 2.008565622s + Jan 14 04:51:48.236: INFO: Pod "webserver-deployment-7f5969cbc7-bx7s4" satisfied condition "running" + Jan 14 04:51:48.236: INFO: Pod "webserver-deployment-7f5969cbc7-dcvm2": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008587881s + Jan 14 04:51:48.236: INFO: Pod "webserver-deployment-7f5969cbc7-dcvm2" satisfied condition "running" + Jan 14 04:51:48.237: INFO: Waiting for deployment "webserver-deployment" to complete + Jan 14 04:51:48.244: INFO: Updating deployment "webserver-deployment" with a non-existent image + Jan 14 04:51:48.254: INFO: Updating deployment webserver-deployment + Jan 14 04:51:48.254: INFO: Waiting for observed generation 2 + Jan 14 04:51:50.261: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 + Jan 14 04:51:50.264: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 + Jan 14 04:51:50.267: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas + Jan 14 04:51:50.275: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 + Jan 14 04:51:50.275: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 + Jan 14 04:51:50.277: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas + Jan 14 04:51:50.282: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas + Jan 14 04:51:50.282: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 + Jan 14 04:51:50.292: INFO: Updating deployment webserver-deployment + Jan 14 04:51:50.292: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas + Jan 14 04:51:50.299: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 + Jan 14 04:51:50.302: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 + [AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 + Jan 14 04:51:50.313: INFO: Deployment "webserver-deployment": + &Deployment{ObjectMeta:{webserver-deployment deployment-2865 1cb73e88-de82-493c-8850-ac8e2075caf8 455797 3 2023-01-14 04:51:44 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{kube-controller-manager Update apps/v1 2023-01-14 04:51:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status} {e2e.test Update apps/v1 2023-01-14 04:51:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005753518 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-01-14 04:51:46 +0000 UTC,LastTransitionTime:2023-01-14 04:51:46 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-d9f79cb5" is progressing.,LastUpdateTime:2023-01-14 04:51:48 +0000 UTC,LastTransitionTime:2023-01-14 04:51:44 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} + + Jan 14 04:51:50.328: INFO: New ReplicaSet "webserver-deployment-d9f79cb5" of Deployment "webserver-deployment": + &ReplicaSet{ObjectMeta:{webserver-deployment-d9f79cb5 deployment-2865 4e86ac37-61a5-4d10-898f-665dce57befc 455801 3 2023-01-14 04:51:48 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 1cb73e88-de82-493c-8850-ac8e2075caf8 0xc005753b37 0xc005753b38}] [] [{kube-controller-manager Update apps/v1 2023-01-14 04:51:48 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status} {kube-controller-manager Update apps/v1 2023-01-14 04:51:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1cb73e88-de82-493c-8850-ac8e2075caf8\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: d9f79cb5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[] [] [] []} {[] [] [{httpd 
webserver:404 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005753bd8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + Jan 14 04:51:50.328: INFO: All old ReplicaSets of Deployment "webserver-deployment": + Jan 14 04:51:50.329: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-7f5969cbc7 deployment-2865 7f23b900-63d1-44fa-8d46-df5ce1f3e0bc 455798 3 2023-01-14 04:51:44 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 1cb73e88-de82-493c-8850-ac8e2075caf8 0xc005753a47 0xc005753a48}] [] [{kube-controller-manager Update apps/v1 2023-01-14 04:51:48 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status} {kube-controller-manager Update apps/v1 2023-01-14 04:51:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1cb73e88-de82-493c-8850-ac8e2075caf8\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 7f5969cbc7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005753ad8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} + Jan 14 04:51:50.344: INFO: Pod "webserver-deployment-7f5969cbc7-4mvcd" is available: + &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-4mvcd webserver-deployment-7f5969cbc7- deployment-2865 9be3aa23-f7b6-43f4-be63-6289a212d79b 455679 0 2023-01-14 04:51:44 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.1.79" + ], + "mac": "1e:8f:a9:c0:67:24", + "default": true, + "dns": {} + }]] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 7f23b900-63d1-44fa-8d46-df5ce1f3e0bc 0xc008207e57 0xc008207e58}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f23b900-63d1-44fa-8d46-df5ce1f3e0bc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {multus Update v1 2023-01-14 04:51:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}} status} {kubelet Update v1 2023-01-14 04:51:45 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.52.1.79\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-st98b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-st98b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.99,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},Reso
urceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.99,PodIP:10.52.1.79,StartTime:2023-01-14 04:51:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 04:51:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:containerd://5c8988b36c3cbf78a77ea543a95cf030a5564ddef485fdc6a8882564cdf2aad3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.52.1.79,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jan 14 04:51:50.344: INFO: Pod "webserver-deployment-7f5969cbc7-4qpfq" is not available: + &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-4qpfq webserver-deployment-7f5969cbc7- deployment-2865 db478ac4-cebd-4b81-bcab-42e7e0ad4aed 455819 0 2023-01-14 04:51:50 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 7f23b900-63d1-44fa-8d46-df5ce1f3e0bc 0xc000ea0150 0xc000ea0151}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f23b900-63d1-44fa-8d46-df5ce1f3e0bc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4dcqt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4dcqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]Po
dResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jan 14 04:51:50.344: INFO: Pod "webserver-deployment-7f5969cbc7-8sgvj" is not available: + &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-8sgvj webserver-deployment-7f5969cbc7- deployment-2865 c2820e2b-bb39-45b9-a06a-19dd35886911 455811 0 2023-01-14 04:51:50 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 7f23b900-63d1-44fa-8d46-df5ce1f3e0bc 0xc000ea10f0 0xc000ea10f1}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f23b900-63d1-44fa-8d46-df5ce1f3e0bc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-q45rn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q45rn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices
:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.212,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jan 14 04:51:50.345: INFO: Pod "webserver-deployment-7f5969cbc7-bhh5f" is not available: + &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-bhh5f webserver-deployment-7f5969cbc7- deployment-2865 324fbc58-2b90-41fe-b6fb-b94af8d3b1ec 455814 0 2023-01-14 04:51:50 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 7f23b900-63d1-44fa-8d46-df5ce1f3e0bc 0xc000ea1520 0xc000ea1521}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f23b900-63d1-44fa-8d46-df5ce1f3e0bc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4z6mf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4z6mf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]Po
dResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jan 14 04:51:50.345: INFO: Pod "webserver-deployment-7f5969cbc7-bx7s4" is available: + &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-bx7s4 webserver-deployment-7f5969cbc7- deployment-2865 7c40cb7a-8951-4d81-8941-86c4f1cd5762 455703 0 2023-01-14 04:51:44 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.0.234" + ], + "mac": "d2:35:10:8f:a1:b7", + "default": true, + "dns": {} + }]] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 7f23b900-63d1-44fa-8d46-df5ce1f3e0bc 0xc000ea19b0 0xc000ea19b1}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f23b900-63d1-44fa-8d46-df5ce1f3e0bc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {multus Update v1 2023-01-14 04:51:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}} status} {kubelet Update v1 2023-01-14 04:51:46 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.52.0.234\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-ljrxq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ljrxq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.212,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},Res
ourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.212,PodIP:10.52.0.234,StartTime:2023-01-14 04:51:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 04:51:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:containerd://a5ac9343092ac52d0b69737a3a94b749dc7d3481f4cf07070e91b229ac3c58e2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.52.0.234,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jan 14 04:51:50.345: INFO: Pod "webserver-deployment-7f5969cbc7-cckcj" is not available: + &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-cckcj webserver-deployment-7f5969cbc7- deployment-2865 22a06e94-47ac-4d86-9bd5-630ea82e074f 455810 0 2023-01-14 04:51:50 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 7f23b900-63d1-44fa-8d46-df5ce1f3e0bc 0xc004c82040 0xc004c82041}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f23b900-63d1-44fa-8d46-df5ce1f3e0bc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xhp2f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xhp2f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.106,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceC
laims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jan 14 04:51:50.345: INFO: Pod "webserver-deployment-7f5969cbc7-gjwg9" is available: + &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-gjwg9 webserver-deployment-7f5969cbc7- deployment-2865 5e7019b0-36a8-44eb-be6b-99c82958aa5d 455670 0 2023-01-14 04:51:44 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.1.54" + ], + "mac": "92:05:ec:22:b1:a9", + "default": true, + "dns": {} + }]] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 7f23b900-63d1-44fa-8d46-df5ce1f3e0bc 0xc004c821a0 0xc004c821a1}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f23b900-63d1-44fa-8d46-df5ce1f3e0bc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {multus Update v1 2023-01-14 04:51:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}} status} {kubelet Update v1 2023-01-14 04:51:45 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.52.1.54\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-s2c9t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s2c9t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.106,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},Res
ourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.106,PodIP:10.52.1.54,StartTime:2023-01-14 04:51:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 04:51:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:containerd://918b789023fc602e17839867629c3c0e9c89027d024a9fd5eac5d1fae2062dbe,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.52.1.54,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jan 14 04:51:50.345: INFO: Pod "webserver-deployment-7f5969cbc7-hhbrt" is not available: + &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-hhbrt webserver-deployment-7f5969cbc7- deployment-2865 c6c6a6b9-345d-4b8e-9e9b-cc0d0f2bb4f6 455815 0 2023-01-14 04:51:50 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 7f23b900-63d1-44fa-8d46-df5ce1f3e0bc 0xc004c823a0 0xc004c823a1}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f23b900-63d1-44fa-8d46-df5ce1f3e0bc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rnw87,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rnw87,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]Po
dResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jan 14 04:51:50.345: INFO: Pod "webserver-deployment-7f5969cbc7-hktw7" is available: + &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-hktw7 webserver-deployment-7f5969cbc7- deployment-2865 320406ba-c168-4e87-ac5f-e051ad5b888e 455686 0 2023-01-14 04:51:44 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.0.233" + ], + "mac": "5a:da:c0:85:7e:11", + "default": true, + "dns": {} + }]] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 7f23b900-63d1-44fa-8d46-df5ce1f3e0bc 0xc004c824e0 0xc004c824e1}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f23b900-63d1-44fa-8d46-df5ce1f3e0bc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {multus Update v1 2023-01-14 04:51:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}} status} {kubelet Update v1 2023-01-14 04:51:45 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.52.0.233\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-cgdfc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cgdfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.212,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},Res
ourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.212,PodIP:10.52.0.233,StartTime:2023-01-14 04:51:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 04:51:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:containerd://6c303e4eac6237cad1b250138d0e55e8cda2ffc32972aa12da7672f47b871107,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.52.0.233,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jan 14 04:51:50.346: INFO: Pod "webserver-deployment-7f5969cbc7-jh6pf" is available: + &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-jh6pf webserver-deployment-7f5969cbc7- deployment-2865 6eb6e9f3-9b0d-41a5-8dc1-7cc2bca206a4 455696 0 2023-01-14 04:51:44 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.0.232" + ], + "mac": "86:4f:a1:9f:9b:74", + "default": true, + "dns": {} + }]] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 7f23b900-63d1-44fa-8d46-df5ce1f3e0bc 0xc004c826f0 0xc004c826f1}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f23b900-63d1-44fa-8d46-df5ce1f3e0bc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {multus Update v1 2023-01-14 04:51:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}} status} {kubelet Update v1 2023-01-14 04:51:46 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.52.0.232\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9xjnd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9xjnd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.212,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},Res
ourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.212,PodIP:10.52.0.232,StartTime:2023-01-14 04:51:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 04:51:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:containerd://67802f648a498ac1322131700f833a09bfc7bc04bb684e78bfd36dcd079589e9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.52.0.232,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jan 14 04:51:50.346: INFO: Pod "webserver-deployment-7f5969cbc7-l5cf8" is available: + &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-l5cf8 webserver-deployment-7f5969cbc7- deployment-2865 54d6d46c-6ca4-4af4-9498-d788ebef64e0 455687 0 2023-01-14 04:51:44 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.1.55" + ], + "mac": "76:75:04:23:7a:37", + "default": true, + "dns": {} + }]] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 7f23b900-63d1-44fa-8d46-df5ce1f3e0bc 0xc004c828f0 0xc004c828f1}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f23b900-63d1-44fa-8d46-df5ce1f3e0bc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {multus Update v1 2023-01-14 04:51:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}} status} {kubelet Update v1 2023-01-14 04:51:45 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.52.1.55\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-b8mgb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b8mgb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.106,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},Res
ourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.106,PodIP:10.52.1.55,StartTime:2023-01-14 04:51:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 04:51:45 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:containerd://aa7306b2eff4a5d07205bcf97cc0eea2bd0dbe05811a7a411511b4dc08cf2fdd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.52.1.55,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jan 14 04:51:50.346: INFO: Pod "webserver-deployment-7f5969cbc7-lz2nz" is available: + &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-lz2nz webserver-deployment-7f5969cbc7- deployment-2865 e38138cd-1e3a-4f28-9e9b-abc1f0a56fd9 455682 0 2023-01-14 04:51:44 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.1.78" + ], + "mac": "aa:0e:83:a7:c5:97", + "default": true, + "dns": {} + }]] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 7f23b900-63d1-44fa-8d46-df5ce1f3e0bc 0xc004c82af0 0xc004c82af1}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f23b900-63d1-44fa-8d46-df5ce1f3e0bc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {multus Update v1 2023-01-14 04:51:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}} status} {kubelet Update v1 2023-01-14 04:51:45 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.52.1.78\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6mzq5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6mzq5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.99,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},Reso
urceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.99,PodIP:10.52.1.78,StartTime:2023-01-14 04:51:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 04:51:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:containerd://2f0e8940f9d775557ff63903e915eb681d666480a0c57f58f9ae1f3a0ad8bd84,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.52.1.78,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jan 14 04:51:50.346: INFO: Pod "webserver-deployment-7f5969cbc7-md4zf" is not available: + &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-md4zf webserver-deployment-7f5969cbc7- deployment-2865 29b3607c-d287-4c22-91ce-f6b389df215c 455813 0 2023-01-14 04:51:50 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 7f23b900-63d1-44fa-8d46-df5ce1f3e0bc 0xc004c82cf0 0xc004c82cf1}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f23b900-63d1-44fa-8d46-df5ce1f3e0bc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8nhpk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8nhpk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]Po
dResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jan 14 04:51:50.346: INFO: Pod "webserver-deployment-7f5969cbc7-n8p7b" is not available: + &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-n8p7b webserver-deployment-7f5969cbc7- deployment-2865 53a45cb8-6afe-45ed-ab9d-c461aa4de760 455803 0 2023-01-14 04:51:50 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 7f23b900-63d1-44fa-8d46-df5ce1f3e0bc 0xc004c82e30 0xc004c82e31}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f23b900-63d1-44fa-8d46-df5ce1f3e0bc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mmrnx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mmrnx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices
:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.106,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jan 14 04:51:50.346: INFO: Pod "webserver-deployment-7f5969cbc7-rbkv5" is available: + &Pod{ObjectMeta:{webserver-deployment-7f5969cbc7-rbkv5 webserver-deployment-7f5969cbc7- deployment-2865 45ec3078-89a0-4670-a936-894e3c48096c 455681 0 2023-01-14 04:51:44 +0000 UTC map[name:httpd pod-template-hash:7f5969cbc7] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.1.77" + ], + "mac": "4e:be:ff:9b:7f:cd", + "default": true, + "dns": {} + }]] [{apps/v1 ReplicaSet webserver-deployment-7f5969cbc7 7f23b900-63d1-44fa-8d46-df5ce1f3e0bc 0xc004c82f90 0xc004c82f91}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:44 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7f23b900-63d1-44fa-8d46-df5ce1f3e0bc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {multus Update v1 2023-01-14 04:51:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}} status} {kubelet Update v1 2023-01-14 04:51:45 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.52.1.77\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-lq7j6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lq7j6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.99,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.
kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.99,PodIP:10.52.1.77,StartTime:2023-01-14 04:51:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 04:51:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:containerd://997e84fd7d904117ee1629cb03112d1ecf28c6f9e5830a270b7f6a9e0baf4769,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.52.1.77,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jan 14 04:51:50.347: INFO: Pod "webserver-deployment-d9f79cb5-2bdvg" is not available: + &Pod{ObjectMeta:{webserver-deployment-d9f79cb5-2bdvg webserver-deployment-d9f79cb5- deployment-2865 826215de-eeaf-468a-a2f8-7d3f488ff14d 455795 0 2023-01-14 04:51:48 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.1.59" + ], + "mac": "6e:75:cd:fb:fe:0b", + "default": true, + "dns": {} + }]] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 4e86ac37-61a5-4d10-898f-665dce57befc 0xc004c8317f 0xc004c83190}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e86ac37-61a5-4d10-898f-665dce57befc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {multus Update v1 2023-01-14 04:51:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}} status} {kubelet Update v1 2023-01-14 04:51:49 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-znl5j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-znl5j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.106,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,T
olerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.106,PodIP:,StartTime:2023-01-14 04:51:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jan 14 04:51:50.347: INFO: Pod "webserver-deployment-d9f79cb5-ddvpp" is not available: + &Pod{ObjectMeta:{webserver-deployment-d9f79cb5-ddvpp webserver-deployment-d9f79cb5- deployment-2865 fc3dad32-98f3-47c7-bffc-be11f183ce35 455777 0 2023-01-14 04:51:48 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.1.81" + ], + "mac": "e2:36:1f:1d:03:2a", + "default": true, + "dns": {} + }]] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 4e86ac37-61a5-4d10-898f-665dce57befc 0xc004c8337f 0xc004c83390}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e86ac37-61a5-4d10-898f-665dce57befc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 04:51:48 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:hostIP":{},"f:startTime":{}}} status} {multus Update v1 2023-01-14 04:51:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}},"f:status":{"f:containerStatuses":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dhps9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dhps9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.99,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.i
o/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.99,PodIP:,StartTime:2023-01-14 04:51:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:nil,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jan 14 04:51:50.347: INFO: Pod "webserver-deployment-d9f79cb5-dxh6s" is not available: + &Pod{ObjectMeta:{webserver-deployment-d9f79cb5-dxh6s webserver-deployment-d9f79cb5- deployment-2865 74061a4e-e852-47c7-9495-457b073a1e41 455818 0 2023-01-14 04:51:50 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 4e86ac37-61a5-4d10-898f-665dce57befc 0xc004c8357f 0xc004c83590}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e86ac37-61a5-4d10-898f-665dce57befc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qs8l9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qs8l9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStat
us{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jan 14 04:51:50.347: INFO: Pod "webserver-deployment-d9f79cb5-fqn2t" is not available: + &Pod{ObjectMeta:{webserver-deployment-d9f79cb5-fqn2t webserver-deployment-d9f79cb5- deployment-2865 62571e2f-fa54-40b5-83c0-419bfb239295 455812 0 2023-01-14 04:51:50 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 4e86ac37-61a5-4d10-898f-665dce57befc 0xc004c836cf 0xc004c836e0}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e86ac37-61a5-4d10-898f-665dce57befc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-sgsl9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sgsl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGrace
PeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.212,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jan 14 04:51:50.347: INFO: Pod "webserver-deployment-d9f79cb5-h2bjh" is not available: + &Pod{ObjectMeta:{webserver-deployment-d9f79cb5-h2bjh webserver-deployment-d9f79cb5- deployment-2865 a861ec11-6313-42f0-974f-5e3ece0175e0 455816 0 2023-01-14 04:51:50 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 4e86ac37-61a5-4d10-898f-665dce57befc 0xc004c8384f 0xc004c83860}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:50 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e86ac37-61a5-4d10-898f-665dce57befc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-p5p5v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p5p5v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStat
us{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jan 14 04:51:50.348: INFO: Pod "webserver-deployment-d9f79cb5-p7fxb" is not available: + &Pod{ObjectMeta:{webserver-deployment-d9f79cb5-p7fxb webserver-deployment-d9f79cb5- deployment-2865 887caf4d-0190-477c-b5f7-9ec6eeb38729 455773 0 2023-01-14 04:51:48 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.1.80" + ], + "mac": "f6:28:52:cc:c4:8e", + "default": true, + "dns": {} + }]] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 4e86ac37-61a5-4d10-898f-665dce57befc 0xc004c8399f 0xc004c839b0}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e86ac37-61a5-4d10-898f-665dce57befc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 04:51:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:hostIP":{},"f:startTime":{}}} status} {multus Update v1 2023-01-14 04:51:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}},"f:status":{"f:containerStatuses":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2j9t5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2j9t5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.99,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},}
,Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.99,PodIP:,StartTime:2023-01-14 04:51:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:nil,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jan 14 04:51:50.348: INFO: Pod "webserver-deployment-d9f79cb5-vzd2g" is not available: + &Pod{ObjectMeta:{webserver-deployment-d9f79cb5-vzd2g webserver-deployment-d9f79cb5- deployment-2865 7c4b64f8-bc64-434b-899e-1f2287ec9843 455772 0 2023-01-14 04:51:48 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.0.235" + ], + "mac": "86:63:ad:e0:ad:32", + "default": true, + "dns": {} + }]] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 4e86ac37-61a5-4d10-898f-665dce57befc 0xc004c83b9f 0xc004c83bb0}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e86ac37-61a5-4d10-898f-665dce57befc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 04:51:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:hostIP":{},"f:startTime":{}}} status} {multus Update v1 2023-01-14 04:51:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}},"f:status":{"f:containerStatuses":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-tq956,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tq956,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.212,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},
},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.212,PodIP:,StartTime:2023-01-14 04:51:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:nil,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + Jan 14 04:51:50.348: INFO: Pod "webserver-deployment-d9f79cb5-wl628" is not available: + &Pod{ObjectMeta:{webserver-deployment-d9f79cb5-wl628 webserver-deployment-d9f79cb5- deployment-2865 eb7680cd-b7b6-40a6-b443-13ae8bf69ae0 455796 0 2023-01-14 04:51:48 +0000 UTC map[name:httpd pod-template-hash:d9f79cb5] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.1.58" + ], + "mac": "a6:cd:57:1c:8b:cf", + "default": true, + "dns": {} + }]] [{apps/v1 ReplicaSet webserver-deployment-d9f79cb5 4e86ac37-61a5-4d10-898f-665dce57befc 0xc004c83daf 0xc004c83dc0}] [] [{kube-controller-manager Update v1 2023-01-14 04:51:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e86ac37-61a5-4d10-898f-665dce57befc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {multus Update v1 2023-01-14 04:51:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}} status} {kubelet Update v1 2023-01-14 04:51:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vjsbg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vjsbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.106,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},
},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:51:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.106,PodIP:,StartTime:2023-01-14 04:51:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + [AfterEach] [sig-apps] Deployment + test/e2e/framework/node/init/init.go:32 + Jan 14 04:51:50.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] Deployment + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] Deployment + tear down framework | framework.go:193 + STEP: Destroying namespace "deployment-2865" for this suite. 01/14/23 04:51:50.37 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide podname only [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:53 +[BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:51:50.384 +Jan 14 04:51:50.384: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename projected 01/14/23 04:51:50.385 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:51:50.403 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:51:50.406 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 +[It] should provide podname only [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:53 +STEP: Creating a pod to test downward API volume plugin 01/14/23 04:51:50.408 +Jan 14 04:51:50.422: INFO: Waiting up to 5m0s for pod "downwardapi-volume-366bda8a-10af-409a-8058-b01058735130" in namespace "projected-722" to be "Succeeded or Failed" +Jan 14 04:51:50.426: INFO: Pod "downwardapi-volume-366bda8a-10af-409a-8058-b01058735130": Phase="Pending", Reason="", readiness=false. Elapsed: 4.197801ms +Jan 14 04:51:52.431: INFO: Pod "downwardapi-volume-366bda8a-10af-409a-8058-b01058735130": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.009186299s +Jan 14 04:51:54.430: INFO: Pod "downwardapi-volume-366bda8a-10af-409a-8058-b01058735130": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008626528s +Jan 14 04:51:56.430: INFO: Pod "downwardapi-volume-366bda8a-10af-409a-8058-b01058735130": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0082457s +Jan 14 04:51:58.431: INFO: Pod "downwardapi-volume-366bda8a-10af-409a-8058-b01058735130": Phase="Pending", Reason="", readiness=false. Elapsed: 8.009176769s +Jan 14 04:52:00.431: INFO: Pod "downwardapi-volume-366bda8a-10af-409a-8058-b01058735130": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.00899463s +STEP: Saw pod success 01/14/23 04:52:00.431 +Jan 14 04:52:00.431: INFO: Pod "downwardapi-volume-366bda8a-10af-409a-8058-b01058735130" satisfied condition "Succeeded or Failed" +Jan 14 04:52:00.434: INFO: Trying to get logs from node 10.0.1.106 pod downwardapi-volume-366bda8a-10af-409a-8058-b01058735130 container client-container: +STEP: delete the pod 01/14/23 04:52:00.44 +Jan 14 04:52:00.451: INFO: Waiting for pod downwardapi-volume-366bda8a-10af-409a-8058-b01058735130 to disappear +Jan 14 04:52:00.454: INFO: Pod downwardapi-volume-366bda8a-10af-409a-8058-b01058735130 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/node/init/init.go:32 +Jan 14 04:52:00.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-722" for this suite. 01/14/23 04:52:00.459 +------------------------------ +• [SLOW TEST] [10.081 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should provide podname only [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:53 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:51:50.384 + Jan 14 04:51:50.384: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename projected 01/14/23 04:51:50.385 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:51:50.403 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:51:50.406 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 + [It] should provide podname only [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:53 + STEP: Creating a pod to test downward API volume plugin 01/14/23 04:51:50.408 + Jan 14 04:51:50.422: INFO: Waiting up to 5m0s for pod "downwardapi-volume-366bda8a-10af-409a-8058-b01058735130" in namespace "projected-722" to be "Succeeded or Failed" + Jan 14 04:51:50.426: INFO: Pod "downwardapi-volume-366bda8a-10af-409a-8058-b01058735130": Phase="Pending", Reason="", readiness=false. Elapsed: 4.197801ms + Jan 14 04:51:52.431: INFO: Pod "downwardapi-volume-366bda8a-10af-409a-8058-b01058735130": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.009186299s + Jan 14 04:51:54.430: INFO: Pod "downwardapi-volume-366bda8a-10af-409a-8058-b01058735130": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008626528s + Jan 14 04:51:56.430: INFO: Pod "downwardapi-volume-366bda8a-10af-409a-8058-b01058735130": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0082457s + Jan 14 04:51:58.431: INFO: Pod "downwardapi-volume-366bda8a-10af-409a-8058-b01058735130": Phase="Pending", Reason="", readiness=false. Elapsed: 8.009176769s + Jan 14 04:52:00.431: INFO: Pod "downwardapi-volume-366bda8a-10af-409a-8058-b01058735130": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.00899463s + STEP: Saw pod success 01/14/23 04:52:00.431 + Jan 14 04:52:00.431: INFO: Pod "downwardapi-volume-366bda8a-10af-409a-8058-b01058735130" satisfied condition "Succeeded or Failed" + Jan 14 04:52:00.434: INFO: Trying to get logs from node 10.0.1.106 pod downwardapi-volume-366bda8a-10af-409a-8058-b01058735130 container client-container: + STEP: delete the pod 01/14/23 04:52:00.44 + Jan 14 04:52:00.451: INFO: Waiting for pod downwardapi-volume-366bda8a-10af-409a-8058-b01058735130 to disappear + Jan 14 04:52:00.454: INFO: Pod downwardapi-volume-366bda8a-10af-409a-8058-b01058735130 no longer exists + [AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/node/init/init.go:32 + Jan 14 04:52:00.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-722" for this suite. 01/14/23 04:52:00.459 + << End Captured GinkgoWriter Output +------------------------------ +SSS +------------------------------ +[sig-apps] Job + should delete a job [Conformance] + test/e2e/apps/job.go:481 +[BeforeEach] [sig-apps] Job + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:52:00.465 +Jan 14 04:52:00.465: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename job 01/14/23 04:52:00.466 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:52:00.478 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:52:00.48 +[BeforeEach] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:31 +[It] should delete a job [Conformance] + test/e2e/apps/job.go:481 +STEP: Creating a job 01/14/23 04:52:00.482 +STEP: Ensuring active pods == parallelism 01/14/23 04:52:00.488 +STEP: delete a job 01/14/23 04:52:04.492 +STEP: deleting Job.batch foo in namespace job-8279, will wait for the garbage collector to delete the pods 01/14/23 04:52:04.492 +Jan 14 04:52:04.552: INFO: Deleting Job.batch foo took: 6.196123ms +Jan 14 04:52:04.653: INFO: Terminating Job.batch foo pods took: 101.149059ms +STEP: Ensuring job was deleted 01/14/23 04:52:35.054 +[AfterEach] [sig-apps] Job + test/e2e/framework/node/init/init.go:32 +Jan 14 04:52:35.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] Job + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] Job + tear down framework | framework.go:193 +STEP: Destroying namespace "job-8279" for this suite. 
01/14/23 04:52:35.062 +------------------------------ +• [SLOW TEST] [34.604 seconds] +[sig-apps] Job +test/e2e/apps/framework.go:23 + should delete a job [Conformance] + test/e2e/apps/job.go:481 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Job + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:52:00.465 + Jan 14 04:52:00.465: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename job 01/14/23 04:52:00.466 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:52:00.478 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:52:00.48 + [BeforeEach] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:31 + [It] should delete a job [Conformance] + test/e2e/apps/job.go:481 + STEP: Creating a job 01/14/23 04:52:00.482 + STEP: Ensuring active pods == parallelism 01/14/23 04:52:00.488 + STEP: delete a job 01/14/23 04:52:04.492 + STEP: deleting Job.batch foo in namespace job-8279, will wait for the garbage collector to delete the pods 01/14/23 04:52:04.492 + Jan 14 04:52:04.552: INFO: Deleting Job.batch foo took: 6.196123ms + Jan 14 04:52:04.653: INFO: Terminating Job.batch foo pods took: 101.149059ms + STEP: Ensuring job was deleted 01/14/23 04:52:35.054 + [AfterEach] [sig-apps] Job + test/e2e/framework/node/init/init.go:32 + Jan 14 04:52:35.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] Job + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] Job + tear down framework | framework.go:193 + STEP: Destroying namespace "job-8279" for this suite. 01/14/23 04:52:35.062 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSS +------------------------------ +[sig-storage] Downward API volume + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:68 +[BeforeEach] [sig-storage] Downward API volume + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:52:35.07 +Jan 14 04:52:35.070: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename downward-api 01/14/23 04:52:35.07 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:52:35.085 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:52:35.088 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 +[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:68 +STEP: Creating a pod to test downward API volume plugin 01/14/23 04:52:35.09 +Jan 14 04:52:35.101: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b1930aa5-a54e-4179-8da7-4c244649acfc" in namespace "downward-api-8948" to be "Succeeded or Failed" +Jan 14 04:52:35.104: INFO: Pod "downwardapi-volume-b1930aa5-a54e-4179-8da7-4c244649acfc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.195476ms +Jan 14 04:52:37.108: INFO: Pod "downwardapi-volume-b1930aa5-a54e-4179-8da7-4c244649acfc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007331796s +Jan 14 04:52:39.110: INFO: Pod "downwardapi-volume-b1930aa5-a54e-4179-8da7-4c244649acfc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009298045s +STEP: Saw pod success 01/14/23 04:52:39.11 +Jan 14 04:52:39.110: INFO: Pod "downwardapi-volume-b1930aa5-a54e-4179-8da7-4c244649acfc" satisfied condition "Succeeded or Failed" +Jan 14 04:52:39.113: INFO: Trying to get logs from node 10.0.1.106 pod downwardapi-volume-b1930aa5-a54e-4179-8da7-4c244649acfc container client-container: +STEP: delete the pod 01/14/23 04:52:39.119 +Jan 14 04:52:39.131: INFO: Waiting for pod downwardapi-volume-b1930aa5-a54e-4179-8da7-4c244649acfc to disappear +Jan 14 04:52:39.134: INFO: Pod downwardapi-volume-b1930aa5-a54e-4179-8da7-4c244649acfc no longer exists +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 +Jan 14 04:52:39.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 +STEP: Destroying namespace "downward-api-8948" for this suite. 01/14/23 04:52:39.139 +------------------------------ +• [4.076 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:68 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Downward API volume + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:52:35.07 + Jan 14 04:52:35.070: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename downward-api 01/14/23 04:52:35.07 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:52:35.085 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:52:35.088 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 + [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:68 + STEP: Creating a pod to test downward API volume plugin 01/14/23 04:52:35.09 + Jan 14 04:52:35.101: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b1930aa5-a54e-4179-8da7-4c244649acfc" in namespace "downward-api-8948" to be "Succeeded or Failed" + Jan 14 04:52:35.104: INFO: Pod "downwardapi-volume-b1930aa5-a54e-4179-8da7-4c244649acfc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.195476ms + Jan 14 04:52:37.108: INFO: Pod "downwardapi-volume-b1930aa5-a54e-4179-8da7-4c244649acfc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007331796s + Jan 14 04:52:39.110: INFO: Pod "downwardapi-volume-b1930aa5-a54e-4179-8da7-4c244649acfc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009298045s + STEP: Saw pod success 01/14/23 04:52:39.11 + Jan 14 04:52:39.110: INFO: Pod "downwardapi-volume-b1930aa5-a54e-4179-8da7-4c244649acfc" satisfied condition "Succeeded or Failed" + Jan 14 04:52:39.113: INFO: Trying to get logs from node 10.0.1.106 pod downwardapi-volume-b1930aa5-a54e-4179-8da7-4c244649acfc container client-container: + STEP: delete the pod 01/14/23 04:52:39.119 + Jan 14 04:52:39.131: INFO: Waiting for pod downwardapi-volume-b1930aa5-a54e-4179-8da7-4c244649acfc to disappear + Jan 14 04:52:39.134: INFO: Pod downwardapi-volume-b1930aa5-a54e-4179-8da7-4c244649acfc no longer exists + [AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 + Jan 14 04:52:39.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 + STEP: Destroying namespace "downward-api-8948" for this suite. 01/14/23 04:52:39.139 + << End Captured GinkgoWriter Output +------------------------------ +[sig-apps] Daemon set [Serial] + should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + test/e2e/apps/daemon_set.go:374 +[BeforeEach] [sig-apps] Daemon set [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:52:39.146 +Jan 14 04:52:39.146: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename daemonsets 01/14/23 04:52:39.147 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:52:39.161 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:52:39.163 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:146 +[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + test/e2e/apps/daemon_set.go:374 +Jan 14 04:52:39.181: INFO: Creating simple daemon set daemon-set +STEP: Check that daemon pods launch on every node of the cluster. 
01/14/23 04:52:39.187 +Jan 14 04:52:39.191: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:52:39.191: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:52:39.191: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:52:39.195: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jan 14 04:52:39.195: INFO: Node 10.0.1.106 is running 0 daemon pod, expected 1 +Jan 14 04:52:40.201: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:52:40.201: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:52:40.201: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:52:40.204: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Jan 14 04:52:40.204: INFO: Node 10.0.1.212 is running 0 daemon pod, expected 1 +Jan 14 04:52:41.201: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:52:41.201: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:52:41.202: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:52:41.205: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 +Jan 14 04:52:41.205: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set +STEP: Update daemon pods image. 01/14/23 04:52:41.218 +STEP: Check that daemon pods images are updated. 01/14/23 04:52:41.228 +Jan 14 04:52:41.232: INFO: Wrong image for pod: daemon-set-8gwtp. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. +Jan 14 04:52:41.232: INFO: Wrong image for pod: daemon-set-dpmcl. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. +Jan 14 04:52:41.232: INFO: Wrong image for pod: daemon-set-hj8sp. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. +Jan 14 04:52:41.237: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:52:41.237: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:52:41.237: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:52:42.242: INFO: Wrong image for pod: daemon-set-dpmcl. 
Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. +Jan 14 04:52:42.242: INFO: Wrong image for pod: daemon-set-hj8sp. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. +Jan 14 04:52:42.247: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:52:42.247: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:52:42.247: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:52:43.242: INFO: Wrong image for pod: daemon-set-dpmcl. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. +Jan 14 04:52:43.242: INFO: Wrong image for pod: daemon-set-hj8sp. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. +Jan 14 04:52:43.242: INFO: Pod daemon-set-vdhlk is not available +Jan 14 04:52:43.247: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:52:43.247: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:52:43.247: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:52:44.241: INFO: Wrong image for pod: daemon-set-dpmcl. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. +Jan 14 04:52:44.246: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:52:44.246: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:52:44.246: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:52:45.242: INFO: Wrong image for pod: daemon-set-dpmcl. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. 
+Jan 14 04:52:45.242: INFO: Pod daemon-set-xlnd7 is not available +Jan 14 04:52:45.248: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:52:45.248: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:52:45.248: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:52:46.247: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:52:46.247: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:52:46.247: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:52:47.242: INFO: Pod daemon-set-bp8qc is not available +Jan 14 04:52:47.247: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:52:47.247: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:52:47.247: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +STEP: Check that daemon pods are still running on every node of the cluster. 
01/14/23 04:52:47.247 +Jan 14 04:52:47.252: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:52:47.252: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:52:47.252: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:52:47.256: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Jan 14 04:52:47.256: INFO: Node 10.0.1.212 is running 0 daemon pod, expected 1 +Jan 14 04:52:48.264: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:52:48.264: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:52:48.264: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:52:48.268: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 +Jan 14 04:52:48.268: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:111 +STEP: Deleting DaemonSet "daemon-set" 01/14/23 04:52:48.283 +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-717, will wait for the garbage collector to delete the pods 01/14/23 04:52:48.283 +Jan 14 04:52:48.342: INFO: Deleting DaemonSet.extensions daemon-set took: 6.290224ms +Jan 14 04:52:48.443: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.565018ms +Jan 14 04:52:50.747: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jan 14 04:52:50.747: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +Jan 14 04:52:50.749: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"456640"},"items":null} + +Jan 14 04:52:50.752: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"456640"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/node/init/init.go:32 +Jan 14 04:52:50.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "daemonsets-717" for this suite. 
01/14/23 04:52:50.769 +------------------------------ +• [SLOW TEST] [11.628 seconds] +[sig-apps] Daemon set [Serial] +test/e2e/apps/framework.go:23 + should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + test/e2e/apps/daemon_set.go:374 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Daemon set [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:52:39.146 + Jan 14 04:52:39.146: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename daemonsets 01/14/23 04:52:39.147 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:52:39.161 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:52:39.163 + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:146 + [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + test/e2e/apps/daemon_set.go:374 + Jan 14 04:52:39.181: INFO: Creating simple daemon set daemon-set + STEP: Check that daemon pods launch on every node of the cluster. 01/14/23 04:52:39.187 + Jan 14 04:52:39.191: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:52:39.191: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:52:39.191: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:52:39.195: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jan 14 04:52:39.195: INFO: Node 10.0.1.106 is running 0 daemon pod, expected 1 + Jan 14 04:52:40.201: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:52:40.201: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:52:40.201: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:52:40.204: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Jan 14 04:52:40.204: INFO: Node 10.0.1.212 is running 0 daemon pod, expected 1 + Jan 14 04:52:41.201: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:52:41.201: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:52:41.202: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:52:41.205: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 + Jan 14 04:52:41.205: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set + STEP: Update daemon pods 
image. 01/14/23 04:52:41.218 + STEP: Check that daemon pods images are updated. 01/14/23 04:52:41.228 + Jan 14 04:52:41.232: INFO: Wrong image for pod: daemon-set-8gwtp. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. + Jan 14 04:52:41.232: INFO: Wrong image for pod: daemon-set-dpmcl. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. + Jan 14 04:52:41.232: INFO: Wrong image for pod: daemon-set-hj8sp. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. + Jan 14 04:52:41.237: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:52:41.237: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:52:41.237: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:52:42.242: INFO: Wrong image for pod: daemon-set-dpmcl. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. + Jan 14 04:52:42.242: INFO: Wrong image for pod: daemon-set-hj8sp. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. + Jan 14 04:52:42.247: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:52:42.247: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:52:42.247: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:52:43.242: INFO: Wrong image for pod: daemon-set-dpmcl. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. + Jan 14 04:52:43.242: INFO: Wrong image for pod: daemon-set-hj8sp. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. + Jan 14 04:52:43.242: INFO: Pod daemon-set-vdhlk is not available + Jan 14 04:52:43.247: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:52:43.247: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:52:43.247: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:52:44.241: INFO: Wrong image for pod: daemon-set-dpmcl. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. 
+ Jan 14 04:52:44.246: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:52:44.246: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:52:44.246: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:52:45.242: INFO: Wrong image for pod: daemon-set-dpmcl. Expected: registry.k8s.io/e2e-test-images/agnhost:2.43, got: registry.k8s.io/e2e-test-images/httpd:2.4.38-4. + Jan 14 04:52:45.242: INFO: Pod daemon-set-xlnd7 is not available + Jan 14 04:52:45.248: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:52:45.248: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:52:45.248: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:52:46.247: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:52:46.247: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:52:46.247: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:52:47.242: INFO: Pod daemon-set-bp8qc is not available + Jan 14 04:52:47.247: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:52:47.247: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:52:47.247: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + STEP: Check that daemon pods are still running on every node of the cluster. 
01/14/23 04:52:47.247 + Jan 14 04:52:47.252: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:52:47.252: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:52:47.252: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:52:47.256: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Jan 14 04:52:47.256: INFO: Node 10.0.1.212 is running 0 daemon pod, expected 1 + Jan 14 04:52:48.264: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:52:48.264: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:52:48.264: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:52:48.268: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 + Jan 14 04:52:48.268: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:111 + STEP: Deleting DaemonSet "daemon-set" 01/14/23 04:52:48.283 + STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-717, will wait for the garbage collector to delete the pods 01/14/23 04:52:48.283 + Jan 14 04:52:48.342: INFO: Deleting DaemonSet.extensions daemon-set took: 6.290224ms + Jan 14 04:52:48.443: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.565018ms + Jan 14 04:52:50.747: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jan 14 04:52:50.747: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set + Jan 14 04:52:50.749: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"456640"},"items":null} + + Jan 14 04:52:50.752: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"456640"},"items":null} + + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/node/init/init.go:32 + Jan 14 04:52:50.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "daemonsets-717" for this suite. 
01/14/23 04:52:50.769 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should honor timeout [Conformance] + test/e2e/apimachinery/webhook.go:381 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:52:50.775 +Jan 14 04:52:50.776: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename webhook 01/14/23 04:52:50.776 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:52:50.789 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:52:50.791 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 01/14/23 04:52:50.805 +STEP: Create role binding to let webhook read extension-apiserver-authentication 01/14/23 04:52:50.998 +STEP: Deploying the webhook pod 01/14/23 04:52:51.005 +STEP: Wait for the deployment to be ready 01/14/23 04:52:51.019 +Jan 14 04:52:51.029: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 01/14/23 04:52:53.04 +STEP: Verifying the service has paired with the endpoint 01/14/23 04:52:53.049 +Jan 14 04:52:54.050: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should honor timeout [Conformance] + test/e2e/apimachinery/webhook.go:381 +STEP: Setting timeout (1s) shorter than webhook latency (5s) 01/14/23 04:52:54.054 +STEP: Registering slow webhook via the AdmissionRegistration API 01/14/23 04:52:54.054 +STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) 01/14/23 04:52:54.067 +STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore 01/14/23 04:52:55.079 +STEP: Registering slow webhook via the AdmissionRegistration API 01/14/23 04:52:55.079 +STEP: Having no error when timeout is longer than webhook latency 01/14/23 04:52:56.104 +STEP: Registering slow webhook via the AdmissionRegistration API 01/14/23 04:52:56.104 +STEP: Having no error when timeout is empty (defaulted to 10s in v1) 01/14/23 04:53:01.135 +STEP: Registering slow webhook via the AdmissionRegistration API 01/14/23 04:53:01.135 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Jan 14 04:53:06.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "webhook-8893" for this suite. 01/14/23 04:53:06.208 +STEP: Destroying namespace "webhook-8893-markers" for this suite. 
01/14/23 04:53:06.213 +------------------------------ +• [SLOW TEST] [15.446 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should honor timeout [Conformance] + test/e2e/apimachinery/webhook.go:381 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:52:50.775 + Jan 14 04:52:50.776: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename webhook 01/14/23 04:52:50.776 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:52:50.789 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:52:50.791 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 01/14/23 04:52:50.805 + STEP: Create role binding to let webhook read extension-apiserver-authentication 01/14/23 04:52:50.998 + STEP: Deploying the webhook pod 01/14/23 04:52:51.005 + STEP: Wait for the deployment to be ready 01/14/23 04:52:51.019 + Jan 14 04:52:51.029: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 01/14/23 04:52:53.04 + STEP: Verifying the service has paired with the endpoint 01/14/23 04:52:53.049 + Jan 14 04:52:54.050: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should honor timeout [Conformance] + test/e2e/apimachinery/webhook.go:381 + STEP: Setting timeout (1s) shorter than webhook latency (5s) 01/14/23 04:52:54.054 + STEP: Registering slow webhook via the AdmissionRegistration API 01/14/23 04:52:54.054 + STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) 01/14/23 04:52:54.067 + STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore 01/14/23 04:52:55.079 + STEP: Registering slow webhook via the AdmissionRegistration API 01/14/23 04:52:55.079 + STEP: Having no error when timeout is longer than webhook latency 01/14/23 04:52:56.104 + STEP: Registering slow webhook via the AdmissionRegistration API 01/14/23 04:52:56.104 + STEP: Having no error when timeout is empty (defaulted to 10s in v1) 01/14/23 04:53:01.135 + STEP: Registering slow webhook via the AdmissionRegistration API 01/14/23 04:53:01.135 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Jan 14 04:53:06.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "webhook-8893" for this suite. 01/14/23 04:53:06.208 + STEP: Destroying namespace "webhook-8893-markers" for this suite. 
01/14/23 04:53:06.213 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + removes definition from spec when one version gets changed to not be served [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:442 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:53:06.223 +Jan 14 04:53:06.223: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename crd-publish-openapi 01/14/23 04:53:06.224 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:53:06.237 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:53:06.239 +[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[It] removes definition from spec when one version gets changed to not be served [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:442 +STEP: set up a multi version CRD 01/14/23 04:53:06.242 +Jan 14 04:53:06.242: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: mark a version not serverd 01/14/23 04:53:10.091 +STEP: check the unserved version gets removed 01/14/23 04:53:10.108 +STEP: check the other version is not changed 01/14/23 04:53:11.86 +[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Jan 14 04:53:14.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "crd-publish-openapi-4940" for this suite. 
01/14/23 04:53:14.979 +------------------------------ +• [SLOW TEST] [8.763 seconds] +[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + removes definition from spec when one version gets changed to not be served [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:442 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:53:06.223 + Jan 14 04:53:06.223: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename crd-publish-openapi 01/14/23 04:53:06.224 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:53:06.237 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:53:06.239 + [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [It] removes definition from spec when one version gets changed to not be served [Conformance] + test/e2e/apimachinery/crd_publish_openapi.go:442 + STEP: set up a multi version CRD 01/14/23 04:53:06.242 + Jan 14 04:53:06.242: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: mark a version not serverd 01/14/23 04:53:10.091 + STEP: check the unserved version gets removed 01/14/23 04:53:10.108 + STEP: check the other version is not changed 01/14/23 04:53:11.86 + [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Jan 14 04:53:14.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "crd-publish-openapi-4940" for this suite. 
01/14/23 04:53:14.979 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[sig-network] HostPort + validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] + test/e2e/network/hostport.go:63 +[BeforeEach] [sig-network] HostPort + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:53:14.987 +Jan 14 04:53:14.987: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename hostport 01/14/23 04:53:14.987 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:53:15.005 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:53:15.007 +[BeforeEach] [sig-network] HostPort + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] HostPort + test/e2e/network/hostport.go:49 +[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] + test/e2e/network/hostport.go:63 +STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled 01/14/23 04:53:15.014 +Jan 14 04:53:15.030: INFO: Waiting up to 5m0s for pod "pod1" in namespace "hostport-5960" to be "running and ready" +Jan 14 04:53:15.033: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.040659ms +Jan 14 04:53:15.033: INFO: The phase of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Jan 14 04:53:17.038: INFO: Pod "pod1": Phase="Running", Reason="", readiness=true. Elapsed: 2.007608937s +Jan 14 04:53:17.038: INFO: The phase of Pod pod1 is Running (Ready = true) +Jan 14 04:53:17.038: INFO: Pod "pod1" satisfied condition "running and ready" +STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 10.0.1.99 on the node which pod1 resides and expect scheduled 01/14/23 04:53:17.038 +Jan 14 04:53:17.063: INFO: Waiting up to 5m0s for pod "pod2" in namespace "hostport-5960" to be "running and ready" +Jan 14 04:53:17.083: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. Elapsed: 19.56812ms +Jan 14 04:53:17.083: INFO: The phase of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Jan 14 04:53:19.087: INFO: Pod "pod2": Phase="Running", Reason="", readiness=true. Elapsed: 2.024095959s +Jan 14 04:53:19.087: INFO: The phase of Pod pod2 is Running (Ready = true) +Jan 14 04:53:19.087: INFO: Pod "pod2" satisfied condition "running and ready" +STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 10.0.1.99 but use UDP protocol on the node which pod2 resides 01/14/23 04:53:19.087 +Jan 14 04:53:19.098: INFO: Waiting up to 5m0s for pod "pod3" in namespace "hostport-5960" to be "running and ready" +Jan 14 04:53:19.103: INFO: Pod "pod3": Phase="Pending", Reason="", readiness=false. Elapsed: 5.514859ms +Jan 14 04:53:19.103: INFO: The phase of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) +Jan 14 04:53:21.108: INFO: Pod "pod3": Phase="Running", Reason="", readiness=true. Elapsed: 2.010577509s +Jan 14 04:53:21.108: INFO: The phase of Pod pod3 is Running (Ready = true) +Jan 14 04:53:21.108: INFO: Pod "pod3" satisfied condition "running and ready" +Jan 14 04:53:21.116: INFO: Waiting up to 5m0s for pod "e2e-host-exec" in namespace "hostport-5960" to be "running and ready" +Jan 14 04:53:21.119: INFO: Pod "e2e-host-exec": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.982156ms +Jan 14 04:53:21.119: INFO: The phase of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) +Jan 14 04:53:23.123: INFO: Pod "e2e-host-exec": Phase="Running", Reason="", readiness=true. Elapsed: 2.007362555s +Jan 14 04:53:23.123: INFO: The phase of Pod e2e-host-exec is Running (Ready = true) +Jan 14 04:53:23.123: INFO: Pod "e2e-host-exec" satisfied condition "running and ready" +STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 01/14/23 04:53:23.127 +Jan 14 04:53:23.127: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 10.0.1.99 http://127.0.0.1:54323/hostname] Namespace:hostport-5960 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jan 14 04:53:23.127: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +Jan 14 04:53:23.127: INFO: ExecWithOptions: Clientset creation +Jan 14 04:53:23.127: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/hostport-5960/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+--interface+10.0.1.99+http%3A%2F%2F127.0.0.1%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true) +STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.0.1.99, port: 54323 01/14/23 04:53:23.18 +Jan 14 04:53:23.180: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://10.0.1.99:54323/hostname] Namespace:hostport-5960 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jan 14 04:53:23.180: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +Jan 14 04:53:23.180: INFO: ExecWithOptions: Clientset creation +Jan 14 04:53:23.180: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/hostport-5960/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+http%3A%2F%2F10.0.1.99%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true) +STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.0.1.99, port: 54323 UDP 01/14/23 04:53:23.231 +Jan 14 04:53:23.231: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostname | nc -u -w 5 10.0.1.99 54323] Namespace:hostport-5960 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} +Jan 14 04:53:23.231: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +Jan 14 04:53:23.232: INFO: ExecWithOptions: Clientset creation +Jan 14 04:53:23.232: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/hostport-5960/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostname+%7C+nc+-u+-w+5+10.0.1.99+54323&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true) +[AfterEach] [sig-network] HostPort + test/e2e/framework/node/init/init.go:32 +Jan 14 04:53:28.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] HostPort + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] HostPort + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] HostPort + tear down framework | framework.go:193 +STEP: Destroying namespace "hostport-5960" for this suite. 
01/14/23 04:53:28.292 +------------------------------ +• [SLOW TEST] [13.311 seconds] +[sig-network] HostPort +test/e2e/network/common/framework.go:23 + validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] + test/e2e/network/hostport.go:63 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] HostPort + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:53:14.987 + Jan 14 04:53:14.987: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename hostport 01/14/23 04:53:14.987 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:53:15.005 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:53:15.007 + [BeforeEach] [sig-network] HostPort + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] HostPort + test/e2e/network/hostport.go:49 + [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] + test/e2e/network/hostport.go:63 + STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled 01/14/23 04:53:15.014 + Jan 14 04:53:15.030: INFO: Waiting up to 5m0s for pod "pod1" in namespace "hostport-5960" to be "running and ready" + Jan 14 04:53:15.033: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.040659ms + Jan 14 04:53:15.033: INFO: The phase of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) + Jan 14 04:53:17.038: INFO: Pod "pod1": Phase="Running", Reason="", readiness=true. Elapsed: 2.007608937s + Jan 14 04:53:17.038: INFO: The phase of Pod pod1 is Running (Ready = true) + Jan 14 04:53:17.038: INFO: Pod "pod1" satisfied condition "running and ready" + STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 10.0.1.99 on the node which pod1 resides and expect scheduled 01/14/23 04:53:17.038 + Jan 14 04:53:17.063: INFO: Waiting up to 5m0s for pod "pod2" in namespace "hostport-5960" to be "running and ready" + Jan 14 04:53:17.083: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. Elapsed: 19.56812ms + Jan 14 04:53:17.083: INFO: The phase of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) + Jan 14 04:53:19.087: INFO: Pod "pod2": Phase="Running", Reason="", readiness=true. Elapsed: 2.024095959s + Jan 14 04:53:19.087: INFO: The phase of Pod pod2 is Running (Ready = true) + Jan 14 04:53:19.087: INFO: Pod "pod2" satisfied condition "running and ready" + STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 10.0.1.99 but use UDP protocol on the node which pod2 resides 01/14/23 04:53:19.087 + Jan 14 04:53:19.098: INFO: Waiting up to 5m0s for pod "pod3" in namespace "hostport-5960" to be "running and ready" + Jan 14 04:53:19.103: INFO: Pod "pod3": Phase="Pending", Reason="", readiness=false. Elapsed: 5.514859ms + Jan 14 04:53:19.103: INFO: The phase of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) + Jan 14 04:53:21.108: INFO: Pod "pod3": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.010577509s + Jan 14 04:53:21.108: INFO: The phase of Pod pod3 is Running (Ready = true) + Jan 14 04:53:21.108: INFO: Pod "pod3" satisfied condition "running and ready" + Jan 14 04:53:21.116: INFO: Waiting up to 5m0s for pod "e2e-host-exec" in namespace "hostport-5960" to be "running and ready" + Jan 14 04:53:21.119: INFO: Pod "e2e-host-exec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.982156ms + Jan 14 04:53:21.119: INFO: The phase of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) + Jan 14 04:53:23.123: INFO: Pod "e2e-host-exec": Phase="Running", Reason="", readiness=true. Elapsed: 2.007362555s + Jan 14 04:53:23.123: INFO: The phase of Pod e2e-host-exec is Running (Ready = true) + Jan 14 04:53:23.123: INFO: Pod "e2e-host-exec" satisfied condition "running and ready" + STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 01/14/23 04:53:23.127 + Jan 14 04:53:23.127: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 10.0.1.99 http://127.0.0.1:54323/hostname] Namespace:hostport-5960 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jan 14 04:53:23.127: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + Jan 14 04:53:23.127: INFO: ExecWithOptions: Clientset creation + Jan 14 04:53:23.127: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/hostport-5960/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+--interface+10.0.1.99+http%3A%2F%2F127.0.0.1%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true) + STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.0.1.99, port: 54323 01/14/23 04:53:23.18 + Jan 14 04:53:23.180: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://10.0.1.99:54323/hostname] Namespace:hostport-5960 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jan 14 04:53:23.180: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + Jan 14 04:53:23.180: INFO: ExecWithOptions: Clientset creation + Jan 14 04:53:23.180: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/hostport-5960/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+http%3A%2F%2F10.0.1.99%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true) + STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.0.1.99, port: 54323 UDP 01/14/23 04:53:23.231 + Jan 14 04:53:23.231: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostname | nc -u -w 5 10.0.1.99 54323] Namespace:hostport-5960 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} + Jan 14 04:53:23.231: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + Jan 14 04:53:23.232: INFO: ExecWithOptions: Clientset creation + Jan 14 04:53:23.232: INFO: ExecWithOptions: execute(POST https://10.55.252.1:443/api/v1/namespaces/hostport-5960/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostname+%7C+nc+-u+-w+5+10.0.1.99+54323&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true) + [AfterEach] [sig-network] HostPort + test/e2e/framework/node/init/init.go:32 + Jan 14 04:53:28.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready + [DeferCleanup (Each)] [sig-network] HostPort + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] HostPort + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] HostPort + tear down framework | framework.go:193 + STEP: Destroying namespace "hostport-5960" for this suite. 01/14/23 04:53:28.292 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should delete a collection of services [Conformance] + test/e2e/network/service.go:3654 +[BeforeEach] [sig-network] Services + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:53:28.302 +Jan 14 04:53:28.302: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename services 01/14/23 04:53:28.303 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:53:28.315 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:53:28.318 +[BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should delete a collection of services [Conformance] + test/e2e/network/service.go:3654 +STEP: creating a collection of services 01/14/23 04:53:28.32 +Jan 14 04:53:28.320: INFO: Creating e2e-svc-a-9rbjd +Jan 14 04:53:28.329: INFO: Creating e2e-svc-b-5vhmk +Jan 14 04:53:28.339: INFO: Creating e2e-svc-c-hm9cf +STEP: deleting service collection 01/14/23 04:53:28.35 +Jan 14 04:53:28.378: INFO: Collection of services has been deleted +[AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 +Jan 14 04:53:28.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 +STEP: Destroying namespace "services-2515" for this suite. 
01/14/23 04:53:28.385 +------------------------------ +• [0.092 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should delete a collection of services [Conformance] + test/e2e/network/service.go:3654 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:53:28.302 + Jan 14 04:53:28.302: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename services 01/14/23 04:53:28.303 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:53:28.315 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:53:28.318 + [BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should delete a collection of services [Conformance] + test/e2e/network/service.go:3654 + STEP: creating a collection of services 01/14/23 04:53:28.32 + Jan 14 04:53:28.320: INFO: Creating e2e-svc-a-9rbjd + Jan 14 04:53:28.329: INFO: Creating e2e-svc-b-5vhmk + Jan 14 04:53:28.339: INFO: Creating e2e-svc-c-hm9cf + STEP: deleting service collection 01/14/23 04:53:28.35 + Jan 14 04:53:28.378: INFO: Collection of services has been deleted + [AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 + Jan 14 04:53:28.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 + STEP: Destroying namespace "services-2515" for this suite. 
01/14/23 04:53:28.385 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition + getting/updating/patching custom resource definition status sub-resource works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:145 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:53:28.394 +Jan 14 04:53:28.394: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename custom-resource-definition 01/14/23 04:53:28.395 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:53:28.411 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:53:28.413 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[It] getting/updating/patching custom resource definition status sub-resource works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:145 +Jan 14 04:53:28.416: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Jan 14 04:53:28.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "custom-resource-definition-6308" for this suite. 
01/14/23 04:53:28.972 +------------------------------ +• [0.583 seconds] +[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + Simple CustomResourceDefinition + test/e2e/apimachinery/custom_resource_definition.go:50 + getting/updating/patching custom resource definition status sub-resource works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:145 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:53:28.394 + Jan 14 04:53:28.394: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename custom-resource-definition 01/14/23 04:53:28.395 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:53:28.411 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:53:28.413 + [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] + test/e2e/apimachinery/custom_resource_definition.go:145 + Jan 14 04:53:28.416: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Jan 14 04:53:28.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "custom-resource-definition-6308" for this suite. 01/14/23 04:53:28.972 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should retry creating failed daemon pods [Conformance] + test/e2e/apps/daemon_set.go:294 +[BeforeEach] [sig-apps] Daemon set [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:53:28.977 +Jan 14 04:53:28.978: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename daemonsets 01/14/23 04:53:28.978 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:53:28.994 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:53:28.996 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:146 +[It] should retry creating failed daemon pods [Conformance] + test/e2e/apps/daemon_set.go:294 +STEP: Creating a simple DaemonSet "daemon-set" 01/14/23 04:53:29.017 +STEP: Check that daemon pods launch on every node of the cluster. 
01/14/23 04:53:29.023 +Jan 14 04:53:29.028: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:53:29.028: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:53:29.028: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:53:29.031: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jan 14 04:53:29.031: INFO: Node 10.0.1.106 is running 0 daemon pod, expected 1 +Jan 14 04:53:30.037: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:53:30.037: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:53:30.037: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:53:30.041: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Jan 14 04:53:30.041: INFO: Node 10.0.1.106 is running 0 daemon pod, expected 1 +Jan 14 04:53:31.037: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:53:31.037: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:53:31.037: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:53:31.041: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 +Jan 14 04:53:31.041: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set +STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
01/14/23 04:53:31.044 +Jan 14 04:53:31.060: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:53:31.060: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:53:31.060: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:53:31.065: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 +Jan 14 04:53:31.065: INFO: Node 10.0.1.212 is running 0 daemon pod, expected 1 +Jan 14 04:53:32.071: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:53:32.072: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:53:32.072: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jan 14 04:53:32.075: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 +Jan 14 04:53:32.075: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set +STEP: Wait for the failed daemon pod to be completely deleted. 01/14/23 04:53:32.075 +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:111 +STEP: Deleting DaemonSet "daemon-set" 01/14/23 04:53:32.082 +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3425, will wait for the garbage collector to delete the pods 01/14/23 04:53:32.082 +Jan 14 04:53:32.142: INFO: Deleting DaemonSet.extensions daemon-set took: 7.101012ms +Jan 14 04:53:32.243: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.801529ms +Jan 14 04:53:34.847: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jan 14 04:53:34.847: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +Jan 14 04:53:34.849: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"457189"},"items":null} + +Jan 14 04:53:34.852: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"457189"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/node/init/init.go:32 +Jan 14 04:53:34.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "daemonsets-3425" for this suite. 
01/14/23 04:53:34.869 +------------------------------ +• [SLOW TEST] [5.897 seconds] +[sig-apps] Daemon set [Serial] +test/e2e/apps/framework.go:23 + should retry creating failed daemon pods [Conformance] + test/e2e/apps/daemon_set.go:294 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Daemon set [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:53:28.977 + Jan 14 04:53:28.978: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename daemonsets 01/14/23 04:53:28.978 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:53:28.994 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:53:28.996 + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:146 + [It] should retry creating failed daemon pods [Conformance] + test/e2e/apps/daemon_set.go:294 + STEP: Creating a simple DaemonSet "daemon-set" 01/14/23 04:53:29.017 + STEP: Check that daemon pods launch on every node of the cluster. 01/14/23 04:53:29.023 + Jan 14 04:53:29.028: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:53:29.028: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:53:29.028: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:53:29.031: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jan 14 04:53:29.031: INFO: Node 10.0.1.106 is running 0 daemon pod, expected 1 + Jan 14 04:53:30.037: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:53:30.037: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:53:30.037: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:53:30.041: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Jan 14 04:53:30.041: INFO: Node 10.0.1.106 is running 0 daemon pod, expected 1 + Jan 14 04:53:31.037: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:53:31.037: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:53:31.037: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:53:31.041: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 + Jan 14 04:53:31.041: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set + STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
01/14/23 04:53:31.044 + Jan 14 04:53:31.060: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:53:31.060: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:53:31.060: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:53:31.065: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2 + Jan 14 04:53:31.065: INFO: Node 10.0.1.212 is running 0 daemon pod, expected 1 + Jan 14 04:53:32.071: INFO: DaemonSet pods can't tolerate node 10.0.1.14 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:53:32.072: INFO: DaemonSet pods can't tolerate node 10.0.1.160 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:53:32.072: INFO: DaemonSet pods can't tolerate node 10.0.1.231 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node + Jan 14 04:53:32.075: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3 + Jan 14 04:53:32.075: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set + STEP: Wait for the failed daemon pod to be completely deleted. 01/14/23 04:53:32.075 + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:111 + STEP: Deleting DaemonSet "daemon-set" 01/14/23 04:53:32.082 + STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3425, will wait for the garbage collector to delete the pods 01/14/23 04:53:32.082 + Jan 14 04:53:32.142: INFO: Deleting DaemonSet.extensions daemon-set took: 7.101012ms + Jan 14 04:53:32.243: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.801529ms + Jan 14 04:53:34.847: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jan 14 04:53:34.847: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set + Jan 14 04:53:34.849: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"457189"},"items":null} + + Jan 14 04:53:34.852: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"457189"},"items":null} + + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/node/init/init.go:32 + Jan 14 04:53:34.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "daemonsets-3425" for this suite. 
01/14/23 04:53:34.869 + << End Captured GinkgoWriter Output +------------------------------ +SSSS +------------------------------ +[sig-node] Variable Expansion + should allow substituting values in a container's args [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:92 +[BeforeEach] [sig-node] Variable Expansion + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:53:34.875 +Jan 14 04:53:34.875: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename var-expansion 01/14/23 04:53:34.875 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:53:34.891 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:53:34.893 +[BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:31 +[It] should allow substituting values in a container's args [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:92 +STEP: Creating a pod to test substitution in container's args 01/14/23 04:53:34.896 +Jan 14 04:53:34.904: INFO: Waiting up to 5m0s for pod "var-expansion-17250670-12dd-42b3-8d33-4c93c185c5bb" in namespace "var-expansion-4020" to be "Succeeded or Failed" +Jan 14 04:53:34.907: INFO: Pod "var-expansion-17250670-12dd-42b3-8d33-4c93c185c5bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.801333ms +Jan 14 04:53:36.912: INFO: Pod "var-expansion-17250670-12dd-42b3-8d33-4c93c185c5bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007920241s +Jan 14 04:53:38.911: INFO: Pod "var-expansion-17250670-12dd-42b3-8d33-4c93c185c5bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007084579s +STEP: Saw pod success 01/14/23 04:53:38.911 +Jan 14 04:53:38.911: INFO: Pod "var-expansion-17250670-12dd-42b3-8d33-4c93c185c5bb" satisfied condition "Succeeded or Failed" +Jan 14 04:53:38.914: INFO: Trying to get logs from node 10.0.1.106 pod var-expansion-17250670-12dd-42b3-8d33-4c93c185c5bb container dapi-container: +STEP: delete the pod 01/14/23 04:53:38.925 +Jan 14 04:53:38.938: INFO: Waiting for pod var-expansion-17250670-12dd-42b3-8d33-4c93c185c5bb to disappear +Jan 14 04:53:38.941: INFO: Pod var-expansion-17250670-12dd-42b3-8d33-4c93c185c5bb no longer exists +[AfterEach] [sig-node] Variable Expansion + test/e2e/framework/node/init/init.go:32 +Jan 14 04:53:38.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Variable Expansion + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Variable Expansion + tear down framework | framework.go:193 +STEP: Destroying namespace "var-expansion-4020" for this suite. 
01/14/23 04:53:38.949 +------------------------------ +• [4.080 seconds] +[sig-node] Variable Expansion +test/e2e/common/node/framework.go:23 + should allow substituting values in a container's args [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:92 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Variable Expansion + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:53:34.875 + Jan 14 04:53:34.875: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename var-expansion 01/14/23 04:53:34.875 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:53:34.891 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:53:34.893 + [BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:31 + [It] should allow substituting values in a container's args [NodeConformance] [Conformance] + test/e2e/common/node/expansion.go:92 + STEP: Creating a pod to test substitution in container's args 01/14/23 04:53:34.896 + Jan 14 04:53:34.904: INFO: Waiting up to 5m0s for pod "var-expansion-17250670-12dd-42b3-8d33-4c93c185c5bb" in namespace "var-expansion-4020" to be "Succeeded or Failed" + Jan 14 04:53:34.907: INFO: Pod "var-expansion-17250670-12dd-42b3-8d33-4c93c185c5bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.801333ms + Jan 14 04:53:36.912: INFO: Pod "var-expansion-17250670-12dd-42b3-8d33-4c93c185c5bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007920241s + Jan 14 04:53:38.911: INFO: Pod "var-expansion-17250670-12dd-42b3-8d33-4c93c185c5bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007084579s + STEP: Saw pod success 01/14/23 04:53:38.911 + Jan 14 04:53:38.911: INFO: Pod "var-expansion-17250670-12dd-42b3-8d33-4c93c185c5bb" satisfied condition "Succeeded or Failed" + Jan 14 04:53:38.914: INFO: Trying to get logs from node 10.0.1.106 pod var-expansion-17250670-12dd-42b3-8d33-4c93c185c5bb container dapi-container: + STEP: delete the pod 01/14/23 04:53:38.925 + Jan 14 04:53:38.938: INFO: Waiting for pod var-expansion-17250670-12dd-42b3-8d33-4c93c185c5bb to disappear + Jan 14 04:53:38.941: INFO: Pod var-expansion-17250670-12dd-42b3-8d33-4c93c185c5bb no longer exists + [AfterEach] [sig-node] Variable Expansion + test/e2e/framework/node/init/init.go:32 + Jan 14 04:53:38.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Variable Expansion + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Variable Expansion + tear down framework | framework.go:193 + STEP: Destroying namespace "var-expansion-4020" for this suite. 
01/14/23 04:53:38.949 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-node] Probing container + with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:108 +[BeforeEach] [sig-node] Probing container + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:53:38.955 +Jan 14 04:53:38.955: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename container-probe 01/14/23 04:53:38.956 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:53:38.97 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:53:38.972 +[BeforeEach] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 +[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:108 +[AfterEach] [sig-node] Probing container + test/e2e/framework/node/init/init.go:32 +Jan 14 04:54:38.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Probing container + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Probing container + tear down framework | framework.go:193 +STEP: Destroying namespace "container-probe-7541" for this suite. 01/14/23 04:54:38.991 +------------------------------ +• [SLOW TEST] [60.042 seconds] +[sig-node] Probing container +test/e2e/common/node/framework.go:23 + with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:108 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Probing container + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:53:38.955 + Jan 14 04:53:38.955: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename container-probe 01/14/23 04:53:38.956 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:53:38.97 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:53:38.972 + [BeforeEach] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 + [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:108 + [AfterEach] [sig-node] Probing container + test/e2e/framework/node/init/init.go:32 + Jan 14 04:54:38.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Probing container + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Probing container + tear down framework | framework.go:193 + STEP: Destroying namespace "container-probe-7541" for this suite. 
01/14/23 04:54:38.991 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + should unconditionally reject operations on fail closed webhook [Conformance] + test/e2e/apimachinery/webhook.go:239 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:54:38.999 +Jan 14 04:54:38.999: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename webhook 01/14/23 04:54:39 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:54:39.017 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:54:39.019 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 01/14/23 04:54:39.03 +STEP: Create role binding to let webhook read extension-apiserver-authentication 01/14/23 04:54:39.685 +STEP: Deploying the webhook pod 01/14/23 04:54:39.696 +STEP: Wait for the deployment to be ready 01/14/23 04:54:39.709 +Jan 14 04:54:39.717: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created +STEP: Deploying the webhook service 01/14/23 04:54:41.729 +STEP: Verifying the service has paired with the endpoint 01/14/23 04:54:41.738 +Jan 14 04:54:42.739: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] should unconditionally reject operations on fail closed webhook [Conformance] + test/e2e/apimachinery/webhook.go:239 +STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API 01/14/23 04:54:42.743 +STEP: create a namespace for the webhook 01/14/23 04:54:42.756 +STEP: create a configmap should be unconditionally rejected by the webhook 01/14/23 04:54:42.762 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Jan 14 04:54:42.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "webhook-3967" for this suite. 01/14/23 04:54:42.822 +STEP: Destroying namespace "webhook-3967-markers" for this suite. 
01/14/23 04:54:42.829 +------------------------------ +• [3.836 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + should unconditionally reject operations on fail closed webhook [Conformance] + test/e2e/apimachinery/webhook.go:239 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:54:38.999 + Jan 14 04:54:38.999: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename webhook 01/14/23 04:54:39 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:54:39.017 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:54:39.019 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 01/14/23 04:54:39.03 + STEP: Create role binding to let webhook read extension-apiserver-authentication 01/14/23 04:54:39.685 + STEP: Deploying the webhook pod 01/14/23 04:54:39.696 + STEP: Wait for the deployment to be ready 01/14/23 04:54:39.709 + Jan 14 04:54:39.717: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created + STEP: Deploying the webhook service 01/14/23 04:54:41.729 + STEP: Verifying the service has paired with the endpoint 01/14/23 04:54:41.738 + Jan 14 04:54:42.739: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] should unconditionally reject operations on fail closed webhook [Conformance] + test/e2e/apimachinery/webhook.go:239 + STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API 01/14/23 04:54:42.743 + STEP: create a namespace for the webhook 01/14/23 04:54:42.756 + STEP: create a configmap should be unconditionally rejected by the webhook 01/14/23 04:54:42.762 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Jan 14 04:54:42.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "webhook-3967" for this suite. 01/14/23 04:54:42.822 + STEP: Destroying namespace "webhook-3967-markers" for this suite. 
01/14/23 04:54:42.829 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide container's memory limit [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:207 +[BeforeEach] [sig-storage] Downward API volume + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:54:42.836 +Jan 14 04:54:42.837: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename downward-api 01/14/23 04:54:42.837 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:54:42.852 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:54:42.855 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 +[It] should provide container's memory limit [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:207 +STEP: Creating a pod to test downward API volume plugin 01/14/23 04:54:42.857 +Jan 14 04:54:42.867: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a2e3715d-5b08-4a5e-aa93-f89b10bc0830" in namespace "downward-api-4806" to be "Succeeded or Failed" +Jan 14 04:54:42.870: INFO: Pod "downwardapi-volume-a2e3715d-5b08-4a5e-aa93-f89b10bc0830": Phase="Pending", Reason="", readiness=false. Elapsed: 3.018478ms +Jan 14 04:54:44.874: INFO: Pod "downwardapi-volume-a2e3715d-5b08-4a5e-aa93-f89b10bc0830": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007537001s +Jan 14 04:54:46.875: INFO: Pod "downwardapi-volume-a2e3715d-5b08-4a5e-aa93-f89b10bc0830": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008428669s +STEP: Saw pod success 01/14/23 04:54:46.875 +Jan 14 04:54:46.875: INFO: Pod "downwardapi-volume-a2e3715d-5b08-4a5e-aa93-f89b10bc0830" satisfied condition "Succeeded or Failed" +Jan 14 04:54:46.879: INFO: Trying to get logs from node 10.0.1.106 pod downwardapi-volume-a2e3715d-5b08-4a5e-aa93-f89b10bc0830 container client-container: +STEP: delete the pod 01/14/23 04:54:46.884 +Jan 14 04:54:46.897: INFO: Waiting for pod downwardapi-volume-a2e3715d-5b08-4a5e-aa93-f89b10bc0830 to disappear +Jan 14 04:54:46.900: INFO: Pod downwardapi-volume-a2e3715d-5b08-4a5e-aa93-f89b10bc0830 no longer exists +[AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 +Jan 14 04:54:46.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 +STEP: Destroying namespace "downward-api-4806" for this suite. 
01/14/23 04:54:46.904 +------------------------------ +• [4.073 seconds] +[sig-storage] Downward API volume +test/e2e/common/storage/framework.go:23 + should provide container's memory limit [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:207 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Downward API volume + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:54:42.836 + Jan 14 04:54:42.837: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename downward-api 01/14/23 04:54:42.837 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:54:42.852 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:54:42.855 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-storage] Downward API volume + test/e2e/common/storage/downwardapi_volume.go:44 + [It] should provide container's memory limit [NodeConformance] [Conformance] + test/e2e/common/storage/downwardapi_volume.go:207 + STEP: Creating a pod to test downward API volume plugin 01/14/23 04:54:42.857 + Jan 14 04:54:42.867: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a2e3715d-5b08-4a5e-aa93-f89b10bc0830" in namespace "downward-api-4806" to be "Succeeded or Failed" + Jan 14 04:54:42.870: INFO: Pod "downwardapi-volume-a2e3715d-5b08-4a5e-aa93-f89b10bc0830": Phase="Pending", Reason="", readiness=false. Elapsed: 3.018478ms + Jan 14 04:54:44.874: INFO: Pod "downwardapi-volume-a2e3715d-5b08-4a5e-aa93-f89b10bc0830": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007537001s + Jan 14 04:54:46.875: INFO: Pod "downwardapi-volume-a2e3715d-5b08-4a5e-aa93-f89b10bc0830": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008428669s + STEP: Saw pod success 01/14/23 04:54:46.875 + Jan 14 04:54:46.875: INFO: Pod "downwardapi-volume-a2e3715d-5b08-4a5e-aa93-f89b10bc0830" satisfied condition "Succeeded or Failed" + Jan 14 04:54:46.879: INFO: Trying to get logs from node 10.0.1.106 pod downwardapi-volume-a2e3715d-5b08-4a5e-aa93-f89b10bc0830 container client-container: + STEP: delete the pod 01/14/23 04:54:46.884 + Jan 14 04:54:46.897: INFO: Waiting for pod downwardapi-volume-a2e3715d-5b08-4a5e-aa93-f89b10bc0830 to disappear + Jan 14 04:54:46.900: INFO: Pod downwardapi-volume-a2e3715d-5b08-4a5e-aa93-f89b10bc0830 no longer exists + [AfterEach] [sig-storage] Downward API volume + test/e2e/framework/node/init/init.go:32 + Jan 14 04:54:46.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Downward API volume + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Downward API volume + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Downward API volume + tear down framework | framework.go:193 + STEP: Destroying namespace "downward-api-4806" for this suite. 
01/14/23 04:54:46.904 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:423 +[BeforeEach] [sig-storage] ConfigMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:54:46.91 +Jan 14 04:54:46.910: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename configmap 01/14/23 04:54:46.911 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:54:46.925 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:54:46.927 +[BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:423 +STEP: Creating configMap with name configmap-test-volume-879d3462-12b9-49a1-b73e-174df76f450a 01/14/23 04:54:46.929 +STEP: Creating a pod to test consume configMaps 01/14/23 04:54:46.933 +Jan 14 04:54:46.942: INFO: Waiting up to 5m0s for pod "pod-configmaps-7ae77fb2-288e-417f-a954-22a8a3556330" in namespace "configmap-3571" to be "Succeeded or Failed" +Jan 14 04:54:46.945: INFO: Pod "pod-configmaps-7ae77fb2-288e-417f-a954-22a8a3556330": Phase="Pending", Reason="", readiness=false. Elapsed: 2.638105ms +Jan 14 04:54:48.950: INFO: Pod "pod-configmaps-7ae77fb2-288e-417f-a954-22a8a3556330": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007239088s +Jan 14 04:54:50.950: INFO: Pod "pod-configmaps-7ae77fb2-288e-417f-a954-22a8a3556330": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008094482s +STEP: Saw pod success 01/14/23 04:54:50.951 +Jan 14 04:54:50.951: INFO: Pod "pod-configmaps-7ae77fb2-288e-417f-a954-22a8a3556330" satisfied condition "Succeeded or Failed" +Jan 14 04:54:50.954: INFO: Trying to get logs from node 10.0.1.106 pod pod-configmaps-7ae77fb2-288e-417f-a954-22a8a3556330 container configmap-volume-test: +STEP: delete the pod 01/14/23 04:54:50.961 +Jan 14 04:54:51.012: INFO: Waiting for pod pod-configmaps-7ae77fb2-288e-417f-a954-22a8a3556330 to disappear +Jan 14 04:54:51.016: INFO: Pod pod-configmaps-7ae77fb2-288e-417f-a954-22a8a3556330 no longer exists +[AfterEach] [sig-storage] ConfigMap + test/e2e/framework/node/init/init.go:32 +Jan 14 04:54:51.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] ConfigMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] ConfigMap + tear down framework | framework.go:193 +STEP: Destroying namespace "configmap-3571" for this suite. 
01/14/23 04:54:51.021 +------------------------------ +• [4.118 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:423 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] ConfigMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:54:46.91 + Jan 14 04:54:46.910: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename configmap 01/14/23 04:54:46.911 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:54:46.925 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:54:46.927 + [BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:423 + STEP: Creating configMap with name configmap-test-volume-879d3462-12b9-49a1-b73e-174df76f450a 01/14/23 04:54:46.929 + STEP: Creating a pod to test consume configMaps 01/14/23 04:54:46.933 + Jan 14 04:54:46.942: INFO: Waiting up to 5m0s for pod "pod-configmaps-7ae77fb2-288e-417f-a954-22a8a3556330" in namespace "configmap-3571" to be "Succeeded or Failed" + Jan 14 04:54:46.945: INFO: Pod "pod-configmaps-7ae77fb2-288e-417f-a954-22a8a3556330": Phase="Pending", Reason="", readiness=false. Elapsed: 2.638105ms + Jan 14 04:54:48.950: INFO: Pod "pod-configmaps-7ae77fb2-288e-417f-a954-22a8a3556330": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007239088s + Jan 14 04:54:50.950: INFO: Pod "pod-configmaps-7ae77fb2-288e-417f-a954-22a8a3556330": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008094482s + STEP: Saw pod success 01/14/23 04:54:50.951 + Jan 14 04:54:50.951: INFO: Pod "pod-configmaps-7ae77fb2-288e-417f-a954-22a8a3556330" satisfied condition "Succeeded or Failed" + Jan 14 04:54:50.954: INFO: Trying to get logs from node 10.0.1.106 pod pod-configmaps-7ae77fb2-288e-417f-a954-22a8a3556330 container configmap-volume-test: + STEP: delete the pod 01/14/23 04:54:50.961 + Jan 14 04:54:51.012: INFO: Waiting for pod pod-configmaps-7ae77fb2-288e-417f-a954-22a8a3556330 to disappear + Jan 14 04:54:51.016: INFO: Pod pod-configmaps-7ae77fb2-288e-417f-a954-22a8a3556330 no longer exists + [AfterEach] [sig-storage] ConfigMap + test/e2e/framework/node/init/init.go:32 + Jan 14 04:54:51.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] ConfigMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] ConfigMap + tear down framework | framework.go:193 + STEP: Destroying namespace "configmap-3571" for this suite. 
01/14/23 04:54:51.021 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Job + should adopt matching orphans and release non-matching pods [Conformance] + test/e2e/apps/job.go:507 +[BeforeEach] [sig-apps] Job + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:54:51.029 +Jan 14 04:54:51.029: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename job 01/14/23 04:54:51.029 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:54:51.045 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:54:51.048 +[BeforeEach] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:31 +[It] should adopt matching orphans and release non-matching pods [Conformance] + test/e2e/apps/job.go:507 +STEP: Creating a job 01/14/23 04:54:51.05 +STEP: Ensuring active pods == parallelism 01/14/23 04:54:51.055 +STEP: Orphaning one of the Job's Pods 01/14/23 04:54:53.061 +Jan 14 04:54:53.581: INFO: Successfully updated pod "adopt-release-4m2c9" +STEP: Checking that the Job readopts the Pod 01/14/23 04:54:53.581 +Jan 14 04:54:53.581: INFO: Waiting up to 15m0s for pod "adopt-release-4m2c9" in namespace "job-9590" to be "adopted" +Jan 14 04:54:53.584: INFO: Pod "adopt-release-4m2c9": Phase="Running", Reason="", readiness=true. Elapsed: 3.430159ms +Jan 14 04:54:55.589: INFO: Pod "adopt-release-4m2c9": Phase="Running", Reason="", readiness=true. Elapsed: 2.008075137s +Jan 14 04:54:55.589: INFO: Pod "adopt-release-4m2c9" satisfied condition "adopted" +STEP: Removing the labels from the Job's Pod 01/14/23 04:54:55.589 +Jan 14 04:54:56.101: INFO: Successfully updated pod "adopt-release-4m2c9" +STEP: Checking that the Job releases the Pod 01/14/23 04:54:56.102 +Jan 14 04:54:56.102: INFO: Waiting up to 15m0s for pod "adopt-release-4m2c9" in namespace "job-9590" to be "released" +Jan 14 04:54:56.105: INFO: Pod "adopt-release-4m2c9": Phase="Running", Reason="", readiness=true. Elapsed: 3.514735ms +Jan 14 04:54:58.111: INFO: Pod "adopt-release-4m2c9": Phase="Running", Reason="", readiness=true. Elapsed: 2.009165234s +Jan 14 04:54:58.111: INFO: Pod "adopt-release-4m2c9" satisfied condition "released" +[AfterEach] [sig-apps] Job + test/e2e/framework/node/init/init.go:32 +Jan 14 04:54:58.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] Job + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] Job + tear down framework | framework.go:193 +STEP: Destroying namespace "job-9590" for this suite. 
01/14/23 04:54:58.116 +------------------------------ +• [SLOW TEST] [7.093 seconds] +[sig-apps] Job +test/e2e/apps/framework.go:23 + should adopt matching orphans and release non-matching pods [Conformance] + test/e2e/apps/job.go:507 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Job + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:54:51.029 + Jan 14 04:54:51.029: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename job 01/14/23 04:54:51.029 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:54:51.045 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:54:51.048 + [BeforeEach] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:31 + [It] should adopt matching orphans and release non-matching pods [Conformance] + test/e2e/apps/job.go:507 + STEP: Creating a job 01/14/23 04:54:51.05 + STEP: Ensuring active pods == parallelism 01/14/23 04:54:51.055 + STEP: Orphaning one of the Job's Pods 01/14/23 04:54:53.061 + Jan 14 04:54:53.581: INFO: Successfully updated pod "adopt-release-4m2c9" + STEP: Checking that the Job readopts the Pod 01/14/23 04:54:53.581 + Jan 14 04:54:53.581: INFO: Waiting up to 15m0s for pod "adopt-release-4m2c9" in namespace "job-9590" to be "adopted" + Jan 14 04:54:53.584: INFO: Pod "adopt-release-4m2c9": Phase="Running", Reason="", readiness=true. Elapsed: 3.430159ms + Jan 14 04:54:55.589: INFO: Pod "adopt-release-4m2c9": Phase="Running", Reason="", readiness=true. Elapsed: 2.008075137s + Jan 14 04:54:55.589: INFO: Pod "adopt-release-4m2c9" satisfied condition "adopted" + STEP: Removing the labels from the Job's Pod 01/14/23 04:54:55.589 + Jan 14 04:54:56.101: INFO: Successfully updated pod "adopt-release-4m2c9" + STEP: Checking that the Job releases the Pod 01/14/23 04:54:56.102 + Jan 14 04:54:56.102: INFO: Waiting up to 15m0s for pod "adopt-release-4m2c9" in namespace "job-9590" to be "released" + Jan 14 04:54:56.105: INFO: Pod "adopt-release-4m2c9": Phase="Running", Reason="", readiness=true. Elapsed: 3.514735ms + Jan 14 04:54:58.111: INFO: Pod "adopt-release-4m2c9": Phase="Running", Reason="", readiness=true. Elapsed: 2.009165234s + Jan 14 04:54:58.111: INFO: Pod "adopt-release-4m2c9" satisfied condition "released" + [AfterEach] [sig-apps] Job + test/e2e/framework/node/init/init.go:32 + Jan 14 04:54:58.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Job + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] Job + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] Job + tear down framework | framework.go:193 + STEP: Destroying namespace "job-9590" for this suite. 
01/14/23 04:54:58.116 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:99 +[BeforeEach] [sig-storage] ConfigMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:54:58.123 +Jan 14 04:54:58.123: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename configmap 01/14/23 04:54:58.124 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:54:58.139 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:54:58.141 +[BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:99 +STEP: Creating configMap with name configmap-test-volume-map-6ab200d0-f53f-4bee-9a50-958007b55b3d 01/14/23 04:54:58.143 +STEP: Creating a pod to test consume configMaps 01/14/23 04:54:58.149 +Jan 14 04:54:58.158: INFO: Waiting up to 5m0s for pod "pod-configmaps-1a1a21c1-8e58-40bf-88e5-9ab54bcc866c" in namespace "configmap-5500" to be "Succeeded or Failed" +Jan 14 04:54:58.160: INFO: Pod "pod-configmaps-1a1a21c1-8e58-40bf-88e5-9ab54bcc866c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.759534ms +Jan 14 04:55:00.165: INFO: Pod "pod-configmaps-1a1a21c1-8e58-40bf-88e5-9ab54bcc866c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007543598s +Jan 14 04:55:02.166: INFO: Pod "pod-configmaps-1a1a21c1-8e58-40bf-88e5-9ab54bcc866c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007895275s +STEP: Saw pod success 01/14/23 04:55:02.166 +Jan 14 04:55:02.166: INFO: Pod "pod-configmaps-1a1a21c1-8e58-40bf-88e5-9ab54bcc866c" satisfied condition "Succeeded or Failed" +Jan 14 04:55:02.169: INFO: Trying to get logs from node 10.0.1.99 pod pod-configmaps-1a1a21c1-8e58-40bf-88e5-9ab54bcc866c container agnhost-container: +STEP: delete the pod 01/14/23 04:55:02.184 +Jan 14 04:55:02.201: INFO: Waiting for pod pod-configmaps-1a1a21c1-8e58-40bf-88e5-9ab54bcc866c to disappear +Jan 14 04:55:02.204: INFO: Pod pod-configmaps-1a1a21c1-8e58-40bf-88e5-9ab54bcc866c no longer exists +[AfterEach] [sig-storage] ConfigMap + test/e2e/framework/node/init/init.go:32 +Jan 14 04:55:02.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] ConfigMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] ConfigMap + tear down framework | framework.go:193 +STEP: Destroying namespace "configmap-5500" for this suite. 
01/14/23 04:55:02.209 +------------------------------ +• [4.091 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:99 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] ConfigMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:54:58.123 + Jan 14 04:54:58.123: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename configmap 01/14/23 04:54:58.124 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:54:58.139 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:54:58.141 + [BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:99 + STEP: Creating configMap with name configmap-test-volume-map-6ab200d0-f53f-4bee-9a50-958007b55b3d 01/14/23 04:54:58.143 + STEP: Creating a pod to test consume configMaps 01/14/23 04:54:58.149 + Jan 14 04:54:58.158: INFO: Waiting up to 5m0s for pod "pod-configmaps-1a1a21c1-8e58-40bf-88e5-9ab54bcc866c" in namespace "configmap-5500" to be "Succeeded or Failed" + Jan 14 04:54:58.160: INFO: Pod "pod-configmaps-1a1a21c1-8e58-40bf-88e5-9ab54bcc866c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.759534ms + Jan 14 04:55:00.165: INFO: Pod "pod-configmaps-1a1a21c1-8e58-40bf-88e5-9ab54bcc866c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007543598s + Jan 14 04:55:02.166: INFO: Pod "pod-configmaps-1a1a21c1-8e58-40bf-88e5-9ab54bcc866c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007895275s + STEP: Saw pod success 01/14/23 04:55:02.166 + Jan 14 04:55:02.166: INFO: Pod "pod-configmaps-1a1a21c1-8e58-40bf-88e5-9ab54bcc866c" satisfied condition "Succeeded or Failed" + Jan 14 04:55:02.169: INFO: Trying to get logs from node 10.0.1.99 pod pod-configmaps-1a1a21c1-8e58-40bf-88e5-9ab54bcc866c container agnhost-container: + STEP: delete the pod 01/14/23 04:55:02.184 + Jan 14 04:55:02.201: INFO: Waiting for pod pod-configmaps-1a1a21c1-8e58-40bf-88e5-9ab54bcc866c to disappear + Jan 14 04:55:02.204: INFO: Pod pod-configmaps-1a1a21c1-8e58-40bf-88e5-9ab54bcc866c no longer exists + [AfterEach] [sig-storage] ConfigMap + test/e2e/framework/node/init/init.go:32 + Jan 14 04:55:02.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] ConfigMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] ConfigMap + tear down framework | framework.go:193 + STEP: Destroying namespace "configmap-5500" for this suite. 
01/14/23 04:55:02.209 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Container Runtime blackbox test on terminated container + should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:216 +[BeforeEach] [sig-node] Container Runtime + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:55:02.215 +Jan 14 04:55:02.215: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename container-runtime 01/14/23 04:55:02.216 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:55:02.232 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:55:02.235 +[BeforeEach] [sig-node] Container Runtime + test/e2e/framework/metrics/init/init.go:31 +[It] should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:216 +STEP: create the container 01/14/23 04:55:02.237 +STEP: wait for the container to reach Failed 01/14/23 04:55:02.247 +STEP: get the container status 01/14/23 04:55:05.268 +STEP: the container should be terminated 01/14/23 04:55:05.271 +STEP: the termination message should be set 01/14/23 04:55:05.271 +Jan 14 04:55:05.271: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- +STEP: delete the container 01/14/23 04:55:05.271 +[AfterEach] [sig-node] Container Runtime + test/e2e/framework/node/init/init.go:32 +Jan 14 04:55:05.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Container Runtime + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Container Runtime + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Container Runtime + tear down framework | framework.go:193 +STEP: Destroying namespace "container-runtime-3120" for this suite. 
01/14/23 04:55:05.297 +------------------------------ +• [3.088 seconds] +[sig-node] Container Runtime +test/e2e/common/node/framework.go:23 + blackbox test + test/e2e/common/node/runtime.go:44 + on terminated container + test/e2e/common/node/runtime.go:137 + should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:216 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Container Runtime + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:55:02.215 + Jan 14 04:55:02.215: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename container-runtime 01/14/23 04:55:02.216 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:55:02.232 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:55:02.235 + [BeforeEach] [sig-node] Container Runtime + test/e2e/framework/metrics/init/init.go:31 + [It] should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] + test/e2e/common/node/runtime.go:216 + STEP: create the container 01/14/23 04:55:02.237 + STEP: wait for the container to reach Failed 01/14/23 04:55:02.247 + STEP: get the container status 01/14/23 04:55:05.268 + STEP: the container should be terminated 01/14/23 04:55:05.271 + STEP: the termination message should be set 01/14/23 04:55:05.271 + Jan 14 04:55:05.271: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- + STEP: delete the container 01/14/23 04:55:05.271 + [AfterEach] [sig-node] Container Runtime + test/e2e/framework/node/init/init.go:32 + Jan 14 04:55:05.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Container Runtime + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Container Runtime + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Container Runtime + tear down framework | framework.go:193 + STEP: Destroying namespace "container-runtime-3120" for this suite. 
01/14/23 04:55:05.297 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:47 +[BeforeEach] [sig-storage] ConfigMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:55:05.303 +Jan 14 04:55:05.303: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename configmap 01/14/23 04:55:05.304 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:55:05.318 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:55:05.321 +[BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:47 +STEP: Creating configMap with name configmap-test-volume-8f9fb8f6-8da5-47ce-a8ef-53159784ced9 01/14/23 04:55:05.323 +STEP: Creating a pod to test consume configMaps 01/14/23 04:55:05.327 +Jan 14 04:55:05.337: INFO: Waiting up to 5m0s for pod "pod-configmaps-ed52a190-e15e-4714-9bf3-b16273663d00" in namespace "configmap-1663" to be "Succeeded or Failed" +Jan 14 04:55:05.340: INFO: Pod "pod-configmaps-ed52a190-e15e-4714-9bf3-b16273663d00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.99122ms +Jan 14 04:55:07.345: INFO: Pod "pod-configmaps-ed52a190-e15e-4714-9bf3-b16273663d00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007611141s +Jan 14 04:55:09.346: INFO: Pod "pod-configmaps-ed52a190-e15e-4714-9bf3-b16273663d00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00890252s +STEP: Saw pod success 01/14/23 04:55:09.346 +Jan 14 04:55:09.346: INFO: Pod "pod-configmaps-ed52a190-e15e-4714-9bf3-b16273663d00" satisfied condition "Succeeded or Failed" +Jan 14 04:55:09.349: INFO: Trying to get logs from node 10.0.1.99 pod pod-configmaps-ed52a190-e15e-4714-9bf3-b16273663d00 container agnhost-container: +STEP: delete the pod 01/14/23 04:55:09.355 +Jan 14 04:55:09.372: INFO: Waiting for pod pod-configmaps-ed52a190-e15e-4714-9bf3-b16273663d00 to disappear +Jan 14 04:55:09.375: INFO: Pod pod-configmaps-ed52a190-e15e-4714-9bf3-b16273663d00 no longer exists +[AfterEach] [sig-storage] ConfigMap + test/e2e/framework/node/init/init.go:32 +Jan 14 04:55:09.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] ConfigMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] ConfigMap + tear down framework | framework.go:193 +STEP: Destroying namespace "configmap-1663" for this suite. 
01/14/23 04:55:09.38 +------------------------------ +• [4.082 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:47 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] ConfigMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:55:05.303 + Jan 14 04:55:05.303: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename configmap 01/14/23 04:55:05.304 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:55:05.318 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:55:05.321 + [BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:47 + STEP: Creating configMap with name configmap-test-volume-8f9fb8f6-8da5-47ce-a8ef-53159784ced9 01/14/23 04:55:05.323 + STEP: Creating a pod to test consume configMaps 01/14/23 04:55:05.327 + Jan 14 04:55:05.337: INFO: Waiting up to 5m0s for pod "pod-configmaps-ed52a190-e15e-4714-9bf3-b16273663d00" in namespace "configmap-1663" to be "Succeeded or Failed" + Jan 14 04:55:05.340: INFO: Pod "pod-configmaps-ed52a190-e15e-4714-9bf3-b16273663d00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.99122ms + Jan 14 04:55:07.345: INFO: Pod "pod-configmaps-ed52a190-e15e-4714-9bf3-b16273663d00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007611141s + Jan 14 04:55:09.346: INFO: Pod "pod-configmaps-ed52a190-e15e-4714-9bf3-b16273663d00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00890252s + STEP: Saw pod success 01/14/23 04:55:09.346 + Jan 14 04:55:09.346: INFO: Pod "pod-configmaps-ed52a190-e15e-4714-9bf3-b16273663d00" satisfied condition "Succeeded or Failed" + Jan 14 04:55:09.349: INFO: Trying to get logs from node 10.0.1.99 pod pod-configmaps-ed52a190-e15e-4714-9bf3-b16273663d00 container agnhost-container: + STEP: delete the pod 01/14/23 04:55:09.355 + Jan 14 04:55:09.372: INFO: Waiting for pod pod-configmaps-ed52a190-e15e-4714-9bf3-b16273663d00 to disappear + Jan 14 04:55:09.375: INFO: Pod pod-configmaps-ed52a190-e15e-4714-9bf3-b16273663d00 no longer exists + [AfterEach] [sig-storage] ConfigMap + test/e2e/framework/node/init/init.go:32 + Jan 14 04:55:09.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] ConfigMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] ConfigMap + tear down framework | framework.go:193 + STEP: Destroying namespace "configmap-1663" for this suite. 
01/14/23 04:55:09.38 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Probing container + should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:135 +[BeforeEach] [sig-node] Probing container + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:55:09.387 +Jan 14 04:55:09.388: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename container-probe 01/14/23 04:55:09.388 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:55:09.404 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:55:09.406 +[BeforeEach] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 +[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:135 +STEP: Creating pod busybox-521ea933-8ca3-4f2e-acea-65e70f065030 in namespace container-probe-9837 01/14/23 04:55:09.409 +Jan 14 04:55:09.420: INFO: Waiting up to 5m0s for pod "busybox-521ea933-8ca3-4f2e-acea-65e70f065030" in namespace "container-probe-9837" to be "not pending" +Jan 14 04:55:09.426: INFO: Pod "busybox-521ea933-8ca3-4f2e-acea-65e70f065030": Phase="Pending", Reason="", readiness=false. Elapsed: 5.292624ms +Jan 14 04:55:11.430: INFO: Pod "busybox-521ea933-8ca3-4f2e-acea-65e70f065030": Phase="Running", Reason="", readiness=true. Elapsed: 2.009697791s +Jan 14 04:55:11.430: INFO: Pod "busybox-521ea933-8ca3-4f2e-acea-65e70f065030" satisfied condition "not pending" +Jan 14 04:55:11.430: INFO: Started pod busybox-521ea933-8ca3-4f2e-acea-65e70f065030 in namespace container-probe-9837 +STEP: checking the pod's current state and verifying that restartCount is present 01/14/23 04:55:11.43 +Jan 14 04:55:11.434: INFO: Initial restart count of pod busybox-521ea933-8ca3-4f2e-acea-65e70f065030 is 0 +Jan 14 04:56:01.566: INFO: Restart count of pod container-probe-9837/busybox-521ea933-8ca3-4f2e-acea-65e70f065030 is now 1 (50.131987877s elapsed) +STEP: deleting the pod 01/14/23 04:56:01.566 +[AfterEach] [sig-node] Probing container + test/e2e/framework/node/init/init.go:32 +Jan 14 04:56:01.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Probing container + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Probing container + tear down framework | framework.go:193 +STEP: Destroying namespace "container-probe-9837" for this suite. 
01/14/23 04:56:01.591 +------------------------------ +• [SLOW TEST] [52.209 seconds] +[sig-node] Probing container +test/e2e/common/node/framework.go:23 + should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:135 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Probing container + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:55:09.387 + Jan 14 04:55:09.388: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename container-probe 01/14/23 04:55:09.388 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:55:09.404 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:55:09.406 + [BeforeEach] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Probing container + test/e2e/common/node/container_probe.go:63 + [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + test/e2e/common/node/container_probe.go:135 + STEP: Creating pod busybox-521ea933-8ca3-4f2e-acea-65e70f065030 in namespace container-probe-9837 01/14/23 04:55:09.409 + Jan 14 04:55:09.420: INFO: Waiting up to 5m0s for pod "busybox-521ea933-8ca3-4f2e-acea-65e70f065030" in namespace "container-probe-9837" to be "not pending" + Jan 14 04:55:09.426: INFO: Pod "busybox-521ea933-8ca3-4f2e-acea-65e70f065030": Phase="Pending", Reason="", readiness=false. Elapsed: 5.292624ms + Jan 14 04:55:11.430: INFO: Pod "busybox-521ea933-8ca3-4f2e-acea-65e70f065030": Phase="Running", Reason="", readiness=true. Elapsed: 2.009697791s + Jan 14 04:55:11.430: INFO: Pod "busybox-521ea933-8ca3-4f2e-acea-65e70f065030" satisfied condition "not pending" + Jan 14 04:55:11.430: INFO: Started pod busybox-521ea933-8ca3-4f2e-acea-65e70f065030 in namespace container-probe-9837 + STEP: checking the pod's current state and verifying that restartCount is present 01/14/23 04:55:11.43 + Jan 14 04:55:11.434: INFO: Initial restart count of pod busybox-521ea933-8ca3-4f2e-acea-65e70f065030 is 0 + Jan 14 04:56:01.566: INFO: Restart count of pod container-probe-9837/busybox-521ea933-8ca3-4f2e-acea-65e70f065030 is now 1 (50.131987877s elapsed) + STEP: deleting the pod 01/14/23 04:56:01.566 + [AfterEach] [sig-node] Probing container + test/e2e/framework/node/init/init.go:32 + Jan 14 04:56:01.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Probing container + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Probing container + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Probing container + tear down framework | framework.go:193 + STEP: Destroying namespace "container-probe-9837" for this suite. 
01/14/23 04:56:01.591 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] RuntimeClass + should support RuntimeClasses API operations [Conformance] + test/e2e/common/node/runtimeclass.go:189 +[BeforeEach] [sig-node] RuntimeClass + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:56:01.598 +Jan 14 04:56:01.598: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename runtimeclass 01/14/23 04:56:01.598 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:56:01.615 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:56:01.618 +[BeforeEach] [sig-node] RuntimeClass + test/e2e/framework/metrics/init/init.go:31 +[It] should support RuntimeClasses API operations [Conformance] + test/e2e/common/node/runtimeclass.go:189 +STEP: getting /apis 01/14/23 04:56:01.62 +STEP: getting /apis/node.k8s.io 01/14/23 04:56:01.622 +STEP: getting /apis/node.k8s.io/v1 01/14/23 04:56:01.623 +STEP: creating 01/14/23 04:56:01.624 +STEP: watching 01/14/23 04:56:01.642 +Jan 14 04:56:01.642: INFO: starting watch +STEP: getting 01/14/23 04:56:01.648 +STEP: listing 01/14/23 04:56:01.652 +STEP: patching 01/14/23 04:56:01.655 +STEP: updating 01/14/23 04:56:01.682 +Jan 14 04:56:01.689: INFO: waiting for watch events with expected annotations +STEP: deleting 01/14/23 04:56:01.69 +STEP: deleting a collection 01/14/23 04:56:01.702 +[AfterEach] [sig-node] RuntimeClass + test/e2e/framework/node/init/init.go:32 +Jan 14 04:56:01.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] RuntimeClass + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] RuntimeClass + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] RuntimeClass + tear down framework | framework.go:193 +STEP: Destroying namespace "runtimeclass-3130" for this suite. 
01/14/23 04:56:01.725 +------------------------------ +• [0.134 seconds] +[sig-node] RuntimeClass +test/e2e/common/node/framework.go:23 + should support RuntimeClasses API operations [Conformance] + test/e2e/common/node/runtimeclass.go:189 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] RuntimeClass + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:56:01.598 + Jan 14 04:56:01.598: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename runtimeclass 01/14/23 04:56:01.598 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:56:01.615 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:56:01.618 + [BeforeEach] [sig-node] RuntimeClass + test/e2e/framework/metrics/init/init.go:31 + [It] should support RuntimeClasses API operations [Conformance] + test/e2e/common/node/runtimeclass.go:189 + STEP: getting /apis 01/14/23 04:56:01.62 + STEP: getting /apis/node.k8s.io 01/14/23 04:56:01.622 + STEP: getting /apis/node.k8s.io/v1 01/14/23 04:56:01.623 + STEP: creating 01/14/23 04:56:01.624 + STEP: watching 01/14/23 04:56:01.642 + Jan 14 04:56:01.642: INFO: starting watch + STEP: getting 01/14/23 04:56:01.648 + STEP: listing 01/14/23 04:56:01.652 + STEP: patching 01/14/23 04:56:01.655 + STEP: updating 01/14/23 04:56:01.682 + Jan 14 04:56:01.689: INFO: waiting for watch events with expected annotations + STEP: deleting 01/14/23 04:56:01.69 + STEP: deleting a collection 01/14/23 04:56:01.702 + [AfterEach] [sig-node] RuntimeClass + test/e2e/framework/node/init/init.go:32 + Jan 14 04:56:01.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] RuntimeClass + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] RuntimeClass + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] RuntimeClass + tear down framework | framework.go:193 + STEP: Destroying namespace "runtimeclass-3130" for this suite. 
01/14/23 04:56:01.725 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + deployment should support rollover [Conformance] + test/e2e/apps/deployment.go:132 +[BeforeEach] [sig-apps] Deployment + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:56:01.732 +Jan 14 04:56:01.732: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename deployment 01/14/23 04:56:01.733 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:56:01.754 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:56:01.757 +[BeforeEach] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 +[It] deployment should support rollover [Conformance] + test/e2e/apps/deployment.go:132 +Jan 14 04:56:01.768: INFO: Pod name rollover-pod: Found 0 pods out of 1 +Jan 14 04:56:06.775: INFO: Pod name rollover-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running 01/14/23 04:56:06.775 +Jan 14 04:56:06.775: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready +Jan 14 04:56:08.780: INFO: Creating deployment "test-rollover-deployment" +Jan 14 04:56:08.790: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations +Jan 14 04:56:10.797: INFO: Check revision of new replica set for deployment "test-rollover-deployment" +Jan 14 04:56:10.804: INFO: Ensure that both replica sets have 1 created replica +Jan 14 04:56:10.811: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update +Jan 14 04:56:10.820: INFO: Updating deployment test-rollover-deployment +Jan 14 04:56:10.820: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller +Jan 14 04:56:12.828: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 +Jan 14 04:56:12.835: INFO: Make sure deployment "test-rollover-deployment" is complete +Jan 14 04:56:12.842: INFO: all replica sets need to contain the pod-template-hash label +Jan 14 04:56:12.842: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 4, 56, 8, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 56, 8, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 4, 56, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 56, 8, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jan 14 04:56:14.850: INFO: all replica sets need to contain the pod-template-hash label +Jan 14 04:56:14.850: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 4, 56, 8, 0, time.Local), LastTransitionTime:time.Date(2023, 
time.January, 14, 4, 56, 8, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 4, 56, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 56, 8, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jan 14 04:56:16.852: INFO: all replica sets need to contain the pod-template-hash label +Jan 14 04:56:16.852: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 4, 56, 8, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 56, 8, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 4, 56, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 56, 8, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jan 14 04:56:18.850: INFO: all replica sets need to contain the pod-template-hash label +Jan 14 04:56:18.850: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 4, 56, 8, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 56, 8, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 4, 56, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 56, 8, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jan 14 04:56:20.850: INFO: all replica sets need to contain the pod-template-hash label +Jan 14 04:56:20.850: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 4, 56, 8, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 56, 8, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 4, 56, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 56, 8, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jan 14 04:56:22.850: INFO: +Jan 14 04:56:22.850: INFO: Ensure that both old replica sets have no replicas +[AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 +Jan 14 04:56:22.859: INFO: Deployment 
"test-rollover-deployment": +&Deployment{ObjectMeta:{test-rollover-deployment deployment-7265 454e87fb-03ab-4684-8d97-bf2dc0b253db 458489 2 2023-01-14 04:56:08 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-01-14 04:56:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 04:56:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00530de48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-01-14 04:56:08 +0000 UTC,LastTransitionTime:2023-01-14 04:56:08 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-6c6df9974f" has successfully progressed.,LastUpdateTime:2023-01-14 04:56:22 +0000 UTC,LastTransitionTime:2023-01-14 04:56:08 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + +Jan 14 04:56:22.863: INFO: New ReplicaSet "test-rollover-deployment-6c6df9974f" of Deployment "test-rollover-deployment": 
+&ReplicaSet{ObjectMeta:{test-rollover-deployment-6c6df9974f deployment-7265 545604a2-7e73-4781-9379-f96dd75bea05 458479 2 2023-01-14 04:56:10 +0000 UTC map[name:rollover-pod pod-template-hash:6c6df9974f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 454e87fb-03ab-4684-8d97-bf2dc0b253db 0xc00824b247 0xc00824b248}] [] [{kube-controller-manager Update apps/v1 2023-01-14 04:56:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"454e87fb-03ab-4684-8d97-bf2dc0b253db\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 04:56:22 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 6c6df9974f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:6c6df9974f] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00824b2f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} +Jan 14 04:56:22.863: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": +Jan 14 04:56:22.863: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-7265 91a0bfc6-c830-4357-9cd1-e3df6793b61a 458488 2 2023-01-14 04:56:01 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 454e87fb-03ab-4684-8d97-bf2dc0b253db 0xc00824b107 0xc00824b108}] [] [{e2e.test Update apps/v1 2023-01-14 04:56:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 04:56:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"454e87fb-03ab-4684-8d97-bf2dc0b253db\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2023-01-14 04:56:22 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00824b1c8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Jan 14 04:56:22.863: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-768dcbc65b deployment-7265 c966f4d2-2ff5-429b-9ada-668f6dcac635 458411 2 2023-01-14 04:56:08 +0000 UTC map[name:rollover-pod pod-template-hash:768dcbc65b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 454e87fb-03ab-4684-8d97-bf2dc0b253db 0xc00824b367 0xc00824b368}] [] [{kube-controller-manager Update apps/v1 2023-01-14 04:56:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"454e87fb-03ab-4684-8d97-bf2dc0b253db\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 04:56:10 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 
768dcbc65b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:768dcbc65b] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00824b418 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Jan 14 04:56:22.866: INFO: Pod "test-rollover-deployment-6c6df9974f-g5ssg" is available: +&Pod{ObjectMeta:{test-rollover-deployment-6c6df9974f-g5ssg test-rollover-deployment-6c6df9974f- deployment-7265 b0b15d6c-6573-4112-9694-938d8c64a3cf 458429 0 2023-01-14 04:56:10 +0000 UTC map[name:rollover-pod pod-template-hash:6c6df9974f] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.1.18" + ], + "mac": "de:8b:19:29:05:69", + "default": true, + "dns": {} +}]] [{apps/v1 ReplicaSet test-rollover-deployment-6c6df9974f 545604a2-7e73-4781-9379-f96dd75bea05 0xc00824b987 0xc00824b988}] [] [{kube-controller-manager Update v1 2023-01-14 04:56:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"545604a2-7e73-4781-9379-f96dd75bea05\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {multus Update v1 2023-01-14 04:56:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}} status} {kubelet Update v1 2023-01-14 04:56:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.52.1.18\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7b6c4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7b6c4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.106,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},Res
ourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:56:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:56:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:56:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:56:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.106,PodIP:10.52.1.18,StartTime:2023-01-14 04:56:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 04:56:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,ImageID:registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:containerd://cde483fe4208b717cc302cb14db05aaeb15a274d45df23794dfc871ee55e16a7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.52.1.18,},},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + test/e2e/framework/node/init/init.go:32 +Jan 14 04:56:22.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] Deployment + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] Deployment + tear down framework | framework.go:193 +STEP: Destroying namespace "deployment-7265" for this suite. 
01/14/23 04:56:22.871 +------------------------------ +• [SLOW TEST] [21.144 seconds] +[sig-apps] Deployment +test/e2e/apps/framework.go:23 + deployment should support rollover [Conformance] + test/e2e/apps/deployment.go:132 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Deployment + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:56:01.732 + Jan 14 04:56:01.732: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename deployment 01/14/23 04:56:01.733 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:56:01.754 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:56:01.757 + [BeforeEach] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 + [It] deployment should support rollover [Conformance] + test/e2e/apps/deployment.go:132 + Jan 14 04:56:01.768: INFO: Pod name rollover-pod: Found 0 pods out of 1 + Jan 14 04:56:06.775: INFO: Pod name rollover-pod: Found 1 pods out of 1 + STEP: ensuring each pod is running 01/14/23 04:56:06.775 + Jan 14 04:56:06.775: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready + Jan 14 04:56:08.780: INFO: Creating deployment "test-rollover-deployment" + Jan 14 04:56:08.790: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations + Jan 14 04:56:10.797: INFO: Check revision of new replica set for deployment "test-rollover-deployment" + Jan 14 04:56:10.804: INFO: Ensure that both replica sets have 1 created replica + Jan 14 04:56:10.811: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update + Jan 14 04:56:10.820: INFO: Updating deployment test-rollover-deployment + Jan 14 04:56:10.820: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller + Jan 14 04:56:12.828: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 + Jan 14 04:56:12.835: INFO: Make sure deployment "test-rollover-deployment" is complete + Jan 14 04:56:12.842: INFO: all replica sets need to contain the pod-template-hash label + Jan 14 04:56:12.842: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 4, 56, 8, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 56, 8, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 4, 56, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 56, 8, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} + Jan 14 04:56:14.850: INFO: all replica sets need to contain the pod-template-hash label + Jan 14 04:56:14.850: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 4, 56, 8, 0, 
time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 56, 8, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 4, 56, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 56, 8, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} + Jan 14 04:56:16.852: INFO: all replica sets need to contain the pod-template-hash label + Jan 14 04:56:16.852: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 4, 56, 8, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 56, 8, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 4, 56, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 56, 8, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} + Jan 14 04:56:18.850: INFO: all replica sets need to contain the pod-template-hash label + Jan 14 04:56:18.850: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 4, 56, 8, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 56, 8, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 4, 56, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 56, 8, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} + Jan 14 04:56:20.850: INFO: all replica sets need to contain the pod-template-hash label + Jan 14 04:56:20.850: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 4, 56, 8, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 56, 8, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 14, 4, 56, 12, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 14, 4, 56, 8, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6c6df9974f\" is progressing."}}, CollisionCount:(*int32)(nil)} + Jan 14 04:56:22.850: INFO: + Jan 14 04:56:22.850: INFO: Ensure that both old replica sets have no replicas + [AfterEach] [sig-apps] Deployment + 
test/e2e/apps/deployment.go:84 + Jan 14 04:56:22.859: INFO: Deployment "test-rollover-deployment": + &Deployment{ObjectMeta:{test-rollover-deployment deployment-7265 454e87fb-03ab-4684-8d97-bf2dc0b253db 458489 2 2023-01-14 04:56:08 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-01-14 04:56:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 04:56:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00530de48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-01-14 04:56:08 +0000 UTC,LastTransitionTime:2023-01-14 04:56:08 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-6c6df9974f" has successfully progressed.,LastUpdateTime:2023-01-14 04:56:22 +0000 UTC,LastTransitionTime:2023-01-14 04:56:08 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} + + Jan 14 04:56:22.863: INFO: New ReplicaSet 
"test-rollover-deployment-6c6df9974f" of Deployment "test-rollover-deployment": + &ReplicaSet{ObjectMeta:{test-rollover-deployment-6c6df9974f deployment-7265 545604a2-7e73-4781-9379-f96dd75bea05 458479 2 2023-01-14 04:56:10 +0000 UTC map[name:rollover-pod pod-template-hash:6c6df9974f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 454e87fb-03ab-4684-8d97-bf2dc0b253db 0xc00824b247 0xc00824b248}] [] [{kube-controller-manager Update apps/v1 2023-01-14 04:56:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"454e87fb-03ab-4684-8d97-bf2dc0b253db\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 04:56:22 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 6c6df9974f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:6c6df9974f] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00824b2f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} + Jan 14 04:56:22.863: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": + Jan 14 04:56:22.863: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-7265 91a0bfc6-c830-4357-9cd1-e3df6793b61a 458488 2 2023-01-14 04:56:01 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 454e87fb-03ab-4684-8d97-bf2dc0b253db 0xc00824b107 0xc00824b108}] [] [{e2e.test Update apps/v1 2023-01-14 04:56:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 04:56:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"454e87fb-03ab-4684-8d97-bf2dc0b253db\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2023-01-14 04:56:22 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00824b1c8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + Jan 14 04:56:22.863: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-768dcbc65b deployment-7265 c966f4d2-2ff5-429b-9ada-668f6dcac635 458411 2 2023-01-14 04:56:08 +0000 UTC map[name:rollover-pod pod-template-hash:768dcbc65b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 454e87fb-03ab-4684-8d97-bf2dc0b253db 0xc00824b367 0xc00824b368}] [] [{kube-controller-manager Update apps/v1 2023-01-14 04:56:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"454e87fb-03ab-4684-8d97-bf2dc0b253db\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 04:56:10 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 
768dcbc65b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:768dcbc65b] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00824b418 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + Jan 14 04:56:22.866: INFO: Pod "test-rollover-deployment-6c6df9974f-g5ssg" is available: + &Pod{ObjectMeta:{test-rollover-deployment-6c6df9974f-g5ssg test-rollover-deployment-6c6df9974f- deployment-7265 b0b15d6c-6573-4112-9694-938d8c64a3cf 458429 0 2023-01-14 04:56:10 +0000 UTC map[name:rollover-pod pod-template-hash:6c6df9974f] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.1.18" + ], + "mac": "de:8b:19:29:05:69", + "default": true, + "dns": {} + }]] [{apps/v1 ReplicaSet test-rollover-deployment-6c6df9974f 545604a2-7e73-4781-9379-f96dd75bea05 0xc00824b987 0xc00824b988}] [] [{kube-controller-manager Update v1 2023-01-14 04:56:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"545604a2-7e73-4781-9379-f96dd75bea05\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {multus Update v1 2023-01-14 04:56:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}} status} {kubelet Update v1 2023-01-14 04:56:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.52.1.18\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7b6c4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7b6c4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.106,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},Res
ourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:56:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:56:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:56:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 04:56:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.106,PodIP:10.52.1.18,StartTime:2023-01-14 04:56:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 04:56:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/agnhost:2.43,ImageID:registry.k8s.io/e2e-test-images/agnhost@sha256:16bbf38c463a4223d8cfe4da12bc61010b082a79b4bb003e2d3ba3ece5dd5f9e,ContainerID:containerd://cde483fe4208b717cc302cb14db05aaeb15a274d45df23794dfc871ee55e16a7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.52.1.18,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + [AfterEach] [sig-apps] Deployment + test/e2e/framework/node/init/init.go:32 + Jan 14 04:56:22.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] Deployment + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] Deployment + tear down framework | framework.go:193 + STEP: Destroying namespace "deployment-7265" for this suite. 01/14/23 04:56:22.871 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-node] Containers + should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:73 +[BeforeEach] [sig-node] Containers + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:56:22.877 +Jan 14 04:56:22.877: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename containers 01/14/23 04:56:22.878 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:56:22.891 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:56:22.894 +[BeforeEach] [sig-node] Containers + test/e2e/framework/metrics/init/init.go:31 +[It] should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:73 +STEP: Creating a pod to test override command 01/14/23 04:56:22.896 +Jan 14 04:56:22.906: INFO: Waiting up to 5m0s for pod "client-containers-10489014-d839-4c58-b915-0de1de7d2caf" in namespace "containers-3815" to be "Succeeded or Failed" +Jan 14 04:56:22.909: INFO: Pod "client-containers-10489014-d839-4c58-b915-0de1de7d2caf": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.98797ms +Jan 14 04:56:24.914: INFO: Pod "client-containers-10489014-d839-4c58-b915-0de1de7d2caf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007677162s +Jan 14 04:56:26.915: INFO: Pod "client-containers-10489014-d839-4c58-b915-0de1de7d2caf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008940833s +STEP: Saw pod success 01/14/23 04:56:26.915 +Jan 14 04:56:26.915: INFO: Pod "client-containers-10489014-d839-4c58-b915-0de1de7d2caf" satisfied condition "Succeeded or Failed" +Jan 14 04:56:26.919: INFO: Trying to get logs from node 10.0.1.106 pod client-containers-10489014-d839-4c58-b915-0de1de7d2caf container agnhost-container: +STEP: delete the pod 01/14/23 04:56:26.93 +Jan 14 04:56:26.942: INFO: Waiting for pod client-containers-10489014-d839-4c58-b915-0de1de7d2caf to disappear +Jan 14 04:56:26.947: INFO: Pod client-containers-10489014-d839-4c58-b915-0de1de7d2caf no longer exists +[AfterEach] [sig-node] Containers + test/e2e/framework/node/init/init.go:32 +Jan 14 04:56:26.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Containers + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Containers + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Containers + tear down framework | framework.go:193 +STEP: Destroying namespace "containers-3815" for this suite. 01/14/23 04:56:26.951 +------------------------------ +• [4.080 seconds] +[sig-node] Containers +test/e2e/common/node/framework.go:23 + should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:73 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Containers + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:56:22.877 + Jan 14 04:56:22.877: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename containers 01/14/23 04:56:22.878 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:56:22.891 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:56:22.894 + [BeforeEach] [sig-node] Containers + test/e2e/framework/metrics/init/init.go:31 + [It] should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance] + test/e2e/common/node/containers.go:73 + STEP: Creating a pod to test override command 01/14/23 04:56:22.896 + Jan 14 04:56:22.906: INFO: Waiting up to 5m0s for pod "client-containers-10489014-d839-4c58-b915-0de1de7d2caf" in namespace "containers-3815" to be "Succeeded or Failed" + Jan 14 04:56:22.909: INFO: Pod "client-containers-10489014-d839-4c58-b915-0de1de7d2caf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.98797ms + Jan 14 04:56:24.914: INFO: Pod "client-containers-10489014-d839-4c58-b915-0de1de7d2caf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007677162s + Jan 14 04:56:26.915: INFO: Pod "client-containers-10489014-d839-4c58-b915-0de1de7d2caf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008940833s + STEP: Saw pod success 01/14/23 04:56:26.915 + Jan 14 04:56:26.915: INFO: Pod "client-containers-10489014-d839-4c58-b915-0de1de7d2caf" satisfied condition "Succeeded or Failed" + Jan 14 04:56:26.919: INFO: Trying to get logs from node 10.0.1.106 pod client-containers-10489014-d839-4c58-b915-0de1de7d2caf container agnhost-container: + STEP: delete the pod 01/14/23 04:56:26.93 + Jan 14 04:56:26.942: INFO: Waiting for pod client-containers-10489014-d839-4c58-b915-0de1de7d2caf to disappear + Jan 14 04:56:26.947: INFO: Pod client-containers-10489014-d839-4c58-b915-0de1de7d2caf no longer exists + [AfterEach] [sig-node] Containers + test/e2e/framework/node/init/init.go:32 + Jan 14 04:56:26.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Containers + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Containers + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Containers + tear down framework | framework.go:193 + STEP: Destroying namespace "containers-3815" for this suite. 01/14/23 04:56:26.951 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Variable Expansion + should allow substituting values in a volume subpath [Conformance] + test/e2e/common/node/expansion.go:112 +[BeforeEach] [sig-node] Variable Expansion + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:56:26.958 +Jan 14 04:56:26.958: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename var-expansion 01/14/23 04:56:26.959 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:56:26.974 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:56:26.977 +[BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:31 +[It] should allow substituting values in a volume subpath [Conformance] + test/e2e/common/node/expansion.go:112 +STEP: Creating a pod to test substitution in volume subpath 01/14/23 04:56:26.979 +Jan 14 04:56:26.989: INFO: Waiting up to 5m0s for pod "var-expansion-3bb7f396-cc4a-48a8-903b-1c8ef60e391d" in namespace "var-expansion-7368" to be "Succeeded or Failed" +Jan 14 04:56:26.992: INFO: Pod "var-expansion-3bb7f396-cc4a-48a8-903b-1c8ef60e391d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.310662ms +Jan 14 04:56:28.997: INFO: Pod "var-expansion-3bb7f396-cc4a-48a8-903b-1c8ef60e391d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0080581s +Jan 14 04:56:30.998: INFO: Pod "var-expansion-3bb7f396-cc4a-48a8-903b-1c8ef60e391d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009525858s +STEP: Saw pod success 01/14/23 04:56:30.998 +Jan 14 04:56:30.999: INFO: Pod "var-expansion-3bb7f396-cc4a-48a8-903b-1c8ef60e391d" satisfied condition "Succeeded or Failed" +Jan 14 04:56:31.002: INFO: Trying to get logs from node 10.0.1.106 pod var-expansion-3bb7f396-cc4a-48a8-903b-1c8ef60e391d container dapi-container: +STEP: delete the pod 01/14/23 04:56:31.008 +Jan 14 04:56:31.022: INFO: Waiting for pod var-expansion-3bb7f396-cc4a-48a8-903b-1c8ef60e391d to disappear +Jan 14 04:56:31.025: INFO: Pod var-expansion-3bb7f396-cc4a-48a8-903b-1c8ef60e391d no longer exists +[AfterEach] [sig-node] Variable Expansion + test/e2e/framework/node/init/init.go:32 +Jan 14 04:56:31.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Variable Expansion + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Variable Expansion + tear down framework | framework.go:193 +STEP: Destroying namespace "var-expansion-7368" for this suite. 01/14/23 04:56:31.031 +------------------------------ +• [4.078 seconds] +[sig-node] Variable Expansion +test/e2e/common/node/framework.go:23 + should allow substituting values in a volume subpath [Conformance] + test/e2e/common/node/expansion.go:112 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Variable Expansion + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:56:26.958 + Jan 14 04:56:26.958: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename var-expansion 01/14/23 04:56:26.959 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:56:26.974 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:56:26.977 + [BeforeEach] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:31 + [It] should allow substituting values in a volume subpath [Conformance] + test/e2e/common/node/expansion.go:112 + STEP: Creating a pod to test substitution in volume subpath 01/14/23 04:56:26.979 + Jan 14 04:56:26.989: INFO: Waiting up to 5m0s for pod "var-expansion-3bb7f396-cc4a-48a8-903b-1c8ef60e391d" in namespace "var-expansion-7368" to be "Succeeded or Failed" + Jan 14 04:56:26.992: INFO: Pod "var-expansion-3bb7f396-cc4a-48a8-903b-1c8ef60e391d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.310662ms + Jan 14 04:56:28.997: INFO: Pod "var-expansion-3bb7f396-cc4a-48a8-903b-1c8ef60e391d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0080581s + Jan 14 04:56:30.998: INFO: Pod "var-expansion-3bb7f396-cc4a-48a8-903b-1c8ef60e391d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009525858s + STEP: Saw pod success 01/14/23 04:56:30.998 + Jan 14 04:56:30.999: INFO: Pod "var-expansion-3bb7f396-cc4a-48a8-903b-1c8ef60e391d" satisfied condition "Succeeded or Failed" + Jan 14 04:56:31.002: INFO: Trying to get logs from node 10.0.1.106 pod var-expansion-3bb7f396-cc4a-48a8-903b-1c8ef60e391d container dapi-container: + STEP: delete the pod 01/14/23 04:56:31.008 + Jan 14 04:56:31.022: INFO: Waiting for pod var-expansion-3bb7f396-cc4a-48a8-903b-1c8ef60e391d to disappear + Jan 14 04:56:31.025: INFO: Pod var-expansion-3bb7f396-cc4a-48a8-903b-1c8ef60e391d no longer exists + [AfterEach] [sig-node] Variable Expansion + test/e2e/framework/node/init/init.go:32 + Jan 14 04:56:31.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Variable Expansion + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Variable Expansion + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Variable Expansion + tear down framework | framework.go:193 + STEP: Destroying namespace "var-expansion-7368" for this suite. 01/14/23 04:56:31.031 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + listing mutating webhooks should work [Conformance] + test/e2e/apimachinery/webhook.go:656 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:56:31.036 +Jan 14 04:56:31.036: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename webhook 01/14/23 04:56:31.037 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:56:31.052 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:56:31.054 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 +STEP: Setting up server cert 01/14/23 04:56:31.069 +STEP: Create role binding to let webhook read extension-apiserver-authentication 01/14/23 04:56:31.415 +STEP: Deploying the webhook pod 01/14/23 04:56:31.424 +STEP: Wait for the deployment to be ready 01/14/23 04:56:31.435 +Jan 14 04:56:31.443: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set +STEP: Deploying the webhook service 01/14/23 04:56:33.453 +STEP: Verifying the service has paired with the endpoint 01/14/23 04:56:33.463 +Jan 14 04:56:34.464: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 +[It] listing mutating webhooks should work [Conformance] + test/e2e/apimachinery/webhook.go:656 +STEP: Listing all of the created validation webhooks 01/14/23 04:56:34.524 +STEP: Creating a configMap that should be mutated 01/14/23 04:56:34.538 +STEP: Deleting the collection of validation webhooks 01/14/23 04:56:34.564 +STEP: Creating a configMap that should not be mutated 01/14/23 04:56:34.607 +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 +Jan 14 04:56:34.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 +[DeferCleanup (Each)] [sig-api-machinery] 
AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 +STEP: Destroying namespace "webhook-7513" for this suite. 01/14/23 04:56:34.657 +STEP: Destroying namespace "webhook-7513-markers" for this suite. 01/14/23 04:56:34.664 +------------------------------ +• [3.633 seconds] +[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] +test/e2e/apimachinery/framework.go:23 + listing mutating webhooks should work [Conformance] + test/e2e/apimachinery/webhook.go:656 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:56:31.036 + Jan 14 04:56:31.036: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename webhook 01/14/23 04:56:31.037 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:56:31.052 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:56:31.054 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:90 + STEP: Setting up server cert 01/14/23 04:56:31.069 + STEP: Create role binding to let webhook read extension-apiserver-authentication 01/14/23 04:56:31.415 + STEP: Deploying the webhook pod 01/14/23 04:56:31.424 + STEP: Wait for the deployment to be ready 01/14/23 04:56:31.435 + Jan 14 04:56:31.443: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set + STEP: Deploying the webhook service 01/14/23 04:56:33.453 + STEP: Verifying the service has paired with the endpoint 01/14/23 04:56:33.463 + Jan 14 04:56:34.464: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 + [It] listing mutating webhooks should work [Conformance] + test/e2e/apimachinery/webhook.go:656 + STEP: Listing all of the created validation webhooks 01/14/23 04:56:34.524 + STEP: Creating a configMap that should be mutated 01/14/23 04:56:34.538 + STEP: Deleting the collection of validation webhooks 01/14/23 04:56:34.564 + STEP: Creating a configMap that should not be mutated 01/14/23 04:56:34.607 + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/node/init/init.go:32 + Jan 14 04:56:34.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/apimachinery/webhook.go:105 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] + tear down framework | framework.go:193 + STEP: Destroying namespace "webhook-7513" for this suite. 01/14/23 04:56:34.657 + STEP: Destroying namespace "webhook-7513-markers" for this suite. 
01/14/23 04:56:34.664 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] RuntimeClass + should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:55 +[BeforeEach] [sig-node] RuntimeClass + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:56:34.671 +Jan 14 04:56:34.671: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename runtimeclass 01/14/23 04:56:34.671 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:56:34.687 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:56:34.689 +[BeforeEach] [sig-node] RuntimeClass + test/e2e/framework/metrics/init/init.go:31 +[It] should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:55 +[AfterEach] [sig-node] RuntimeClass + test/e2e/framework/node/init/init.go:32 +Jan 14 04:56:34.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] RuntimeClass + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] RuntimeClass + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] RuntimeClass + tear down framework | framework.go:193 +STEP: Destroying namespace "runtimeclass-1767" for this suite. 01/14/23 04:56:34.704 +------------------------------ +• [0.038 seconds] +[sig-node] RuntimeClass +test/e2e/common/node/framework.go:23 + should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:55 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] RuntimeClass + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:56:34.671 + Jan 14 04:56:34.671: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename runtimeclass 01/14/23 04:56:34.671 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:56:34.687 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:56:34.689 + [BeforeEach] [sig-node] RuntimeClass + test/e2e/framework/metrics/init/init.go:31 + [It] should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance] + test/e2e/common/node/runtimeclass.go:55 + [AfterEach] [sig-node] RuntimeClass + test/e2e/framework/node/init/init.go:32 + Jan 14 04:56:34.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] RuntimeClass + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] RuntimeClass + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] RuntimeClass + tear down framework | framework.go:193 + STEP: Destroying namespace "runtimeclass-1767" for this suite. 
01/14/23 04:56:34.704 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should patch a Namespace [Conformance] + test/e2e/apimachinery/namespace.go:268 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:56:34.71 +Jan 14 04:56:34.710: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename namespaces 01/14/23 04:56:34.71 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:56:34.725 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:56:34.728 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/metrics/init/init.go:31 +[It] should patch a Namespace [Conformance] + test/e2e/apimachinery/namespace.go:268 +STEP: creating a Namespace 01/14/23 04:56:34.73 +STEP: patching the Namespace 01/14/23 04:56:34.745 +STEP: get the Namespace and ensuring it has the label 01/14/23 04:56:34.749 +[AfterEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/node/init/init.go:32 +Jan 14 04:56:34.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "namespaces-5648" for this suite. 01/14/23 04:56:34.756 +STEP: Destroying namespace "nspatchtest-26eeabd4-d2a1-440d-b3ac-15a81aebe9f1-1967" for this suite. 01/14/23 04:56:34.761 +------------------------------ +• [0.057 seconds] +[sig-api-machinery] Namespaces [Serial] +test/e2e/apimachinery/framework.go:23 + should patch a Namespace [Conformance] + test/e2e/apimachinery/namespace.go:268 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Namespaces [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:56:34.71 + Jan 14 04:56:34.710: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename namespaces 01/14/23 04:56:34.71 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:56:34.725 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:56:34.728 + [BeforeEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/metrics/init/init.go:31 + [It] should patch a Namespace [Conformance] + test/e2e/apimachinery/namespace.go:268 + STEP: creating a Namespace 01/14/23 04:56:34.73 + STEP: patching the Namespace 01/14/23 04:56:34.745 + STEP: get the Namespace and ensuring it has the label 01/14/23 04:56:34.749 + [AfterEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/node/init/init.go:32 + Jan 14 04:56:34.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "namespaces-5648" for this suite. 
01/14/23 04:56:34.756 + STEP: Destroying namespace "nspatchtest-26eeabd4-d2a1-440d-b3ac-15a81aebe9f1-1967" for this suite. 01/14/23 04:56:34.761 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSS +------------------------------ +[sig-apps] ReplicaSet + should list and delete a collection of ReplicaSets [Conformance] + test/e2e/apps/replica_set.go:165 +[BeforeEach] [sig-apps] ReplicaSet + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:56:34.767 +Jan 14 04:56:34.767: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename replicaset 01/14/23 04:56:34.768 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:56:34.782 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:56:34.784 +[BeforeEach] [sig-apps] ReplicaSet + test/e2e/framework/metrics/init/init.go:31 +[It] should list and delete a collection of ReplicaSets [Conformance] + test/e2e/apps/replica_set.go:165 +STEP: Create a ReplicaSet 01/14/23 04:56:34.787 +STEP: Verify that the required pods have come up 01/14/23 04:56:34.792 +Jan 14 04:56:34.796: INFO: Pod name sample-pod: Found 0 pods out of 3 +Jan 14 04:56:39.803: INFO: Pod name sample-pod: Found 3 pods out of 3 +STEP: ensuring each pod is running 01/14/23 04:56:39.803 +Jan 14 04:56:39.806: INFO: Replica Status: {Replicas:3 FullyLabeledReplicas:3 ReadyReplicas:3 AvailableReplicas:3 ObservedGeneration:1 Conditions:[]} +STEP: Listing all ReplicaSets 01/14/23 04:56:39.806 +STEP: DeleteCollection of the ReplicaSets 01/14/23 04:56:39.81 +STEP: After DeleteCollection verify that ReplicaSets have been deleted 01/14/23 04:56:39.82 +[AfterEach] [sig-apps] ReplicaSet + test/e2e/framework/node/init/init.go:32 +Jan 14 04:56:39.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] ReplicaSet + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] ReplicaSet + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] ReplicaSet + tear down framework | framework.go:193 +STEP: Destroying namespace "replicaset-8850" for this suite. 
01/14/23 04:56:39.828 +------------------------------ +• [SLOW TEST] [5.070 seconds] +[sig-apps] ReplicaSet +test/e2e/apps/framework.go:23 + should list and delete a collection of ReplicaSets [Conformance] + test/e2e/apps/replica_set.go:165 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] ReplicaSet + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:56:34.767 + Jan 14 04:56:34.767: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename replicaset 01/14/23 04:56:34.768 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:56:34.782 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:56:34.784 + [BeforeEach] [sig-apps] ReplicaSet + test/e2e/framework/metrics/init/init.go:31 + [It] should list and delete a collection of ReplicaSets [Conformance] + test/e2e/apps/replica_set.go:165 + STEP: Create a ReplicaSet 01/14/23 04:56:34.787 + STEP: Verify that the required pods have come up 01/14/23 04:56:34.792 + Jan 14 04:56:34.796: INFO: Pod name sample-pod: Found 0 pods out of 3 + Jan 14 04:56:39.803: INFO: Pod name sample-pod: Found 3 pods out of 3 + STEP: ensuring each pod is running 01/14/23 04:56:39.803 + Jan 14 04:56:39.806: INFO: Replica Status: {Replicas:3 FullyLabeledReplicas:3 ReadyReplicas:3 AvailableReplicas:3 ObservedGeneration:1 Conditions:[]} + STEP: Listing all ReplicaSets 01/14/23 04:56:39.806 + STEP: DeleteCollection of the ReplicaSets 01/14/23 04:56:39.81 + STEP: After DeleteCollection verify that ReplicaSets have been deleted 01/14/23 04:56:39.82 + [AfterEach] [sig-apps] ReplicaSet + test/e2e/framework/node/init/init.go:32 + Jan 14 04:56:39.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] ReplicaSet + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] ReplicaSet + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] ReplicaSet + tear down framework | framework.go:193 + STEP: Destroying namespace "replicaset-8850" for this suite. 01/14/23 04:56:39.828 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that NodeSelector is respected if matching [Conformance] + test/e2e/scheduling/predicates.go:466 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:56:39.838 +Jan 14 04:56:39.838: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename sched-pred 01/14/23 04:56:39.839 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:56:39.857 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:56:39.86 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:97 +Jan 14 04:56:39.862: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Jan 14 04:56:39.872: INFO: Waiting for terminating namespaces to be deleted... 
+Jan 14 04:56:39.875: INFO: +Logging pods the apiserver thinks is on node 10.0.1.106 before test +Jan 14 04:56:39.886: INFO: csi-cbs-node-5wf2s from kube-system started at 2023-01-13 08:11:20 +0000 UTC (2 container statuses recorded) +Jan 14 04:56:39.886: INFO: Container cbs-csi ready: true, restart count 1 +Jan 14 04:56:39.886: INFO: Container driver-registrar ready: true, restart count 0 +Jan 14 04:56:39.886: INFO: ip-masq-agent-rx9k6 from kube-system started at 2023-01-13 08:11:20 +0000 UTC (1 container statuses recorded) +Jan 14 04:56:39.886: INFO: Container ip-masq-agent ready: true, restart count 0 +Jan 14 04:56:39.886: INFO: kube-proxy-s6xxg from kube-system started at 2023-01-13 08:11:20 +0000 UTC (1 container statuses recorded) +Jan 14 04:56:39.886: INFO: Container kube-proxy ready: true, restart count 0 +Jan 14 04:56:39.886: INFO: tke-bridge-agent-frbcm from kube-system started at 2023-01-13 08:11:20 +0000 UTC (1 container statuses recorded) +Jan 14 04:56:39.886: INFO: Container tke-bridge-agent ready: true, restart count 1 +Jan 14 04:56:39.886: INFO: tke-cni-agent-nv7pn from kube-system started at 2023-01-13 08:11:20 +0000 UTC (1 container statuses recorded) +Jan 14 04:56:39.886: INFO: Container tke-cni-agent ready: true, restart count 0 +Jan 14 04:56:39.886: INFO: tke-monitor-agent-xhdhg from kube-system started at 2023-01-13 08:11:20 +0000 UTC (1 container statuses recorded) +Jan 14 04:56:39.886: INFO: Container tke-monitor-agent ready: true, restart count 0 +Jan 14 04:56:39.886: INFO: sonobuoy-systemd-logs-daemon-set-0fc25bd2d6604eac-jkjnf from sonobuoy started at 2023-01-14 03:34:54 +0000 UTC (2 container statuses recorded) +Jan 14 04:56:39.886: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jan 14 04:56:39.886: INFO: Container systemd-logs ready: true, restart count 0 +Jan 14 04:56:39.886: INFO: ss2-1 from statefulset-8862 started at 2023-01-14 04:31:56 +0000 UTC (1 container statuses recorded) +Jan 14 04:56:39.886: INFO: Container webserver ready: true, restart count 0 +Jan 14 04:56:39.886: INFO: +Logging pods the apiserver thinks is on node 10.0.1.212 before test +Jan 14 04:56:39.895: INFO: kubernetes-proxy-544fb566b4-zpvz8 from default started at 2023-01-14 03:20:34 +0000 UTC (1 container statuses recorded) +Jan 14 04:56:39.895: INFO: Container kubernetes-proxy ready: false, restart count 23 +Jan 14 04:56:39.895: INFO: csi-cbs-node-ddpcx from kube-system started at 2023-01-13 08:11:16 +0000 UTC (2 container statuses recorded) +Jan 14 04:56:39.895: INFO: Container cbs-csi ready: true, restart count 0 +Jan 14 04:56:39.895: INFO: Container driver-registrar ready: true, restart count 0 +Jan 14 04:56:39.895: INFO: ip-masq-agent-8wxxs from kube-system started at 2023-01-13 08:11:16 +0000 UTC (1 container statuses recorded) +Jan 14 04:56:39.895: INFO: Container ip-masq-agent ready: true, restart count 0 +Jan 14 04:56:39.895: INFO: kube-proxy-npt42 from kube-system started at 2023-01-13 08:11:16 +0000 UTC (1 container statuses recorded) +Jan 14 04:56:39.895: INFO: Container kube-proxy ready: true, restart count 0 +Jan 14 04:56:39.896: INFO: tke-bridge-agent-4rffd from kube-system started at 2023-01-13 08:11:16 +0000 UTC (1 container statuses recorded) +Jan 14 04:56:39.896: INFO: Container tke-bridge-agent ready: true, restart count 1 +Jan 14 04:56:39.896: INFO: tke-cni-agent-xmggs from kube-system started at 2023-01-13 08:11:16 +0000 UTC (1 container statuses recorded) +Jan 14 04:56:39.896: INFO: Container tke-cni-agent ready: true, restart count 0 +Jan 14 
04:56:39.896: INFO: tke-monitor-agent-6gtt6 from kube-system started at 2023-01-13 08:11:16 +0000 UTC (1 container statuses recorded) +Jan 14 04:56:39.896: INFO: Container tke-monitor-agent ready: true, restart count 0 +Jan 14 04:56:39.896: INFO: sonobuoy-e2e-job-1b1a46fb40e34267 from sonobuoy started at 2023-01-14 03:34:54 +0000 UTC (2 container statuses recorded) +Jan 14 04:56:39.896: INFO: Container e2e ready: true, restart count 0 +Jan 14 04:56:39.896: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jan 14 04:56:39.896: INFO: sonobuoy-systemd-logs-daemon-set-0fc25bd2d6604eac-lgxk7 from sonobuoy started at 2023-01-14 03:34:54 +0000 UTC (2 container statuses recorded) +Jan 14 04:56:39.896: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jan 14 04:56:39.896: INFO: Container systemd-logs ready: true, restart count 0 +Jan 14 04:56:39.896: INFO: ss2-2 from statefulset-8862 started at 2023-01-14 02:33:30 +0000 UTC (1 container statuses recorded) +Jan 14 04:56:39.896: INFO: Container webserver ready: true, restart count 0 +Jan 14 04:56:39.896: INFO: +Logging pods the apiserver thinks is on node 10.0.1.99 before test +Jan 14 04:56:39.906: INFO: kubernetes-proxy-544fb566b4-782fh from default started at 2023-01-14 04:31:56 +0000 UTC (1 container statuses recorded) +Jan 14 04:56:39.906: INFO: Container kubernetes-proxy ready: false, restart count 9 +Jan 14 04:56:39.906: INFO: csi-cbs-node-q4l9b from kube-system started at 2023-01-13 08:11:43 +0000 UTC (2 container statuses recorded) +Jan 14 04:56:39.906: INFO: Container cbs-csi ready: true, restart count 0 +Jan 14 04:56:39.906: INFO: Container driver-registrar ready: true, restart count 0 +Jan 14 04:56:39.906: INFO: ip-masq-agent-kmrrk from kube-system started at 2023-01-13 08:11:43 +0000 UTC (1 container statuses recorded) +Jan 14 04:56:39.906: INFO: Container ip-masq-agent ready: true, restart count 0 +Jan 14 04:56:39.906: INFO: kube-proxy-g4qjh from kube-system started at 2023-01-13 08:11:43 +0000 UTC (1 container statuses recorded) +Jan 14 04:56:39.906: INFO: Container kube-proxy ready: true, restart count 0 +Jan 14 04:56:39.906: INFO: tke-bridge-agent-hzv6c from kube-system started at 2023-01-13 08:11:43 +0000 UTC (1 container statuses recorded) +Jan 14 04:56:39.906: INFO: Container tke-bridge-agent ready: true, restart count 1 +Jan 14 04:56:39.906: INFO: tke-cni-agent-7mk9b from kube-system started at 2023-01-13 08:11:43 +0000 UTC (1 container statuses recorded) +Jan 14 04:56:39.906: INFO: Container tke-cni-agent ready: true, restart count 0 +Jan 14 04:56:39.906: INFO: tke-monitor-agent-g27mp from kube-system started at 2023-01-13 08:11:43 +0000 UTC (1 container statuses recorded) +Jan 14 04:56:39.906: INFO: Container tke-monitor-agent ready: true, restart count 0 +Jan 14 04:56:39.906: INFO: sonobuoy from sonobuoy started at 2023-01-14 03:34:53 +0000 UTC (1 container statuses recorded) +Jan 14 04:56:39.906: INFO: Container kube-sonobuoy ready: true, restart count 0 +Jan 14 04:56:39.906: INFO: sonobuoy-systemd-logs-daemon-set-0fc25bd2d6604eac-nwh62 from sonobuoy started at 2023-01-14 03:34:54 +0000 UTC (2 container statuses recorded) +Jan 14 04:56:39.906: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jan 14 04:56:39.906: INFO: Container systemd-logs ready: true, restart count 0 +Jan 14 04:56:39.906: INFO: ss2-0 from statefulset-8862 started at 2023-01-14 02:33:10 +0000 UTC (1 container statuses recorded) +Jan 14 04:56:39.906: INFO: Container webserver ready: true, restart count 0 +[It] validates 
that NodeSelector is respected if matching [Conformance] + test/e2e/scheduling/predicates.go:466 +STEP: Trying to launch a pod without a label to get a node which can launch it. 01/14/23 04:56:39.906 +Jan 14 04:56:39.914: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-8496" to be "running" +Jan 14 04:56:39.919: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 4.508326ms +Jan 14 04:56:41.923: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.008708777s +Jan 14 04:56:41.923: INFO: Pod "without-label" satisfied condition "running" +STEP: Explicitly delete pod here to free the resource it takes. 01/14/23 04:56:41.926 +STEP: Trying to apply a random label on the found node. 01/14/23 04:56:41.939 +STEP: verifying the node has the label kubernetes.io/e2e-0f4a0fe0-b375-4e83-bda0-c826eaf74ba2 42 01/14/23 04:56:41.949 +STEP: Trying to relaunch the pod, now with labels. 01/14/23 04:56:41.952 +Jan 14 04:56:41.958: INFO: Waiting up to 5m0s for pod "with-labels" in namespace "sched-pred-8496" to be "not pending" +Jan 14 04:56:41.961: INFO: Pod "with-labels": Phase="Pending", Reason="", readiness=false. Elapsed: 2.955715ms +Jan 14 04:56:43.967: INFO: Pod "with-labels": Phase="Running", Reason="", readiness=true. Elapsed: 2.008333283s +Jan 14 04:56:43.967: INFO: Pod "with-labels" satisfied condition "not pending" +STEP: removing the label kubernetes.io/e2e-0f4a0fe0-b375-4e83-bda0-c826eaf74ba2 off the node 10.0.1.106 01/14/23 04:56:43.97 +STEP: verifying the node doesn't have the label kubernetes.io/e2e-0f4a0fe0-b375-4e83-bda0-c826eaf74ba2 01/14/23 04:56:43.987 +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/node/init/init.go:32 +Jan 14 04:56:43.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:88 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "sched-pred-8496" for this suite. 01/14/23 04:56:44.007 +------------------------------ +• [4.175 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +test/e2e/scheduling/framework.go:40 + validates that NodeSelector is respected if matching [Conformance] + test/e2e/scheduling/predicates.go:466 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:56:39.838 + Jan 14 04:56:39.838: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename sched-pred 01/14/23 04:56:39.839 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:56:39.857 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:56:39.86 + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:97 + Jan 14 04:56:39.862: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready + Jan 14 04:56:39.872: INFO: Waiting for terminating namespaces to be deleted... 
+ Jan 14 04:56:39.875: INFO: + Logging pods the apiserver thinks is on node 10.0.1.106 before test + Jan 14 04:56:39.886: INFO: csi-cbs-node-5wf2s from kube-system started at 2023-01-13 08:11:20 +0000 UTC (2 container statuses recorded) + Jan 14 04:56:39.886: INFO: Container cbs-csi ready: true, restart count 1 + Jan 14 04:56:39.886: INFO: Container driver-registrar ready: true, restart count 0 + Jan 14 04:56:39.886: INFO: ip-masq-agent-rx9k6 from kube-system started at 2023-01-13 08:11:20 +0000 UTC (1 container statuses recorded) + Jan 14 04:56:39.886: INFO: Container ip-masq-agent ready: true, restart count 0 + Jan 14 04:56:39.886: INFO: kube-proxy-s6xxg from kube-system started at 2023-01-13 08:11:20 +0000 UTC (1 container statuses recorded) + Jan 14 04:56:39.886: INFO: Container kube-proxy ready: true, restart count 0 + Jan 14 04:56:39.886: INFO: tke-bridge-agent-frbcm from kube-system started at 2023-01-13 08:11:20 +0000 UTC (1 container statuses recorded) + Jan 14 04:56:39.886: INFO: Container tke-bridge-agent ready: true, restart count 1 + Jan 14 04:56:39.886: INFO: tke-cni-agent-nv7pn from kube-system started at 2023-01-13 08:11:20 +0000 UTC (1 container statuses recorded) + Jan 14 04:56:39.886: INFO: Container tke-cni-agent ready: true, restart count 0 + Jan 14 04:56:39.886: INFO: tke-monitor-agent-xhdhg from kube-system started at 2023-01-13 08:11:20 +0000 UTC (1 container statuses recorded) + Jan 14 04:56:39.886: INFO: Container tke-monitor-agent ready: true, restart count 0 + Jan 14 04:56:39.886: INFO: sonobuoy-systemd-logs-daemon-set-0fc25bd2d6604eac-jkjnf from sonobuoy started at 2023-01-14 03:34:54 +0000 UTC (2 container statuses recorded) + Jan 14 04:56:39.886: INFO: Container sonobuoy-worker ready: true, restart count 0 + Jan 14 04:56:39.886: INFO: Container systemd-logs ready: true, restart count 0 + Jan 14 04:56:39.886: INFO: ss2-1 from statefulset-8862 started at 2023-01-14 04:31:56 +0000 UTC (1 container statuses recorded) + Jan 14 04:56:39.886: INFO: Container webserver ready: true, restart count 0 + Jan 14 04:56:39.886: INFO: + Logging pods the apiserver thinks is on node 10.0.1.212 before test + Jan 14 04:56:39.895: INFO: kubernetes-proxy-544fb566b4-zpvz8 from default started at 2023-01-14 03:20:34 +0000 UTC (1 container statuses recorded) + Jan 14 04:56:39.895: INFO: Container kubernetes-proxy ready: false, restart count 23 + Jan 14 04:56:39.895: INFO: csi-cbs-node-ddpcx from kube-system started at 2023-01-13 08:11:16 +0000 UTC (2 container statuses recorded) + Jan 14 04:56:39.895: INFO: Container cbs-csi ready: true, restart count 0 + Jan 14 04:56:39.895: INFO: Container driver-registrar ready: true, restart count 0 + Jan 14 04:56:39.895: INFO: ip-masq-agent-8wxxs from kube-system started at 2023-01-13 08:11:16 +0000 UTC (1 container statuses recorded) + Jan 14 04:56:39.895: INFO: Container ip-masq-agent ready: true, restart count 0 + Jan 14 04:56:39.895: INFO: kube-proxy-npt42 from kube-system started at 2023-01-13 08:11:16 +0000 UTC (1 container statuses recorded) + Jan 14 04:56:39.895: INFO: Container kube-proxy ready: true, restart count 0 + Jan 14 04:56:39.896: INFO: tke-bridge-agent-4rffd from kube-system started at 2023-01-13 08:11:16 +0000 UTC (1 container statuses recorded) + Jan 14 04:56:39.896: INFO: Container tke-bridge-agent ready: true, restart count 1 + Jan 14 04:56:39.896: INFO: tke-cni-agent-xmggs from kube-system started at 2023-01-13 08:11:16 +0000 UTC (1 container statuses recorded) + Jan 14 04:56:39.896: INFO: Container tke-cni-agent ready: 
true, restart count 0 + Jan 14 04:56:39.896: INFO: tke-monitor-agent-6gtt6 from kube-system started at 2023-01-13 08:11:16 +0000 UTC (1 container statuses recorded) + Jan 14 04:56:39.896: INFO: Container tke-monitor-agent ready: true, restart count 0 + Jan 14 04:56:39.896: INFO: sonobuoy-e2e-job-1b1a46fb40e34267 from sonobuoy started at 2023-01-14 03:34:54 +0000 UTC (2 container statuses recorded) + Jan 14 04:56:39.896: INFO: Container e2e ready: true, restart count 0 + Jan 14 04:56:39.896: INFO: Container sonobuoy-worker ready: true, restart count 0 + Jan 14 04:56:39.896: INFO: sonobuoy-systemd-logs-daemon-set-0fc25bd2d6604eac-lgxk7 from sonobuoy started at 2023-01-14 03:34:54 +0000 UTC (2 container statuses recorded) + Jan 14 04:56:39.896: INFO: Container sonobuoy-worker ready: true, restart count 0 + Jan 14 04:56:39.896: INFO: Container systemd-logs ready: true, restart count 0 + Jan 14 04:56:39.896: INFO: ss2-2 from statefulset-8862 started at 2023-01-14 02:33:30 +0000 UTC (1 container statuses recorded) + Jan 14 04:56:39.896: INFO: Container webserver ready: true, restart count 0 + Jan 14 04:56:39.896: INFO: + Logging pods the apiserver thinks is on node 10.0.1.99 before test + Jan 14 04:56:39.906: INFO: kubernetes-proxy-544fb566b4-782fh from default started at 2023-01-14 04:31:56 +0000 UTC (1 container statuses recorded) + Jan 14 04:56:39.906: INFO: Container kubernetes-proxy ready: false, restart count 9 + Jan 14 04:56:39.906: INFO: csi-cbs-node-q4l9b from kube-system started at 2023-01-13 08:11:43 +0000 UTC (2 container statuses recorded) + Jan 14 04:56:39.906: INFO: Container cbs-csi ready: true, restart count 0 + Jan 14 04:56:39.906: INFO: Container driver-registrar ready: true, restart count 0 + Jan 14 04:56:39.906: INFO: ip-masq-agent-kmrrk from kube-system started at 2023-01-13 08:11:43 +0000 UTC (1 container statuses recorded) + Jan 14 04:56:39.906: INFO: Container ip-masq-agent ready: true, restart count 0 + Jan 14 04:56:39.906: INFO: kube-proxy-g4qjh from kube-system started at 2023-01-13 08:11:43 +0000 UTC (1 container statuses recorded) + Jan 14 04:56:39.906: INFO: Container kube-proxy ready: true, restart count 0 + Jan 14 04:56:39.906: INFO: tke-bridge-agent-hzv6c from kube-system started at 2023-01-13 08:11:43 +0000 UTC (1 container statuses recorded) + Jan 14 04:56:39.906: INFO: Container tke-bridge-agent ready: true, restart count 1 + Jan 14 04:56:39.906: INFO: tke-cni-agent-7mk9b from kube-system started at 2023-01-13 08:11:43 +0000 UTC (1 container statuses recorded) + Jan 14 04:56:39.906: INFO: Container tke-cni-agent ready: true, restart count 0 + Jan 14 04:56:39.906: INFO: tke-monitor-agent-g27mp from kube-system started at 2023-01-13 08:11:43 +0000 UTC (1 container statuses recorded) + Jan 14 04:56:39.906: INFO: Container tke-monitor-agent ready: true, restart count 0 + Jan 14 04:56:39.906: INFO: sonobuoy from sonobuoy started at 2023-01-14 03:34:53 +0000 UTC (1 container statuses recorded) + Jan 14 04:56:39.906: INFO: Container kube-sonobuoy ready: true, restart count 0 + Jan 14 04:56:39.906: INFO: sonobuoy-systemd-logs-daemon-set-0fc25bd2d6604eac-nwh62 from sonobuoy started at 2023-01-14 03:34:54 +0000 UTC (2 container statuses recorded) + Jan 14 04:56:39.906: INFO: Container sonobuoy-worker ready: true, restart count 0 + Jan 14 04:56:39.906: INFO: Container systemd-logs ready: true, restart count 0 + Jan 14 04:56:39.906: INFO: ss2-0 from statefulset-8862 started at 2023-01-14 02:33:10 +0000 UTC (1 container statuses recorded) + Jan 14 04:56:39.906: INFO: 
Container webserver ready: true, restart count 0 + [It] validates that NodeSelector is respected if matching [Conformance] + test/e2e/scheduling/predicates.go:466 + STEP: Trying to launch a pod without a label to get a node which can launch it. 01/14/23 04:56:39.906 + Jan 14 04:56:39.914: INFO: Waiting up to 1m0s for pod "without-label" in namespace "sched-pred-8496" to be "running" + Jan 14 04:56:39.919: INFO: Pod "without-label": Phase="Pending", Reason="", readiness=false. Elapsed: 4.508326ms + Jan 14 04:56:41.923: INFO: Pod "without-label": Phase="Running", Reason="", readiness=true. Elapsed: 2.008708777s + Jan 14 04:56:41.923: INFO: Pod "without-label" satisfied condition "running" + STEP: Explicitly delete pod here to free the resource it takes. 01/14/23 04:56:41.926 + STEP: Trying to apply a random label on the found node. 01/14/23 04:56:41.939 + STEP: verifying the node has the label kubernetes.io/e2e-0f4a0fe0-b375-4e83-bda0-c826eaf74ba2 42 01/14/23 04:56:41.949 + STEP: Trying to relaunch the pod, now with labels. 01/14/23 04:56:41.952 + Jan 14 04:56:41.958: INFO: Waiting up to 5m0s for pod "with-labels" in namespace "sched-pred-8496" to be "not pending" + Jan 14 04:56:41.961: INFO: Pod "with-labels": Phase="Pending", Reason="", readiness=false. Elapsed: 2.955715ms + Jan 14 04:56:43.967: INFO: Pod "with-labels": Phase="Running", Reason="", readiness=true. Elapsed: 2.008333283s + Jan 14 04:56:43.967: INFO: Pod "with-labels" satisfied condition "not pending" + STEP: removing the label kubernetes.io/e2e-0f4a0fe0-b375-4e83-bda0-c826eaf74ba2 off the node 10.0.1.106 01/14/23 04:56:43.97 + STEP: verifying the node doesn't have the label kubernetes.io/e2e-0f4a0fe0-b375-4e83-bda0-c826eaf74ba2 01/14/23 04:56:43.987 + [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/node/init/init.go:32 + Jan 14 04:56:43.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:88 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "sched-pred-8496" for this suite. 
01/14/23 04:56:44.007 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should serve multiport endpoints from pods [Conformance] + test/e2e/network/service.go:848 +[BeforeEach] [sig-network] Services + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:56:44.014 +Jan 14 04:56:44.014: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename services 01/14/23 04:56:44.015 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:56:44.029 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:56:44.032 +[BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should serve multiport endpoints from pods [Conformance] + test/e2e/network/service.go:848 +STEP: creating service multi-endpoint-test in namespace services-8320 01/14/23 04:56:44.035 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8320 to expose endpoints map[] 01/14/23 04:56:44.044 +Jan 14 04:56:44.047: INFO: Failed go get Endpoints object: endpoints "multi-endpoint-test" not found +Jan 14 04:56:45.055: INFO: successfully validated that service multi-endpoint-test in namespace services-8320 exposes endpoints map[] +STEP: Creating pod pod1 in namespace services-8320 01/14/23 04:56:45.055 +Jan 14 04:56:45.064: INFO: Waiting up to 5m0s for pod "pod1" in namespace "services-8320" to be "running and ready" +Jan 14 04:56:45.067: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.064174ms +Jan 14 04:56:45.067: INFO: The phase of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) +Jan 14 04:56:47.073: INFO: Pod "pod1": Phase="Running", Reason="", readiness=true. Elapsed: 2.008561029s +Jan 14 04:56:47.073: INFO: The phase of Pod pod1 is Running (Ready = true) +Jan 14 04:56:47.073: INFO: Pod "pod1" satisfied condition "running and ready" +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8320 to expose endpoints map[pod1:[100]] 01/14/23 04:56:47.076 +Jan 14 04:56:47.087: INFO: successfully validated that service multi-endpoint-test in namespace services-8320 exposes endpoints map[pod1:[100]] +STEP: Creating pod pod2 in namespace services-8320 01/14/23 04:56:47.087 +Jan 14 04:56:47.097: INFO: Waiting up to 5m0s for pod "pod2" in namespace "services-8320" to be "running and ready" +Jan 14 04:56:47.100: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.147379ms +Jan 14 04:56:47.100: INFO: The phase of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) +Jan 14 04:56:49.104: INFO: Pod "pod2": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.006751142s +Jan 14 04:56:49.104: INFO: The phase of Pod pod2 is Running (Ready = true) +Jan 14 04:56:49.104: INFO: Pod "pod2" satisfied condition "running and ready" +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8320 to expose endpoints map[pod1:[100] pod2:[101]] 01/14/23 04:56:49.107 +Jan 14 04:56:49.120: INFO: successfully validated that service multi-endpoint-test in namespace services-8320 exposes endpoints map[pod1:[100] pod2:[101]] +STEP: Checking if the Service forwards traffic to pods 01/14/23 04:56:49.12 +Jan 14 04:56:49.120: INFO: Creating new exec pod +Jan 14 04:56:49.128: INFO: Waiting up to 5m0s for pod "execpodtvq49" in namespace "services-8320" to be "running" +Jan 14 04:56:49.131: INFO: Pod "execpodtvq49": Phase="Pending", Reason="", readiness=false. Elapsed: 3.075452ms +Jan 14 04:56:51.136: INFO: Pod "execpodtvq49": Phase="Running", Reason="", readiness=true. Elapsed: 2.008268531s +Jan 14 04:56:51.136: INFO: Pod "execpodtvq49" satisfied condition "running" +Jan 14 04:56:52.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-8320 exec execpodtvq49 -- /bin/sh -x -c nc -v -z -w 2 multi-endpoint-test 80' +Jan 14 04:56:52.253: INFO: stderr: "+ nc -v -z -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" +Jan 14 04:56:52.253: INFO: stdout: "" +Jan 14 04:56:52.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-8320 exec execpodtvq49 -- /bin/sh -x -c nc -v -z -w 2 10.55.252.243 80' +Jan 14 04:56:52.365: INFO: stderr: "+ nc -v -z -w 2 10.55.252.243 80\nConnection to 10.55.252.243 80 port [tcp/http] succeeded!\n" +Jan 14 04:56:52.366: INFO: stdout: "" +Jan 14 04:56:52.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-8320 exec execpodtvq49 -- /bin/sh -x -c nc -v -z -w 2 multi-endpoint-test 81' +Jan 14 04:56:52.477: INFO: stderr: "+ nc -v -z -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" +Jan 14 04:56:52.477: INFO: stdout: "" +Jan 14 04:56:52.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-8320 exec execpodtvq49 -- /bin/sh -x -c nc -v -z -w 2 10.55.252.243 81' +Jan 14 04:56:52.584: INFO: stderr: "+ nc -v -z -w 2 10.55.252.243 81\nConnection to 10.55.252.243 81 port [tcp/*] succeeded!\n" +Jan 14 04:56:52.584: INFO: stdout: "" +STEP: Deleting pod pod1 in namespace services-8320 01/14/23 04:56:52.584 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8320 to expose endpoints map[pod2:[101]] 01/14/23 04:56:52.599 +Jan 14 04:56:52.610: INFO: successfully validated that service multi-endpoint-test in namespace services-8320 exposes endpoints map[pod2:[101]] +STEP: Deleting pod pod2 in namespace services-8320 01/14/23 04:56:52.61 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8320 to expose endpoints map[] 01/14/23 04:56:52.627 +Jan 14 04:56:52.635: INFO: successfully validated that service multi-endpoint-test in namespace services-8320 exposes endpoints map[] +[AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 +Jan 14 04:56:52.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 
+[DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 +STEP: Destroying namespace "services-8320" for this suite. 01/14/23 04:56:52.658 +------------------------------ +• [SLOW TEST] [8.653 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should serve multiport endpoints from pods [Conformance] + test/e2e/network/service.go:848 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:56:44.014 + Jan 14 04:56:44.014: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename services 01/14/23 04:56:44.015 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:56:44.029 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:56:44.032 + [BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should serve multiport endpoints from pods [Conformance] + test/e2e/network/service.go:848 + STEP: creating service multi-endpoint-test in namespace services-8320 01/14/23 04:56:44.035 + STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8320 to expose endpoints map[] 01/14/23 04:56:44.044 + Jan 14 04:56:44.047: INFO: Failed go get Endpoints object: endpoints "multi-endpoint-test" not found + Jan 14 04:56:45.055: INFO: successfully validated that service multi-endpoint-test in namespace services-8320 exposes endpoints map[] + STEP: Creating pod pod1 in namespace services-8320 01/14/23 04:56:45.055 + Jan 14 04:56:45.064: INFO: Waiting up to 5m0s for pod "pod1" in namespace "services-8320" to be "running and ready" + Jan 14 04:56:45.067: INFO: Pod "pod1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.064174ms + Jan 14 04:56:45.067: INFO: The phase of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) + Jan 14 04:56:47.073: INFO: Pod "pod1": Phase="Running", Reason="", readiness=true. Elapsed: 2.008561029s + Jan 14 04:56:47.073: INFO: The phase of Pod pod1 is Running (Ready = true) + Jan 14 04:56:47.073: INFO: Pod "pod1" satisfied condition "running and ready" + STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8320 to expose endpoints map[pod1:[100]] 01/14/23 04:56:47.076 + Jan 14 04:56:47.087: INFO: successfully validated that service multi-endpoint-test in namespace services-8320 exposes endpoints map[pod1:[100]] + STEP: Creating pod pod2 in namespace services-8320 01/14/23 04:56:47.087 + Jan 14 04:56:47.097: INFO: Waiting up to 5m0s for pod "pod2" in namespace "services-8320" to be "running and ready" + Jan 14 04:56:47.100: INFO: Pod "pod2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.147379ms + Jan 14 04:56:47.100: INFO: The phase of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) + Jan 14 04:56:49.104: INFO: Pod "pod2": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.006751142s + Jan 14 04:56:49.104: INFO: The phase of Pod pod2 is Running (Ready = true) + Jan 14 04:56:49.104: INFO: Pod "pod2" satisfied condition "running and ready" + STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8320 to expose endpoints map[pod1:[100] pod2:[101]] 01/14/23 04:56:49.107 + Jan 14 04:56:49.120: INFO: successfully validated that service multi-endpoint-test in namespace services-8320 exposes endpoints map[pod1:[100] pod2:[101]] + STEP: Checking if the Service forwards traffic to pods 01/14/23 04:56:49.12 + Jan 14 04:56:49.120: INFO: Creating new exec pod + Jan 14 04:56:49.128: INFO: Waiting up to 5m0s for pod "execpodtvq49" in namespace "services-8320" to be "running" + Jan 14 04:56:49.131: INFO: Pod "execpodtvq49": Phase="Pending", Reason="", readiness=false. Elapsed: 3.075452ms + Jan 14 04:56:51.136: INFO: Pod "execpodtvq49": Phase="Running", Reason="", readiness=true. Elapsed: 2.008268531s + Jan 14 04:56:51.136: INFO: Pod "execpodtvq49" satisfied condition "running" + Jan 14 04:56:52.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-8320 exec execpodtvq49 -- /bin/sh -x -c nc -v -z -w 2 multi-endpoint-test 80' + Jan 14 04:56:52.253: INFO: stderr: "+ nc -v -z -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" + Jan 14 04:56:52.253: INFO: stdout: "" + Jan 14 04:56:52.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-8320 exec execpodtvq49 -- /bin/sh -x -c nc -v -z -w 2 10.55.252.243 80' + Jan 14 04:56:52.365: INFO: stderr: "+ nc -v -z -w 2 10.55.252.243 80\nConnection to 10.55.252.243 80 port [tcp/http] succeeded!\n" + Jan 14 04:56:52.366: INFO: stdout: "" + Jan 14 04:56:52.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-8320 exec execpodtvq49 -- /bin/sh -x -c nc -v -z -w 2 multi-endpoint-test 81' + Jan 14 04:56:52.477: INFO: stderr: "+ nc -v -z -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" + Jan 14 04:56:52.477: INFO: stdout: "" + Jan 14 04:56:52.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-8320 exec execpodtvq49 -- /bin/sh -x -c nc -v -z -w 2 10.55.252.243 81' + Jan 14 04:56:52.584: INFO: stderr: "+ nc -v -z -w 2 10.55.252.243 81\nConnection to 10.55.252.243 81 port [tcp/*] succeeded!\n" + Jan 14 04:56:52.584: INFO: stdout: "" + STEP: Deleting pod pod1 in namespace services-8320 01/14/23 04:56:52.584 + STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8320 to expose endpoints map[pod2:[101]] 01/14/23 04:56:52.599 + Jan 14 04:56:52.610: INFO: successfully validated that service multi-endpoint-test in namespace services-8320 exposes endpoints map[pod2:[101]] + STEP: Deleting pod pod2 in namespace services-8320 01/14/23 04:56:52.61 + STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8320 to expose endpoints map[] 01/14/23 04:56:52.627 + Jan 14 04:56:52.635: INFO: successfully validated that service multi-endpoint-test in namespace services-8320 exposes endpoints map[] + [AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 + Jan 14 04:56:52.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] Services + 
dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 + STEP: Destroying namespace "services-8320" for this suite. 01/14/23 04:56:52.658 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + binary data should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:175 +[BeforeEach] [sig-storage] ConfigMap + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:56:52.668 +Jan 14 04:56:52.668: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename configmap 01/14/23 04:56:52.669 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:56:52.699 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:56:52.702 +[BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:31 +[It] binary data should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:175 +STEP: Creating configMap with name configmap-test-upd-91700493-b659-48b4-970d-b486a2b0faf2 01/14/23 04:56:52.709 +STEP: Creating the pod 01/14/23 04:56:52.715 +Jan 14 04:56:52.725: INFO: Waiting up to 5m0s for pod "pod-configmaps-bc3672a8-0569-4298-b58f-61b55ac3eb6f" in namespace "configmap-9112" to be "running" +Jan 14 04:56:52.728: INFO: Pod "pod-configmaps-bc3672a8-0569-4298-b58f-61b55ac3eb6f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.174811ms +Jan 14 04:56:54.733: INFO: Pod "pod-configmaps-bc3672a8-0569-4298-b58f-61b55ac3eb6f": Phase="Running", Reason="", readiness=false. Elapsed: 2.007718356s +Jan 14 04:56:54.733: INFO: Pod "pod-configmaps-bc3672a8-0569-4298-b58f-61b55ac3eb6f" satisfied condition "running" +STEP: Waiting for pod with text data 01/14/23 04:56:54.733 +STEP: Waiting for pod with binary data 01/14/23 04:56:54.741 +[AfterEach] [sig-storage] ConfigMap + test/e2e/framework/node/init/init.go:32 +Jan 14 04:56:54.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] ConfigMap + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] ConfigMap + tear down framework | framework.go:193 +STEP: Destroying namespace "configmap-9112" for this suite. 
01/14/23 04:56:54.755 +------------------------------ +• [2.095 seconds] +[sig-storage] ConfigMap +test/e2e/common/storage/framework.go:23 + binary data should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:175 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] ConfigMap + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:56:52.668 + Jan 14 04:56:52.668: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename configmap 01/14/23 04:56:52.669 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:56:52.699 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:56:52.702 + [BeforeEach] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:31 + [It] binary data should be reflected in volume [NodeConformance] [Conformance] + test/e2e/common/storage/configmap_volume.go:175 + STEP: Creating configMap with name configmap-test-upd-91700493-b659-48b4-970d-b486a2b0faf2 01/14/23 04:56:52.709 + STEP: Creating the pod 01/14/23 04:56:52.715 + Jan 14 04:56:52.725: INFO: Waiting up to 5m0s for pod "pod-configmaps-bc3672a8-0569-4298-b58f-61b55ac3eb6f" in namespace "configmap-9112" to be "running" + Jan 14 04:56:52.728: INFO: Pod "pod-configmaps-bc3672a8-0569-4298-b58f-61b55ac3eb6f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.174811ms + Jan 14 04:56:54.733: INFO: Pod "pod-configmaps-bc3672a8-0569-4298-b58f-61b55ac3eb6f": Phase="Running", Reason="", readiness=false. Elapsed: 2.007718356s + Jan 14 04:56:54.733: INFO: Pod "pod-configmaps-bc3672a8-0569-4298-b58f-61b55ac3eb6f" satisfied condition "running" + STEP: Waiting for pod with text data 01/14/23 04:56:54.733 + STEP: Waiting for pod with binary data 01/14/23 04:56:54.741 + [AfterEach] [sig-storage] ConfigMap + test/e2e/framework/node/init/init.go:32 + Jan 14 04:56:54.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] ConfigMap + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] ConfigMap + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] ConfigMap + tear down framework | framework.go:193 + STEP: Destroying namespace "configmap-9112" for this suite. 
01/14/23 04:56:54.755 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] EndpointSlice + should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] + test/e2e/network/endpointslice.go:102 +[BeforeEach] [sig-network] EndpointSlice + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:56:54.764 +Jan 14 04:56:54.764: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename endpointslice 01/14/23 04:56:54.765 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:56:54.781 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:56:54.784 +[BeforeEach] [sig-network] EndpointSlice + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] EndpointSlice + test/e2e/network/endpointslice.go:52 +[It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] + test/e2e/network/endpointslice.go:102 +[AfterEach] [sig-network] EndpointSlice + test/e2e/framework/node/init/init.go:32 +Jan 14 04:56:56.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] EndpointSlice + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] EndpointSlice + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] EndpointSlice + tear down framework | framework.go:193 +STEP: Destroying namespace "endpointslice-1040" for this suite. 01/14/23 04:56:56.832 +------------------------------ +• [2.074 seconds] +[sig-network] EndpointSlice +test/e2e/network/common/framework.go:23 + should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] + test/e2e/network/endpointslice.go:102 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] EndpointSlice + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:56:54.764 + Jan 14 04:56:54.764: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename endpointslice 01/14/23 04:56:54.765 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:56:54.781 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:56:54.784 + [BeforeEach] [sig-network] EndpointSlice + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] EndpointSlice + test/e2e/network/endpointslice.go:52 + [It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] + test/e2e/network/endpointslice.go:102 + [AfterEach] [sig-network] EndpointSlice + test/e2e/framework/node/init/init.go:32 + Jan 14 04:56:56.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] EndpointSlice + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] EndpointSlice + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] EndpointSlice + tear down framework | framework.go:193 + STEP: Destroying namespace "endpointslice-1040" for this suite. 
01/14/23 04:56:56.832 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSS +------------------------------ +[sig-node] NoExecuteTaintManager Multiple Pods [Serial] + evicts pods with minTolerationSeconds [Disruptive] [Conformance] + test/e2e/node/taints.go:455 +[BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:56:56.838 +Jan 14 04:56:56.839: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename taint-multiple-pods 01/14/23 04:56:56.839 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:56:56.854 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:56:56.857 +[BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + test/e2e/node/taints.go:383 +Jan 14 04:56:56.859: INFO: Waiting up to 1m0s for all nodes to be ready +Jan 14 04:57:56.898: INFO: Waiting for terminating namespaces to be deleted... +[It] evicts pods with minTolerationSeconds [Disruptive] [Conformance] + test/e2e/node/taints.go:455 +Jan 14 04:57:56.901: INFO: Starting informer... +STEP: Starting pods... 01/14/23 04:57:56.901 +Jan 14 04:57:57.121: INFO: Pod1 is running on 10.0.1.106. Tainting Node +Jan 14 04:57:57.332: INFO: Waiting up to 5m0s for pod "taint-eviction-b1" in namespace "taint-multiple-pods-396" to be "running" +Jan 14 04:57:57.335: INFO: Pod "taint-eviction-b1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.178529ms +Jan 14 04:57:59.340: INFO: Pod "taint-eviction-b1": Phase="Running", Reason="", readiness=true. Elapsed: 2.008273765s +Jan 14 04:57:59.340: INFO: Pod "taint-eviction-b1" satisfied condition "running" +Jan 14 04:57:59.340: INFO: Waiting up to 5m0s for pod "taint-eviction-b2" in namespace "taint-multiple-pods-396" to be "running" +Jan 14 04:57:59.343: INFO: Pod "taint-eviction-b2": Phase="Running", Reason="", readiness=true. Elapsed: 3.196426ms +Jan 14 04:57:59.343: INFO: Pod "taint-eviction-b2" satisfied condition "running" +Jan 14 04:57:59.343: INFO: Pod2 is running on 10.0.1.106. Tainting Node +STEP: Trying to apply a taint on the Node 01/14/23 04:57:59.343 +STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute 01/14/23 04:57:59.355 +STEP: Waiting for Pod1 and Pod2 to be deleted 01/14/23 04:57:59.359 +Jan 14 04:58:04.698: INFO: Noticed Pod "taint-eviction-b1" gets evicted. +Jan 14 04:58:24.714: INFO: Noticed Pod "taint-eviction-b2" gets evicted. +STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute 01/14/23 04:58:24.725 +[AfterEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + test/e2e/framework/node/init/init.go:32 +Jan 14 04:58:24.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "taint-multiple-pods-396" for this suite. 
01/14/23 04:58:24.739 +------------------------------ +• [SLOW TEST] [87.906 seconds] +[sig-node] NoExecuteTaintManager Multiple Pods [Serial] +test/e2e/node/framework.go:23 + evicts pods with minTolerationSeconds [Disruptive] [Conformance] + test/e2e/node/taints.go:455 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:56:56.838 + Jan 14 04:56:56.839: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename taint-multiple-pods 01/14/23 04:56:56.839 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:56:56.854 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:56:56.857 + [BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + test/e2e/node/taints.go:383 + Jan 14 04:56:56.859: INFO: Waiting up to 1m0s for all nodes to be ready + Jan 14 04:57:56.898: INFO: Waiting for terminating namespaces to be deleted... + [It] evicts pods with minTolerationSeconds [Disruptive] [Conformance] + test/e2e/node/taints.go:455 + Jan 14 04:57:56.901: INFO: Starting informer... + STEP: Starting pods... 01/14/23 04:57:56.901 + Jan 14 04:57:57.121: INFO: Pod1 is running on 10.0.1.106. Tainting Node + Jan 14 04:57:57.332: INFO: Waiting up to 5m0s for pod "taint-eviction-b1" in namespace "taint-multiple-pods-396" to be "running" + Jan 14 04:57:57.335: INFO: Pod "taint-eviction-b1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.178529ms + Jan 14 04:57:59.340: INFO: Pod "taint-eviction-b1": Phase="Running", Reason="", readiness=true. Elapsed: 2.008273765s + Jan 14 04:57:59.340: INFO: Pod "taint-eviction-b1" satisfied condition "running" + Jan 14 04:57:59.340: INFO: Waiting up to 5m0s for pod "taint-eviction-b2" in namespace "taint-multiple-pods-396" to be "running" + Jan 14 04:57:59.343: INFO: Pod "taint-eviction-b2": Phase="Running", Reason="", readiness=true. Elapsed: 3.196426ms + Jan 14 04:57:59.343: INFO: Pod "taint-eviction-b2" satisfied condition "running" + Jan 14 04:57:59.343: INFO: Pod2 is running on 10.0.1.106. Tainting Node + STEP: Trying to apply a taint on the Node 01/14/23 04:57:59.343 + STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute 01/14/23 04:57:59.355 + STEP: Waiting for Pod1 and Pod2 to be deleted 01/14/23 04:57:59.359 + Jan 14 04:58:04.698: INFO: Noticed Pod "taint-eviction-b1" gets evicted. + Jan 14 04:58:24.714: INFO: Noticed Pod "taint-eviction-b2" gets evicted. + STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute 01/14/23 04:58:24.725 + [AfterEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + test/e2e/framework/node/init/init.go:32 + Jan 14 04:58:24.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "taint-multiple-pods-396" for this suite. 
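+In the run above the NoExecute taint lands at 04:57:59; taint-eviction-b1 is gone about 5 seconds later and taint-eviction-b2 about 25 seconds later, the staggering expected when the two pods carry different tolerationSeconds for the taint. A minimal sketch of the same mechanism, assuming the node name from this run and a hypothetical 5-second toleration:
+
+# apply the NoExecute taint the test uses (a trailing "-" on the key removes it again)
+kubectl taint nodes 10.0.1.106 kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
+
+# a pod tolerating the taint for 5s should be evicted roughly 5s after the taint appears
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: taint-demo
+spec:
+  containers:
+  - name: pause
+    image: registry.k8s.io/pause:3.9
+  tolerations:
+  - key: kubernetes.io/e2e-evict-taint-key
+    operator: Equal
+    value: evictTaintVal
+    effect: NoExecute
+    tolerationSeconds: 5
+EOF
+
+kubectl taint nodes 10.0.1.106 kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute-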
01/14/23 04:58:24.739 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates resource limits of pods that are allowed to run [Conformance] + test/e2e/scheduling/predicates.go:331 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:58:24.746 +Jan 14 04:58:24.746: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename sched-pred 01/14/23 04:58:24.747 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:58:24.76 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:58:24.762 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:97 +Jan 14 04:58:24.765: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Jan 14 04:58:24.772: INFO: Waiting for terminating namespaces to be deleted... +Jan 14 04:58:24.775: INFO: +Logging pods the apiserver thinks is on node 10.0.1.106 before test +Jan 14 04:58:24.784: INFO: csi-cbs-node-5wf2s from kube-system started at 2023-01-13 08:11:20 +0000 UTC (2 container statuses recorded) +Jan 14 04:58:24.784: INFO: Container cbs-csi ready: true, restart count 1 +Jan 14 04:58:24.784: INFO: Container driver-registrar ready: true, restart count 0 +Jan 14 04:58:24.784: INFO: ip-masq-agent-rx9k6 from kube-system started at 2023-01-13 08:11:20 +0000 UTC (1 container statuses recorded) +Jan 14 04:58:24.784: INFO: Container ip-masq-agent ready: true, restart count 0 +Jan 14 04:58:24.784: INFO: kube-proxy-s6xxg from kube-system started at 2023-01-13 08:11:20 +0000 UTC (1 container statuses recorded) +Jan 14 04:58:24.784: INFO: Container kube-proxy ready: true, restart count 0 +Jan 14 04:58:24.784: INFO: tke-bridge-agent-frbcm from kube-system started at 2023-01-13 08:11:20 +0000 UTC (1 container statuses recorded) +Jan 14 04:58:24.784: INFO: Container tke-bridge-agent ready: true, restart count 1 +Jan 14 04:58:24.784: INFO: tke-cni-agent-nv7pn from kube-system started at 2023-01-13 08:11:20 +0000 UTC (1 container statuses recorded) +Jan 14 04:58:24.784: INFO: Container tke-cni-agent ready: true, restart count 0 +Jan 14 04:58:24.784: INFO: tke-monitor-agent-xhdhg from kube-system started at 2023-01-13 08:11:20 +0000 UTC (1 container statuses recorded) +Jan 14 04:58:24.784: INFO: Container tke-monitor-agent ready: true, restart count 0 +Jan 14 04:58:24.784: INFO: sonobuoy-systemd-logs-daemon-set-0fc25bd2d6604eac-jkjnf from sonobuoy started at 2023-01-14 03:34:54 +0000 UTC (2 container statuses recorded) +Jan 14 04:58:24.784: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jan 14 04:58:24.784: INFO: Container systemd-logs ready: true, restart count 0 +Jan 14 04:58:24.784: INFO: +Logging pods the apiserver thinks is on node 10.0.1.212 before test +Jan 14 04:58:24.792: INFO: kubernetes-proxy-544fb566b4-zpvz8 from default started at 2023-01-14 03:20:34 +0000 UTC (1 container statuses recorded) +Jan 14 04:58:24.792: INFO: Container kubernetes-proxy ready: false, restart count 24 +Jan 14 04:58:24.792: INFO: csi-cbs-node-ddpcx from kube-system started at 2023-01-13 08:11:16 +0000 UTC (2 container statuses recorded) +Jan 14 04:58:24.792: INFO: Container cbs-csi ready: true, restart count 0 
+Jan 14 04:58:24.792: INFO: Container driver-registrar ready: true, restart count 0 +Jan 14 04:58:24.792: INFO: ip-masq-agent-8wxxs from kube-system started at 2023-01-13 08:11:16 +0000 UTC (1 container statuses recorded) +Jan 14 04:58:24.792: INFO: Container ip-masq-agent ready: true, restart count 0 +Jan 14 04:58:24.792: INFO: kube-proxy-npt42 from kube-system started at 2023-01-13 08:11:16 +0000 UTC (1 container statuses recorded) +Jan 14 04:58:24.792: INFO: Container kube-proxy ready: true, restart count 0 +Jan 14 04:58:24.792: INFO: tke-bridge-agent-4rffd from kube-system started at 2023-01-13 08:11:16 +0000 UTC (1 container statuses recorded) +Jan 14 04:58:24.792: INFO: Container tke-bridge-agent ready: true, restart count 1 +Jan 14 04:58:24.792: INFO: tke-cni-agent-xmggs from kube-system started at 2023-01-13 08:11:16 +0000 UTC (1 container statuses recorded) +Jan 14 04:58:24.792: INFO: Container tke-cni-agent ready: true, restart count 0 +Jan 14 04:58:24.792: INFO: tke-monitor-agent-6gtt6 from kube-system started at 2023-01-13 08:11:16 +0000 UTC (1 container statuses recorded) +Jan 14 04:58:24.792: INFO: Container tke-monitor-agent ready: true, restart count 0 +Jan 14 04:58:24.792: INFO: sonobuoy-e2e-job-1b1a46fb40e34267 from sonobuoy started at 2023-01-14 03:34:54 +0000 UTC (2 container statuses recorded) +Jan 14 04:58:24.792: INFO: Container e2e ready: true, restart count 0 +Jan 14 04:58:24.792: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jan 14 04:58:24.792: INFO: sonobuoy-systemd-logs-daemon-set-0fc25bd2d6604eac-lgxk7 from sonobuoy started at 2023-01-14 03:34:54 +0000 UTC (2 container statuses recorded) +Jan 14 04:58:24.792: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jan 14 04:58:24.792: INFO: Container systemd-logs ready: true, restart count 0 +Jan 14 04:58:24.792: INFO: ss2-2 from statefulset-8862 started at 2023-01-14 02:33:30 +0000 UTC (1 container statuses recorded) +Jan 14 04:58:24.792: INFO: Container webserver ready: true, restart count 0 +Jan 14 04:58:24.792: INFO: +Logging pods the apiserver thinks is on node 10.0.1.99 before test +Jan 14 04:58:24.801: INFO: kubernetes-proxy-544fb566b4-782fh from default started at 2023-01-14 04:31:56 +0000 UTC (1 container statuses recorded) +Jan 14 04:58:24.801: INFO: Container kubernetes-proxy ready: false, restart count 10 +Jan 14 04:58:24.801: INFO: csi-cbs-node-q4l9b from kube-system started at 2023-01-13 08:11:43 +0000 UTC (2 container statuses recorded) +Jan 14 04:58:24.801: INFO: Container cbs-csi ready: true, restart count 0 +Jan 14 04:58:24.801: INFO: Container driver-registrar ready: true, restart count 0 +Jan 14 04:58:24.801: INFO: ip-masq-agent-kmrrk from kube-system started at 2023-01-13 08:11:43 +0000 UTC (1 container statuses recorded) +Jan 14 04:58:24.801: INFO: Container ip-masq-agent ready: true, restart count 0 +Jan 14 04:58:24.801: INFO: kube-proxy-g4qjh from kube-system started at 2023-01-13 08:11:43 +0000 UTC (1 container statuses recorded) +Jan 14 04:58:24.801: INFO: Container kube-proxy ready: true, restart count 0 +Jan 14 04:58:24.801: INFO: tke-bridge-agent-hzv6c from kube-system started at 2023-01-13 08:11:43 +0000 UTC (1 container statuses recorded) +Jan 14 04:58:24.801: INFO: Container tke-bridge-agent ready: true, restart count 1 +Jan 14 04:58:24.801: INFO: tke-cni-agent-7mk9b from kube-system started at 2023-01-13 08:11:43 +0000 UTC (1 container statuses recorded) +Jan 14 04:58:24.801: INFO: Container tke-cni-agent ready: true, restart count 0 +Jan 14 04:58:24.801: 
INFO: tke-monitor-agent-g27mp from kube-system started at 2023-01-13 08:11:43 +0000 UTC (1 container statuses recorded) +Jan 14 04:58:24.801: INFO: Container tke-monitor-agent ready: true, restart count 0 +Jan 14 04:58:24.801: INFO: sonobuoy from sonobuoy started at 2023-01-14 03:34:53 +0000 UTC (1 container statuses recorded) +Jan 14 04:58:24.801: INFO: Container kube-sonobuoy ready: true, restart count 0 +Jan 14 04:58:24.801: INFO: sonobuoy-systemd-logs-daemon-set-0fc25bd2d6604eac-nwh62 from sonobuoy started at 2023-01-14 03:34:54 +0000 UTC (2 container statuses recorded) +Jan 14 04:58:24.801: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jan 14 04:58:24.801: INFO: Container systemd-logs ready: true, restart count 0 +Jan 14 04:58:24.801: INFO: ss2-0 from statefulset-8862 started at 2023-01-14 02:33:10 +0000 UTC (1 container statuses recorded) +Jan 14 04:58:24.801: INFO: Container webserver ready: true, restart count 0 +Jan 14 04:58:24.801: INFO: ss2-1 from statefulset-8862 started at 2023-01-14 04:57:59 +0000 UTC (1 container statuses recorded) +Jan 14 04:58:24.801: INFO: Container webserver ready: true, restart count 0 +[It] validates resource limits of pods that are allowed to run [Conformance] + test/e2e/scheduling/predicates.go:331 +STEP: verifying the node has the label node 10.0.1.106 01/14/23 04:58:24.824 +STEP: verifying the node has the label node 10.0.1.212 01/14/23 04:58:24.836 +STEP: verifying the node has the label node 10.0.1.99 01/14/23 04:58:24.848 +Jan 14 04:58:24.871: INFO: Pod kubernetes-proxy-544fb566b4-782fh requesting resource cpu=100m on Node 10.0.1.99 +Jan 14 04:58:24.871: INFO: Pod kubernetes-proxy-544fb566b4-zpvz8 requesting resource cpu=100m on Node 10.0.1.212 +Jan 14 04:58:24.871: INFO: Pod csi-cbs-node-5wf2s requesting resource cpu=0m on Node 10.0.1.106 +Jan 14 04:58:24.871: INFO: Pod csi-cbs-node-ddpcx requesting resource cpu=0m on Node 10.0.1.212 +Jan 14 04:58:24.871: INFO: Pod csi-cbs-node-q4l9b requesting resource cpu=0m on Node 10.0.1.99 +Jan 14 04:58:24.871: INFO: Pod ip-masq-agent-8wxxs requesting resource cpu=0m on Node 10.0.1.212 +Jan 14 04:58:24.871: INFO: Pod ip-masq-agent-kmrrk requesting resource cpu=0m on Node 10.0.1.99 +Jan 14 04:58:24.871: INFO: Pod ip-masq-agent-rx9k6 requesting resource cpu=0m on Node 10.0.1.106 +Jan 14 04:58:24.871: INFO: Pod kube-proxy-g4qjh requesting resource cpu=0m on Node 10.0.1.99 +Jan 14 04:58:24.871: INFO: Pod kube-proxy-npt42 requesting resource cpu=0m on Node 10.0.1.212 +Jan 14 04:58:24.871: INFO: Pod kube-proxy-s6xxg requesting resource cpu=0m on Node 10.0.1.106 +Jan 14 04:58:24.871: INFO: Pod tke-bridge-agent-4rffd requesting resource cpu=0m on Node 10.0.1.212 +Jan 14 04:58:24.871: INFO: Pod tke-bridge-agent-frbcm requesting resource cpu=0m on Node 10.0.1.106 +Jan 14 04:58:24.871: INFO: Pod tke-bridge-agent-hzv6c requesting resource cpu=0m on Node 10.0.1.99 +Jan 14 04:58:24.871: INFO: Pod tke-cni-agent-7mk9b requesting resource cpu=0m on Node 10.0.1.99 +Jan 14 04:58:24.871: INFO: Pod tke-cni-agent-nv7pn requesting resource cpu=0m on Node 10.0.1.106 +Jan 14 04:58:24.871: INFO: Pod tke-cni-agent-xmggs requesting resource cpu=0m on Node 10.0.1.212 +Jan 14 04:58:24.871: INFO: Pod tke-monitor-agent-6gtt6 requesting resource cpu=10m on Node 10.0.1.212 +Jan 14 04:58:24.871: INFO: Pod tke-monitor-agent-g27mp requesting resource cpu=10m on Node 10.0.1.99 +Jan 14 04:58:24.871: INFO: Pod tke-monitor-agent-xhdhg requesting resource cpu=10m on Node 10.0.1.106 +Jan 14 04:58:24.871: INFO: Pod sonobuoy 
requesting resource cpu=0m on Node 10.0.1.99 +Jan 14 04:58:24.871: INFO: Pod sonobuoy-e2e-job-1b1a46fb40e34267 requesting resource cpu=0m on Node 10.0.1.212 +Jan 14 04:58:24.871: INFO: Pod sonobuoy-systemd-logs-daemon-set-0fc25bd2d6604eac-jkjnf requesting resource cpu=0m on Node 10.0.1.106 +Jan 14 04:58:24.871: INFO: Pod sonobuoy-systemd-logs-daemon-set-0fc25bd2d6604eac-lgxk7 requesting resource cpu=0m on Node 10.0.1.212 +Jan 14 04:58:24.871: INFO: Pod sonobuoy-systemd-logs-daemon-set-0fc25bd2d6604eac-nwh62 requesting resource cpu=0m on Node 10.0.1.99 +Jan 14 04:58:24.871: INFO: Pod ss2-0 requesting resource cpu=0m on Node 10.0.1.99 +Jan 14 04:58:24.871: INFO: Pod ss2-1 requesting resource cpu=0m on Node 10.0.1.99 +Jan 14 04:58:24.871: INFO: Pod ss2-2 requesting resource cpu=0m on Node 10.0.1.212 +STEP: Starting Pods to consume most of the cluster CPU. 01/14/23 04:58:24.871 +Jan 14 04:58:24.872: INFO: Creating a pod which consumes cpu=5453m on Node 10.0.1.106 +Jan 14 04:58:24.882: INFO: Creating a pod which consumes cpu=5383m on Node 10.0.1.212 +Jan 14 04:58:24.891: INFO: Creating a pod which consumes cpu=5383m on Node 10.0.1.99 +Jan 14 04:58:24.898: INFO: Waiting up to 5m0s for pod "filler-pod-ece7aaba-9bc0-456e-8dc1-41108a6ecc12" in namespace "sched-pred-5328" to be "running" +Jan 14 04:58:24.903: INFO: Pod "filler-pod-ece7aaba-9bc0-456e-8dc1-41108a6ecc12": Phase="Pending", Reason="", readiness=false. Elapsed: 5.183637ms +Jan 14 04:58:26.907: INFO: Pod "filler-pod-ece7aaba-9bc0-456e-8dc1-41108a6ecc12": Phase="Running", Reason="", readiness=true. Elapsed: 2.009543026s +Jan 14 04:58:26.908: INFO: Pod "filler-pod-ece7aaba-9bc0-456e-8dc1-41108a6ecc12" satisfied condition "running" +Jan 14 04:58:26.908: INFO: Waiting up to 5m0s for pod "filler-pod-f143daf2-96cf-42bc-89b8-deed767fc124" in namespace "sched-pred-5328" to be "running" +Jan 14 04:58:26.911: INFO: Pod "filler-pod-f143daf2-96cf-42bc-89b8-deed767fc124": Phase="Running", Reason="", readiness=true. Elapsed: 3.165495ms +Jan 14 04:58:26.911: INFO: Pod "filler-pod-f143daf2-96cf-42bc-89b8-deed767fc124" satisfied condition "running" +Jan 14 04:58:26.911: INFO: Waiting up to 5m0s for pod "filler-pod-19d8fa59-179a-4dde-8da1-975a5b5d3b4d" in namespace "sched-pred-5328" to be "running" +Jan 14 04:58:26.914: INFO: Pod "filler-pod-19d8fa59-179a-4dde-8da1-975a5b5d3b4d": Phase="Running", Reason="", readiness=true. Elapsed: 2.994945ms +Jan 14 04:58:26.914: INFO: Pod "filler-pod-19d8fa59-179a-4dde-8da1-975a5b5d3b4d" satisfied condition "running" +STEP: Creating another pod that requires unavailable amount of CPU. 
01/14/23 04:58:26.914 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-19d8fa59-179a-4dde-8da1-975a5b5d3b4d.173a1418044b77c5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5328/filler-pod-19d8fa59-179a-4dde-8da1-975a5b5d3b4d to 10.0.1.99] 01/14/23 04:58:26.918 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-19d8fa59-179a-4dde-8da1-975a5b5d3b4d.173a14181feb7d6f], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 01/14/23 04:58:26.918 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-19d8fa59-179a-4dde-8da1-975a5b5d3b4d.173a1418211473ba], Reason = [Created], Message = [Created container filler-pod-19d8fa59-179a-4dde-8da1-975a5b5d3b4d] 01/14/23 04:58:26.918 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-19d8fa59-179a-4dde-8da1-975a5b5d3b4d.173a1418253aae88], Reason = [Started], Message = [Started container filler-pod-19d8fa59-179a-4dde-8da1-975a5b5d3b4d] 01/14/23 04:58:26.918 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-ece7aaba-9bc0-456e-8dc1-41108a6ecc12.173a1418034ebf79], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5328/filler-pod-ece7aaba-9bc0-456e-8dc1-41108a6ecc12 to 10.0.1.106] 01/14/23 04:58:26.918 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-ece7aaba-9bc0-456e-8dc1-41108a6ecc12.173a14181ec9a063], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 01/14/23 04:58:26.918 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-ece7aaba-9bc0-456e-8dc1-41108a6ecc12.173a14181fda13bd], Reason = [Created], Message = [Created container filler-pod-ece7aaba-9bc0-456e-8dc1-41108a6ecc12] 01/14/23 04:58:26.918 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-ece7aaba-9bc0-456e-8dc1-41108a6ecc12.173a141823b8bee6], Reason = [Started], Message = [Started container filler-pod-ece7aaba-9bc0-456e-8dc1-41108a6ecc12] 01/14/23 04:58:26.918 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-f143daf2-96cf-42bc-89b8-deed767fc124.173a141803cabcbe], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5328/filler-pod-f143daf2-96cf-42bc-89b8-deed767fc124 to 10.0.1.212] 01/14/23 04:58:26.918 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-f143daf2-96cf-42bc-89b8-deed767fc124.173a1418202958e9], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 01/14/23 04:58:26.918 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-f143daf2-96cf-42bc-89b8-deed767fc124.173a14182148bab4], Reason = [Created], Message = [Created container filler-pod-f143daf2-96cf-42bc-89b8-deed767fc124] 01/14/23 04:58:26.918 +STEP: Considering event: +Type = [Normal], Name = [filler-pod-f143daf2-96cf-42bc-89b8-deed767fc124.173a141825eff6e7], Reason = [Started], Message = [Started container filler-pod-f143daf2-96cf-42bc-89b8-deed767fc124] 01/14/23 04:58:26.918 +STEP: Considering event: +Type = [Warning], Name = [additional-pod.173a14187ccba435], Reason = [FailedScheduling], Message = [0/6 nodes are available: 3 Insufficient cpu, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 3 No preemption victims found for incoming pod, 3 Preemption is not helpful for scheduling..] 
01/14/23 04:58:26.931 +STEP: removing the label node off the node 10.0.1.99 01/14/23 04:58:27.931 +STEP: verifying the node doesn't have the label node 01/14/23 04:58:27.944 +STEP: removing the label node off the node 10.0.1.106 01/14/23 04:58:27.95 +STEP: verifying the node doesn't have the label node 01/14/23 04:58:27.964 +STEP: removing the label node off the node 10.0.1.212 01/14/23 04:58:27.967 +STEP: verifying the node doesn't have the label node 01/14/23 04:58:27.979 +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/node/init/init.go:32 +Jan 14 04:58:27.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:88 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "sched-pred-5328" for this suite. 01/14/23 04:58:27.987 +------------------------------ +• [3.250 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +test/e2e/scheduling/framework.go:40 + validates resource limits of pods that are allowed to run [Conformance] + test/e2e/scheduling/predicates.go:331 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:58:24.746 + Jan 14 04:58:24.746: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename sched-pred 01/14/23 04:58:24.747 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:58:24.76 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:58:24.762 + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:97 + Jan 14 04:58:24.765: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready + Jan 14 04:58:24.772: INFO: Waiting for terminating namespaces to be deleted... 
+ Jan 14 04:58:24.775: INFO: + Logging pods the apiserver thinks is on node 10.0.1.106 before test + Jan 14 04:58:24.784: INFO: csi-cbs-node-5wf2s from kube-system started at 2023-01-13 08:11:20 +0000 UTC (2 container statuses recorded) + Jan 14 04:58:24.784: INFO: Container cbs-csi ready: true, restart count 1 + Jan 14 04:58:24.784: INFO: Container driver-registrar ready: true, restart count 0 + Jan 14 04:58:24.784: INFO: ip-masq-agent-rx9k6 from kube-system started at 2023-01-13 08:11:20 +0000 UTC (1 container statuses recorded) + Jan 14 04:58:24.784: INFO: Container ip-masq-agent ready: true, restart count 0 + Jan 14 04:58:24.784: INFO: kube-proxy-s6xxg from kube-system started at 2023-01-13 08:11:20 +0000 UTC (1 container statuses recorded) + Jan 14 04:58:24.784: INFO: Container kube-proxy ready: true, restart count 0 + Jan 14 04:58:24.784: INFO: tke-bridge-agent-frbcm from kube-system started at 2023-01-13 08:11:20 +0000 UTC (1 container statuses recorded) + Jan 14 04:58:24.784: INFO: Container tke-bridge-agent ready: true, restart count 1 + Jan 14 04:58:24.784: INFO: tke-cni-agent-nv7pn from kube-system started at 2023-01-13 08:11:20 +0000 UTC (1 container statuses recorded) + Jan 14 04:58:24.784: INFO: Container tke-cni-agent ready: true, restart count 0 + Jan 14 04:58:24.784: INFO: tke-monitor-agent-xhdhg from kube-system started at 2023-01-13 08:11:20 +0000 UTC (1 container statuses recorded) + Jan 14 04:58:24.784: INFO: Container tke-monitor-agent ready: true, restart count 0 + Jan 14 04:58:24.784: INFO: sonobuoy-systemd-logs-daemon-set-0fc25bd2d6604eac-jkjnf from sonobuoy started at 2023-01-14 03:34:54 +0000 UTC (2 container statuses recorded) + Jan 14 04:58:24.784: INFO: Container sonobuoy-worker ready: true, restart count 0 + Jan 14 04:58:24.784: INFO: Container systemd-logs ready: true, restart count 0 + Jan 14 04:58:24.784: INFO: + Logging pods the apiserver thinks is on node 10.0.1.212 before test + Jan 14 04:58:24.792: INFO: kubernetes-proxy-544fb566b4-zpvz8 from default started at 2023-01-14 03:20:34 +0000 UTC (1 container statuses recorded) + Jan 14 04:58:24.792: INFO: Container kubernetes-proxy ready: false, restart count 24 + Jan 14 04:58:24.792: INFO: csi-cbs-node-ddpcx from kube-system started at 2023-01-13 08:11:16 +0000 UTC (2 container statuses recorded) + Jan 14 04:58:24.792: INFO: Container cbs-csi ready: true, restart count 0 + Jan 14 04:58:24.792: INFO: Container driver-registrar ready: true, restart count 0 + Jan 14 04:58:24.792: INFO: ip-masq-agent-8wxxs from kube-system started at 2023-01-13 08:11:16 +0000 UTC (1 container statuses recorded) + Jan 14 04:58:24.792: INFO: Container ip-masq-agent ready: true, restart count 0 + Jan 14 04:58:24.792: INFO: kube-proxy-npt42 from kube-system started at 2023-01-13 08:11:16 +0000 UTC (1 container statuses recorded) + Jan 14 04:58:24.792: INFO: Container kube-proxy ready: true, restart count 0 + Jan 14 04:58:24.792: INFO: tke-bridge-agent-4rffd from kube-system started at 2023-01-13 08:11:16 +0000 UTC (1 container statuses recorded) + Jan 14 04:58:24.792: INFO: Container tke-bridge-agent ready: true, restart count 1 + Jan 14 04:58:24.792: INFO: tke-cni-agent-xmggs from kube-system started at 2023-01-13 08:11:16 +0000 UTC (1 container statuses recorded) + Jan 14 04:58:24.792: INFO: Container tke-cni-agent ready: true, restart count 0 + Jan 14 04:58:24.792: INFO: tke-monitor-agent-6gtt6 from kube-system started at 2023-01-13 08:11:16 +0000 UTC (1 container statuses recorded) + Jan 14 04:58:24.792: INFO: Container 
tke-monitor-agent ready: true, restart count 0 + Jan 14 04:58:24.792: INFO: sonobuoy-e2e-job-1b1a46fb40e34267 from sonobuoy started at 2023-01-14 03:34:54 +0000 UTC (2 container statuses recorded) + Jan 14 04:58:24.792: INFO: Container e2e ready: true, restart count 0 + Jan 14 04:58:24.792: INFO: Container sonobuoy-worker ready: true, restart count 0 + Jan 14 04:58:24.792: INFO: sonobuoy-systemd-logs-daemon-set-0fc25bd2d6604eac-lgxk7 from sonobuoy started at 2023-01-14 03:34:54 +0000 UTC (2 container statuses recorded) + Jan 14 04:58:24.792: INFO: Container sonobuoy-worker ready: true, restart count 0 + Jan 14 04:58:24.792: INFO: Container systemd-logs ready: true, restart count 0 + Jan 14 04:58:24.792: INFO: ss2-2 from statefulset-8862 started at 2023-01-14 02:33:30 +0000 UTC (1 container statuses recorded) + Jan 14 04:58:24.792: INFO: Container webserver ready: true, restart count 0 + Jan 14 04:58:24.792: INFO: + Logging pods the apiserver thinks is on node 10.0.1.99 before test + Jan 14 04:58:24.801: INFO: kubernetes-proxy-544fb566b4-782fh from default started at 2023-01-14 04:31:56 +0000 UTC (1 container statuses recorded) + Jan 14 04:58:24.801: INFO: Container kubernetes-proxy ready: false, restart count 10 + Jan 14 04:58:24.801: INFO: csi-cbs-node-q4l9b from kube-system started at 2023-01-13 08:11:43 +0000 UTC (2 container statuses recorded) + Jan 14 04:58:24.801: INFO: Container cbs-csi ready: true, restart count 0 + Jan 14 04:58:24.801: INFO: Container driver-registrar ready: true, restart count 0 + Jan 14 04:58:24.801: INFO: ip-masq-agent-kmrrk from kube-system started at 2023-01-13 08:11:43 +0000 UTC (1 container statuses recorded) + Jan 14 04:58:24.801: INFO: Container ip-masq-agent ready: true, restart count 0 + Jan 14 04:58:24.801: INFO: kube-proxy-g4qjh from kube-system started at 2023-01-13 08:11:43 +0000 UTC (1 container statuses recorded) + Jan 14 04:58:24.801: INFO: Container kube-proxy ready: true, restart count 0 + Jan 14 04:58:24.801: INFO: tke-bridge-agent-hzv6c from kube-system started at 2023-01-13 08:11:43 +0000 UTC (1 container statuses recorded) + Jan 14 04:58:24.801: INFO: Container tke-bridge-agent ready: true, restart count 1 + Jan 14 04:58:24.801: INFO: tke-cni-agent-7mk9b from kube-system started at 2023-01-13 08:11:43 +0000 UTC (1 container statuses recorded) + Jan 14 04:58:24.801: INFO: Container tke-cni-agent ready: true, restart count 0 + Jan 14 04:58:24.801: INFO: tke-monitor-agent-g27mp from kube-system started at 2023-01-13 08:11:43 +0000 UTC (1 container statuses recorded) + Jan 14 04:58:24.801: INFO: Container tke-monitor-agent ready: true, restart count 0 + Jan 14 04:58:24.801: INFO: sonobuoy from sonobuoy started at 2023-01-14 03:34:53 +0000 UTC (1 container statuses recorded) + Jan 14 04:58:24.801: INFO: Container kube-sonobuoy ready: true, restart count 0 + Jan 14 04:58:24.801: INFO: sonobuoy-systemd-logs-daemon-set-0fc25bd2d6604eac-nwh62 from sonobuoy started at 2023-01-14 03:34:54 +0000 UTC (2 container statuses recorded) + Jan 14 04:58:24.801: INFO: Container sonobuoy-worker ready: true, restart count 0 + Jan 14 04:58:24.801: INFO: Container systemd-logs ready: true, restart count 0 + Jan 14 04:58:24.801: INFO: ss2-0 from statefulset-8862 started at 2023-01-14 02:33:10 +0000 UTC (1 container statuses recorded) + Jan 14 04:58:24.801: INFO: Container webserver ready: true, restart count 0 + Jan 14 04:58:24.801: INFO: ss2-1 from statefulset-8862 started at 2023-01-14 04:57:59 +0000 UTC (1 container statuses recorded) + Jan 14 04:58:24.801: INFO: 
Container webserver ready: true, restart count 0 + [It] validates resource limits of pods that are allowed to run [Conformance] + test/e2e/scheduling/predicates.go:331 + STEP: verifying the node has the label node 10.0.1.106 01/14/23 04:58:24.824 + STEP: verifying the node has the label node 10.0.1.212 01/14/23 04:58:24.836 + STEP: verifying the node has the label node 10.0.1.99 01/14/23 04:58:24.848 + Jan 14 04:58:24.871: INFO: Pod kubernetes-proxy-544fb566b4-782fh requesting resource cpu=100m on Node 10.0.1.99 + Jan 14 04:58:24.871: INFO: Pod kubernetes-proxy-544fb566b4-zpvz8 requesting resource cpu=100m on Node 10.0.1.212 + Jan 14 04:58:24.871: INFO: Pod csi-cbs-node-5wf2s requesting resource cpu=0m on Node 10.0.1.106 + Jan 14 04:58:24.871: INFO: Pod csi-cbs-node-ddpcx requesting resource cpu=0m on Node 10.0.1.212 + Jan 14 04:58:24.871: INFO: Pod csi-cbs-node-q4l9b requesting resource cpu=0m on Node 10.0.1.99 + Jan 14 04:58:24.871: INFO: Pod ip-masq-agent-8wxxs requesting resource cpu=0m on Node 10.0.1.212 + Jan 14 04:58:24.871: INFO: Pod ip-masq-agent-kmrrk requesting resource cpu=0m on Node 10.0.1.99 + Jan 14 04:58:24.871: INFO: Pod ip-masq-agent-rx9k6 requesting resource cpu=0m on Node 10.0.1.106 + Jan 14 04:58:24.871: INFO: Pod kube-proxy-g4qjh requesting resource cpu=0m on Node 10.0.1.99 + Jan 14 04:58:24.871: INFO: Pod kube-proxy-npt42 requesting resource cpu=0m on Node 10.0.1.212 + Jan 14 04:58:24.871: INFO: Pod kube-proxy-s6xxg requesting resource cpu=0m on Node 10.0.1.106 + Jan 14 04:58:24.871: INFO: Pod tke-bridge-agent-4rffd requesting resource cpu=0m on Node 10.0.1.212 + Jan 14 04:58:24.871: INFO: Pod tke-bridge-agent-frbcm requesting resource cpu=0m on Node 10.0.1.106 + Jan 14 04:58:24.871: INFO: Pod tke-bridge-agent-hzv6c requesting resource cpu=0m on Node 10.0.1.99 + Jan 14 04:58:24.871: INFO: Pod tke-cni-agent-7mk9b requesting resource cpu=0m on Node 10.0.1.99 + Jan 14 04:58:24.871: INFO: Pod tke-cni-agent-nv7pn requesting resource cpu=0m on Node 10.0.1.106 + Jan 14 04:58:24.871: INFO: Pod tke-cni-agent-xmggs requesting resource cpu=0m on Node 10.0.1.212 + Jan 14 04:58:24.871: INFO: Pod tke-monitor-agent-6gtt6 requesting resource cpu=10m on Node 10.0.1.212 + Jan 14 04:58:24.871: INFO: Pod tke-monitor-agent-g27mp requesting resource cpu=10m on Node 10.0.1.99 + Jan 14 04:58:24.871: INFO: Pod tke-monitor-agent-xhdhg requesting resource cpu=10m on Node 10.0.1.106 + Jan 14 04:58:24.871: INFO: Pod sonobuoy requesting resource cpu=0m on Node 10.0.1.99 + Jan 14 04:58:24.871: INFO: Pod sonobuoy-e2e-job-1b1a46fb40e34267 requesting resource cpu=0m on Node 10.0.1.212 + Jan 14 04:58:24.871: INFO: Pod sonobuoy-systemd-logs-daemon-set-0fc25bd2d6604eac-jkjnf requesting resource cpu=0m on Node 10.0.1.106 + Jan 14 04:58:24.871: INFO: Pod sonobuoy-systemd-logs-daemon-set-0fc25bd2d6604eac-lgxk7 requesting resource cpu=0m on Node 10.0.1.212 + Jan 14 04:58:24.871: INFO: Pod sonobuoy-systemd-logs-daemon-set-0fc25bd2d6604eac-nwh62 requesting resource cpu=0m on Node 10.0.1.99 + Jan 14 04:58:24.871: INFO: Pod ss2-0 requesting resource cpu=0m on Node 10.0.1.99 + Jan 14 04:58:24.871: INFO: Pod ss2-1 requesting resource cpu=0m on Node 10.0.1.99 + Jan 14 04:58:24.871: INFO: Pod ss2-2 requesting resource cpu=0m on Node 10.0.1.212 + STEP: Starting Pods to consume most of the cluster CPU. 
01/14/23 04:58:24.871 + Jan 14 04:58:24.872: INFO: Creating a pod which consumes cpu=5453m on Node 10.0.1.106 + Jan 14 04:58:24.882: INFO: Creating a pod which consumes cpu=5383m on Node 10.0.1.212 + Jan 14 04:58:24.891: INFO: Creating a pod which consumes cpu=5383m on Node 10.0.1.99 + Jan 14 04:58:24.898: INFO: Waiting up to 5m0s for pod "filler-pod-ece7aaba-9bc0-456e-8dc1-41108a6ecc12" in namespace "sched-pred-5328" to be "running" + Jan 14 04:58:24.903: INFO: Pod "filler-pod-ece7aaba-9bc0-456e-8dc1-41108a6ecc12": Phase="Pending", Reason="", readiness=false. Elapsed: 5.183637ms + Jan 14 04:58:26.907: INFO: Pod "filler-pod-ece7aaba-9bc0-456e-8dc1-41108a6ecc12": Phase="Running", Reason="", readiness=true. Elapsed: 2.009543026s + Jan 14 04:58:26.908: INFO: Pod "filler-pod-ece7aaba-9bc0-456e-8dc1-41108a6ecc12" satisfied condition "running" + Jan 14 04:58:26.908: INFO: Waiting up to 5m0s for pod "filler-pod-f143daf2-96cf-42bc-89b8-deed767fc124" in namespace "sched-pred-5328" to be "running" + Jan 14 04:58:26.911: INFO: Pod "filler-pod-f143daf2-96cf-42bc-89b8-deed767fc124": Phase="Running", Reason="", readiness=true. Elapsed: 3.165495ms + Jan 14 04:58:26.911: INFO: Pod "filler-pod-f143daf2-96cf-42bc-89b8-deed767fc124" satisfied condition "running" + Jan 14 04:58:26.911: INFO: Waiting up to 5m0s for pod "filler-pod-19d8fa59-179a-4dde-8da1-975a5b5d3b4d" in namespace "sched-pred-5328" to be "running" + Jan 14 04:58:26.914: INFO: Pod "filler-pod-19d8fa59-179a-4dde-8da1-975a5b5d3b4d": Phase="Running", Reason="", readiness=true. Elapsed: 2.994945ms + Jan 14 04:58:26.914: INFO: Pod "filler-pod-19d8fa59-179a-4dde-8da1-975a5b5d3b4d" satisfied condition "running" + STEP: Creating another pod that requires unavailable amount of CPU. 01/14/23 04:58:26.914 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-19d8fa59-179a-4dde-8da1-975a5b5d3b4d.173a1418044b77c5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5328/filler-pod-19d8fa59-179a-4dde-8da1-975a5b5d3b4d to 10.0.1.99] 01/14/23 04:58:26.918 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-19d8fa59-179a-4dde-8da1-975a5b5d3b4d.173a14181feb7d6f], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 01/14/23 04:58:26.918 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-19d8fa59-179a-4dde-8da1-975a5b5d3b4d.173a1418211473ba], Reason = [Created], Message = [Created container filler-pod-19d8fa59-179a-4dde-8da1-975a5b5d3b4d] 01/14/23 04:58:26.918 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-19d8fa59-179a-4dde-8da1-975a5b5d3b4d.173a1418253aae88], Reason = [Started], Message = [Started container filler-pod-19d8fa59-179a-4dde-8da1-975a5b5d3b4d] 01/14/23 04:58:26.918 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-ece7aaba-9bc0-456e-8dc1-41108a6ecc12.173a1418034ebf79], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5328/filler-pod-ece7aaba-9bc0-456e-8dc1-41108a6ecc12 to 10.0.1.106] 01/14/23 04:58:26.918 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-ece7aaba-9bc0-456e-8dc1-41108a6ecc12.173a14181ec9a063], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 01/14/23 04:58:26.918 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-ece7aaba-9bc0-456e-8dc1-41108a6ecc12.173a14181fda13bd], Reason = [Created], Message = [Created container filler-pod-ece7aaba-9bc0-456e-8dc1-41108a6ecc12] 01/14/23 04:58:26.918 + STEP: 
Considering event: + Type = [Normal], Name = [filler-pod-ece7aaba-9bc0-456e-8dc1-41108a6ecc12.173a141823b8bee6], Reason = [Started], Message = [Started container filler-pod-ece7aaba-9bc0-456e-8dc1-41108a6ecc12] 01/14/23 04:58:26.918 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-f143daf2-96cf-42bc-89b8-deed767fc124.173a141803cabcbe], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5328/filler-pod-f143daf2-96cf-42bc-89b8-deed767fc124 to 10.0.1.212] 01/14/23 04:58:26.918 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-f143daf2-96cf-42bc-89b8-deed767fc124.173a1418202958e9], Reason = [Pulled], Message = [Container image "registry.k8s.io/pause:3.9" already present on machine] 01/14/23 04:58:26.918 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-f143daf2-96cf-42bc-89b8-deed767fc124.173a14182148bab4], Reason = [Created], Message = [Created container filler-pod-f143daf2-96cf-42bc-89b8-deed767fc124] 01/14/23 04:58:26.918 + STEP: Considering event: + Type = [Normal], Name = [filler-pod-f143daf2-96cf-42bc-89b8-deed767fc124.173a141825eff6e7], Reason = [Started], Message = [Started container filler-pod-f143daf2-96cf-42bc-89b8-deed767fc124] 01/14/23 04:58:26.918 + STEP: Considering event: + Type = [Warning], Name = [additional-pod.173a14187ccba435], Reason = [FailedScheduling], Message = [0/6 nodes are available: 3 Insufficient cpu, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 3 No preemption victims found for incoming pod, 3 Preemption is not helpful for scheduling..] 01/14/23 04:58:26.931 + STEP: removing the label node off the node 10.0.1.99 01/14/23 04:58:27.931 + STEP: verifying the node doesn't have the label node 01/14/23 04:58:27.944 + STEP: removing the label node off the node 10.0.1.106 01/14/23 04:58:27.95 + STEP: verifying the node doesn't have the label node 01/14/23 04:58:27.964 + STEP: removing the label node off the node 10.0.1.212 01/14/23 04:58:27.967 + STEP: verifying the node doesn't have the label node 01/14/23 04:58:27.979 + [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/node/init/init.go:32 + Jan 14 04:58:27.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/scheduling/predicates.go:88 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-scheduling] SchedulerPredicates [Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "sched-pred-5328" for this suite. 
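+The predicate test above works by arithmetic on CPU requests: it adds up the requests already on each node (100m for the kubernetes-proxy pods, 10m for each tke-monitor-agent, 0m for the rest), creates one filler pod per node sized to consume the remaining allocatable CPU, and then expects a final pod requesting more CPU than any node has left to stay Pending with the FailedScheduling event shown. The per-node numbers the scheduler uses can be inspected directly; a short sketch, assuming the node names from this run:
+
+# allocatable capacity and the current request/limit tally per node
+kubectl describe node 10.0.1.106 | grep -A 8 'Allocatable:'
+kubectl describe node 10.0.1.106 | grep -A 6 'Allocated resources:'
+
+# scheduling failures surface as events with reason=FailedScheduling
+kubectl get events -A --field-selector reason=FailedScheduling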
01/14/23 04:58:27.987 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] Lease + lease API should be available [Conformance] + test/e2e/common/node/lease.go:72 +[BeforeEach] [sig-node] Lease + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:58:27.999 +Jan 14 04:58:27.999: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename lease-test 01/14/23 04:58:28 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:58:28.014 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:58:28.016 +[BeforeEach] [sig-node] Lease + test/e2e/framework/metrics/init/init.go:31 +[It] lease API should be available [Conformance] + test/e2e/common/node/lease.go:72 +[AfterEach] [sig-node] Lease + test/e2e/framework/node/init/init.go:32 +Jan 14 04:58:28.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Lease + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Lease + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Lease + tear down framework | framework.go:193 +STEP: Destroying namespace "lease-test-126" for this suite. 01/14/23 04:58:28.063 +------------------------------ +• [0.069 seconds] +[sig-node] Lease +test/e2e/common/node/framework.go:23 + lease API should be available [Conformance] + test/e2e/common/node/lease.go:72 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Lease + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:58:27.999 + Jan 14 04:58:27.999: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename lease-test 01/14/23 04:58:28 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:58:28.014 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:58:28.016 + [BeforeEach] [sig-node] Lease + test/e2e/framework/metrics/init/init.go:31 + [It] lease API should be available [Conformance] + test/e2e/common/node/lease.go:72 + [AfterEach] [sig-node] Lease + test/e2e/framework/node/init/init.go:32 + Jan 14 04:58:28.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Lease + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Lease + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Lease + tear down framework | framework.go:193 + STEP: Destroying namespace "lease-test-126" for this suite. 
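+The Lease check above needs only the coordination.k8s.io API group to be served: the test creates, reads, patches, and deletes Lease objects through it. The same API backs kubelet heartbeats, so a quick availability check on any cluster is to look at the leases the nodes maintain; a minimal sketch, assuming the node names from this run:
+
+kubectl get leases -n kube-node-lease
+# renewTime should advance roughly every 10s while the kubelet is healthy
+kubectl get lease 10.0.1.106 -n kube-node-lease -o yaml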
01/14/23 04:58:28.063 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for the cluster [Conformance] + test/e2e/network/dns.go:50 +[BeforeEach] [sig-network] DNS + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:58:28.068 +Jan 14 04:58:28.069: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename dns 01/14/23 04:58:28.069 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:58:28.083 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:58:28.086 +[BeforeEach] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:31 +[It] should provide DNS for the cluster [Conformance] + test/e2e/network/dns.go:50 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;sleep 1; done + 01/14/23 04:58:28.088 +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;sleep 1; done + 01/14/23 04:58:28.088 +STEP: creating a pod to probe DNS 01/14/23 04:58:28.088 +STEP: submitting the pod to kubernetes 01/14/23 04:58:28.088 +Jan 14 04:58:28.098: INFO: Waiting up to 15m0s for pod "dns-test-d2bf63f5-ddf9-4a4f-a2f9-e0fbc97e2b42" in namespace "dns-560" to be "running" +Jan 14 04:58:28.101: INFO: Pod "dns-test-d2bf63f5-ddf9-4a4f-a2f9-e0fbc97e2b42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.888927ms +Jan 14 04:58:30.105: INFO: Pod "dns-test-d2bf63f5-ddf9-4a4f-a2f9-e0fbc97e2b42": Phase="Running", Reason="", readiness=true. Elapsed: 2.007341195s +Jan 14 04:58:30.105: INFO: Pod "dns-test-d2bf63f5-ddf9-4a4f-a2f9-e0fbc97e2b42" satisfied condition "running" +STEP: retrieving the pod 01/14/23 04:58:30.105 +STEP: looking for the results for each expected name from probers 01/14/23 04:58:30.108 +Jan 14 04:58:30.121: INFO: DNS probes using dns-560/dns-test-d2bf63f5-ddf9-4a4f-a2f9-e0fbc97e2b42 succeeded + +STEP: deleting the pod 01/14/23 04:58:30.121 +[AfterEach] [sig-network] DNS + test/e2e/framework/node/init/init.go:32 +Jan 14 04:58:30.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] DNS + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] DNS + tear down framework | framework.go:193 +STEP: Destroying namespace "dns-560" for this suite. 
01/14/23 04:58:30.14 +------------------------------ +• [2.078 seconds] +[sig-network] DNS +test/e2e/network/common/framework.go:23 + should provide DNS for the cluster [Conformance] + test/e2e/network/dns.go:50 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] DNS + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:58:28.068 + Jan 14 04:58:28.069: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename dns 01/14/23 04:58:28.069 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:58:28.083 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:58:28.086 + [BeforeEach] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:31 + [It] should provide DNS for the cluster [Conformance] + test/e2e/network/dns.go:50 + STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;sleep 1; done + 01/14/23 04:58:28.088 + STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;sleep 1; done + 01/14/23 04:58:28.088 + STEP: creating a pod to probe DNS 01/14/23 04:58:28.088 + STEP: submitting the pod to kubernetes 01/14/23 04:58:28.088 + Jan 14 04:58:28.098: INFO: Waiting up to 15m0s for pod "dns-test-d2bf63f5-ddf9-4a4f-a2f9-e0fbc97e2b42" in namespace "dns-560" to be "running" + Jan 14 04:58:28.101: INFO: Pod "dns-test-d2bf63f5-ddf9-4a4f-a2f9-e0fbc97e2b42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.888927ms + Jan 14 04:58:30.105: INFO: Pod "dns-test-d2bf63f5-ddf9-4a4f-a2f9-e0fbc97e2b42": Phase="Running", Reason="", readiness=true. Elapsed: 2.007341195s + Jan 14 04:58:30.105: INFO: Pod "dns-test-d2bf63f5-ddf9-4a4f-a2f9-e0fbc97e2b42" satisfied condition "running" + STEP: retrieving the pod 01/14/23 04:58:30.105 + STEP: looking for the results for each expected name from probers 01/14/23 04:58:30.108 + Jan 14 04:58:30.121: INFO: DNS probes using dns-560/dns-test-d2bf63f5-ddf9-4a4f-a2f9-e0fbc97e2b42 succeeded + + STEP: deleting the pod 01/14/23 04:58:30.121 + [AfterEach] [sig-network] DNS + test/e2e/framework/node/init/init.go:32 + Jan 14 04:58:30.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] DNS + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] DNS + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] DNS + tear down framework | framework.go:193 + STEP: Destroying namespace "dns-560" for this suite. 
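+The DNS probe above runs the quoted dig loops inside a test pod and waits for the OK marker files they write, covering resolution of kubernetes.default.svc.cluster.local over both UDP (+notcp) and TCP (+tcp). The same lookup can be spot-checked from any throwaway pod; a minimal sketch, assuming the busybox image tag is pullable in the cluster:
+
+kubectl run dnscheck --rm -it --restart=Never --image=busybox:1.36 -- \
+  nslookup kubernetes.default.svc.cluster.local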
01/14/23 04:58:30.14 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:261 +[BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:58:30.147 +Jan 14 04:58:30.147: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename projected 01/14/23 04:58:30.148 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:58:30.167 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:58:30.17 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 +[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:261 +STEP: Creating a pod to test downward API volume plugin 01/14/23 04:58:30.173 +Jan 14 04:58:30.183: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e61f0fd5-dd36-4259-abd1-a052991ed9bc" in namespace "projected-9242" to be "Succeeded or Failed" +Jan 14 04:58:30.188: INFO: Pod "downwardapi-volume-e61f0fd5-dd36-4259-abd1-a052991ed9bc": Phase="Pending", Reason="", readiness=false. Elapsed: 5.053805ms +Jan 14 04:58:32.193: INFO: Pod "downwardapi-volume-e61f0fd5-dd36-4259-abd1-a052991ed9bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010704584s +Jan 14 04:58:34.193: INFO: Pod "downwardapi-volume-e61f0fd5-dd36-4259-abd1-a052991ed9bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010446713s +STEP: Saw pod success 01/14/23 04:58:34.193 +Jan 14 04:58:34.193: INFO: Pod "downwardapi-volume-e61f0fd5-dd36-4259-abd1-a052991ed9bc" satisfied condition "Succeeded or Failed" +Jan 14 04:58:34.197: INFO: Trying to get logs from node 10.0.1.106 pod downwardapi-volume-e61f0fd5-dd36-4259-abd1-a052991ed9bc container client-container: +STEP: delete the pod 01/14/23 04:58:34.208 +Jan 14 04:58:34.223: INFO: Waiting for pod downwardapi-volume-e61f0fd5-dd36-4259-abd1-a052991ed9bc to disappear +Jan 14 04:58:34.226: INFO: Pod downwardapi-volume-e61f0fd5-dd36-4259-abd1-a052991ed9bc no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/node/init/init.go:32 +Jan 14 04:58:34.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-9242" for this suite. 
01/14/23 04:58:34.231 +------------------------------ +• [4.089 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:261 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:58:30.147 + Jan 14 04:58:30.147: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename projected 01/14/23 04:58:30.148 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:58:30.167 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:58:30.17 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 + [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:261 + STEP: Creating a pod to test downward API volume plugin 01/14/23 04:58:30.173 + Jan 14 04:58:30.183: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e61f0fd5-dd36-4259-abd1-a052991ed9bc" in namespace "projected-9242" to be "Succeeded or Failed" + Jan 14 04:58:30.188: INFO: Pod "downwardapi-volume-e61f0fd5-dd36-4259-abd1-a052991ed9bc": Phase="Pending", Reason="", readiness=false. Elapsed: 5.053805ms + Jan 14 04:58:32.193: INFO: Pod "downwardapi-volume-e61f0fd5-dd36-4259-abd1-a052991ed9bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010704584s + Jan 14 04:58:34.193: INFO: Pod "downwardapi-volume-e61f0fd5-dd36-4259-abd1-a052991ed9bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010446713s + STEP: Saw pod success 01/14/23 04:58:34.193 + Jan 14 04:58:34.193: INFO: Pod "downwardapi-volume-e61f0fd5-dd36-4259-abd1-a052991ed9bc" satisfied condition "Succeeded or Failed" + Jan 14 04:58:34.197: INFO: Trying to get logs from node 10.0.1.106 pod downwardapi-volume-e61f0fd5-dd36-4259-abd1-a052991ed9bc container client-container: + STEP: delete the pod 01/14/23 04:58:34.208 + Jan 14 04:58:34.223: INFO: Waiting for pod downwardapi-volume-e61f0fd5-dd36-4259-abd1-a052991ed9bc to disappear + Jan 14 04:58:34.226: INFO: Pod downwardapi-volume-e61f0fd5-dd36-4259-abd1-a052991ed9bc no longer exists + [AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/node/init/init.go:32 + Jan 14 04:58:34.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-9242" for this suite. 
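+The downward API case above relies on a defaulting rule: when a container declares no memory limit, a projected downwardAPI item with resourceFieldRef limits.memory resolves to the node's allocatable memory, and the test simply reads the mounted file back. A minimal sketch of such a volume, with hypothetical pod and path names:
+
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: dapi-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: client-container        # no resources.limits.memory set, on purpose
+    image: busybox:1.36
+    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
+    volumeMounts:
+    - name: podinfo
+      mountPath: /etc/podinfo
+  volumes:
+  - name: podinfo
+    projected:
+      sources:
+      - downwardAPI:
+          items:
+          - path: mem_limit
+            resourceFieldRef:
+              containerName: client-container
+              resource: limits.memory
+EOF
+kubectl logs dapi-demo   # prints the node-allocatable memory, in bytes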
01/14/23 04:58:34.231 + << End Captured GinkgoWriter Output +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with secret pod [Conformance] + test/e2e/storage/subpath.go:60 +[BeforeEach] [sig-storage] Subpath + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:58:34.236 +Jan 14 04:58:34.236: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename subpath 01/14/23 04:58:34.237 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:58:34.251 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:58:34.254 +[BeforeEach] [sig-storage] Subpath + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 +STEP: Setting up data 01/14/23 04:58:34.256 +[It] should support subpaths with secret pod [Conformance] + test/e2e/storage/subpath.go:60 +STEP: Creating pod pod-subpath-test-secret-dk7v 01/14/23 04:58:34.264 +STEP: Creating a pod to test atomic-volume-subpath 01/14/23 04:58:34.264 +Jan 14 04:58:34.273: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-dk7v" in namespace "subpath-186" to be "Succeeded or Failed" +Jan 14 04:58:34.276: INFO: Pod "pod-subpath-test-secret-dk7v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.934149ms +Jan 14 04:58:36.281: INFO: Pod "pod-subpath-test-secret-dk7v": Phase="Running", Reason="", readiness=true. Elapsed: 2.008701719s +Jan 14 04:58:38.281: INFO: Pod "pod-subpath-test-secret-dk7v": Phase="Running", Reason="", readiness=true. Elapsed: 4.008267236s +Jan 14 04:58:40.280: INFO: Pod "pod-subpath-test-secret-dk7v": Phase="Running", Reason="", readiness=true. Elapsed: 6.007667565s +Jan 14 04:58:42.281: INFO: Pod "pod-subpath-test-secret-dk7v": Phase="Running", Reason="", readiness=true. Elapsed: 8.008704128s +Jan 14 04:58:44.281: INFO: Pod "pod-subpath-test-secret-dk7v": Phase="Running", Reason="", readiness=true. Elapsed: 10.00793354s +Jan 14 04:58:46.281: INFO: Pod "pod-subpath-test-secret-dk7v": Phase="Running", Reason="", readiness=true. Elapsed: 12.008601754s +Jan 14 04:58:48.282: INFO: Pod "pod-subpath-test-secret-dk7v": Phase="Running", Reason="", readiness=true. Elapsed: 14.009105396s +Jan 14 04:58:50.280: INFO: Pod "pod-subpath-test-secret-dk7v": Phase="Running", Reason="", readiness=true. Elapsed: 16.00757669s +Jan 14 04:58:52.282: INFO: Pod "pod-subpath-test-secret-dk7v": Phase="Running", Reason="", readiness=true. Elapsed: 18.008818256s +Jan 14 04:58:54.281: INFO: Pod "pod-subpath-test-secret-dk7v": Phase="Running", Reason="", readiness=true. Elapsed: 20.008192591s +Jan 14 04:58:56.282: INFO: Pod "pod-subpath-test-secret-dk7v": Phase="Running", Reason="", readiness=false. Elapsed: 22.008947184s +Jan 14 04:58:58.282: INFO: Pod "pod-subpath-test-secret-dk7v": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.009371055s +STEP: Saw pod success 01/14/23 04:58:58.282 +Jan 14 04:58:58.282: INFO: Pod "pod-subpath-test-secret-dk7v" satisfied condition "Succeeded or Failed" +Jan 14 04:58:58.286: INFO: Trying to get logs from node 10.0.1.106 pod pod-subpath-test-secret-dk7v container test-container-subpath-secret-dk7v: +STEP: delete the pod 01/14/23 04:58:58.292 +Jan 14 04:58:58.303: INFO: Waiting for pod pod-subpath-test-secret-dk7v to disappear +Jan 14 04:58:58.306: INFO: Pod pod-subpath-test-secret-dk7v no longer exists +STEP: Deleting pod pod-subpath-test-secret-dk7v 01/14/23 04:58:58.306 +Jan 14 04:58:58.306: INFO: Deleting pod "pod-subpath-test-secret-dk7v" in namespace "subpath-186" +[AfterEach] [sig-storage] Subpath + test/e2e/framework/node/init/init.go:32 +Jan 14 04:58:58.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Subpath + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Subpath + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Subpath + tear down framework | framework.go:193 +STEP: Destroying namespace "subpath-186" for this suite. 01/14/23 04:58:58.314 +------------------------------ +• [SLOW TEST] [24.083 seconds] +[sig-storage] Subpath +test/e2e/storage/utils/framework.go:23 + Atomic writer volumes + test/e2e/storage/subpath.go:36 + should support subpaths with secret pod [Conformance] + test/e2e/storage/subpath.go:60 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Subpath + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:58:34.236 + Jan 14 04:58:34.236: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename subpath 01/14/23 04:58:34.237 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:58:34.251 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:58:34.254 + [BeforeEach] [sig-storage] Subpath + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] Atomic writer volumes + test/e2e/storage/subpath.go:40 + STEP: Setting up data 01/14/23 04:58:34.256 + [It] should support subpaths with secret pod [Conformance] + test/e2e/storage/subpath.go:60 + STEP: Creating pod pod-subpath-test-secret-dk7v 01/14/23 04:58:34.264 + STEP: Creating a pod to test atomic-volume-subpath 01/14/23 04:58:34.264 + Jan 14 04:58:34.273: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-dk7v" in namespace "subpath-186" to be "Succeeded or Failed" + Jan 14 04:58:34.276: INFO: Pod "pod-subpath-test-secret-dk7v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.934149ms + Jan 14 04:58:36.281: INFO: Pod "pod-subpath-test-secret-dk7v": Phase="Running", Reason="", readiness=true. Elapsed: 2.008701719s + Jan 14 04:58:38.281: INFO: Pod "pod-subpath-test-secret-dk7v": Phase="Running", Reason="", readiness=true. Elapsed: 4.008267236s + Jan 14 04:58:40.280: INFO: Pod "pod-subpath-test-secret-dk7v": Phase="Running", Reason="", readiness=true. Elapsed: 6.007667565s + Jan 14 04:58:42.281: INFO: Pod "pod-subpath-test-secret-dk7v": Phase="Running", Reason="", readiness=true. Elapsed: 8.008704128s + Jan 14 04:58:44.281: INFO: Pod "pod-subpath-test-secret-dk7v": Phase="Running", Reason="", readiness=true. Elapsed: 10.00793354s + Jan 14 04:58:46.281: INFO: Pod "pod-subpath-test-secret-dk7v": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.008601754s + Jan 14 04:58:48.282: INFO: Pod "pod-subpath-test-secret-dk7v": Phase="Running", Reason="", readiness=true. Elapsed: 14.009105396s + Jan 14 04:58:50.280: INFO: Pod "pod-subpath-test-secret-dk7v": Phase="Running", Reason="", readiness=true. Elapsed: 16.00757669s + Jan 14 04:58:52.282: INFO: Pod "pod-subpath-test-secret-dk7v": Phase="Running", Reason="", readiness=true. Elapsed: 18.008818256s + Jan 14 04:58:54.281: INFO: Pod "pod-subpath-test-secret-dk7v": Phase="Running", Reason="", readiness=true. Elapsed: 20.008192591s + Jan 14 04:58:56.282: INFO: Pod "pod-subpath-test-secret-dk7v": Phase="Running", Reason="", readiness=false. Elapsed: 22.008947184s + Jan 14 04:58:58.282: INFO: Pod "pod-subpath-test-secret-dk7v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.009371055s + STEP: Saw pod success 01/14/23 04:58:58.282 + Jan 14 04:58:58.282: INFO: Pod "pod-subpath-test-secret-dk7v" satisfied condition "Succeeded or Failed" + Jan 14 04:58:58.286: INFO: Trying to get logs from node 10.0.1.106 pod pod-subpath-test-secret-dk7v container test-container-subpath-secret-dk7v: + STEP: delete the pod 01/14/23 04:58:58.292 + Jan 14 04:58:58.303: INFO: Waiting for pod pod-subpath-test-secret-dk7v to disappear + Jan 14 04:58:58.306: INFO: Pod pod-subpath-test-secret-dk7v no longer exists + STEP: Deleting pod pod-subpath-test-secret-dk7v 01/14/23 04:58:58.306 + Jan 14 04:58:58.306: INFO: Deleting pod "pod-subpath-test-secret-dk7v" in namespace "subpath-186" + [AfterEach] [sig-storage] Subpath + test/e2e/framework/node/init/init.go:32 + Jan 14 04:58:58.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Subpath + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Subpath + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Subpath + tear down framework | framework.go:193 + STEP: Destroying namespace "subpath-186" for this suite. 01/14/23 04:58:58.314 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should run and stop complex daemon [Conformance] + test/e2e/apps/daemon_set.go:194 +[BeforeEach] [sig-apps] Daemon set [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:58:58.321 +Jan 14 04:58:58.321: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename daemonsets 01/14/23 04:58:58.321 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:58:58.335 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:58:58.337 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:146 +[It] should run and stop complex daemon [Conformance] + test/e2e/apps/daemon_set.go:194 +Jan 14 04:58:58.359: INFO: Creating daemon "daemon-set" with a node selector +STEP: Initially, daemon pods should not be running on any nodes. 01/14/23 04:58:58.366 +Jan 14 04:58:58.373: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jan 14 04:58:58.373: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +STEP: Change node label to blue, check that daemon pod is launched. 
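The Subpath test that finished above (namespace subpath-186) mounts a single key of a secret at a subPath and polls the file for roughly 24 seconds to confirm the atomic-writer guarantee: the file's content stays stable while the pod runs. A minimal sketch of the same mount pattern, with illustrative names and image:

```sh
# Sketch: expose one secret key as a single file via subPath.
kubectl create secret generic subpath-demo --from-literal=data=hello
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-secret-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox:1.36
    command: ["sh", "-c", "cat /mnt/data"]
    volumeMounts:
    - name: secret-vol
      mountPath: /mnt/data   # mounted as a file, not a directory
      subPath: data          # only this key from the secret
  volumes:
  - name: secret-vol
    secret:
      secretName: subpath-demo
EOF
```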
01/14/23 04:58:58.373 +Jan 14 04:58:58.394: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jan 14 04:58:58.394: INFO: Node 10.0.1.99 is running 0 daemon pod, expected 1 +Jan 14 04:58:59.398: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jan 14 04:58:59.398: INFO: Node 10.0.1.99 is running 0 daemon pod, expected 1 +Jan 14 04:59:00.399: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Jan 14 04:59:00.399: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set +STEP: Update the node label to green, and wait for daemons to be unscheduled 01/14/23 04:59:00.402 +Jan 14 04:59:00.417: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Jan 14 04:59:00.417: INFO: Number of running nodes: 0, number of available pods: 1 in daemonset daemon-set +Jan 14 04:59:01.422: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jan 14 04:59:01.422: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate 01/14/23 04:59:01.422 +Jan 14 04:59:01.437: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jan 14 04:59:01.437: INFO: Node 10.0.1.99 is running 0 daemon pod, expected 1 +Jan 14 04:59:02.442: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jan 14 04:59:02.442: INFO: Node 10.0.1.99 is running 0 daemon pod, expected 1 +Jan 14 04:59:03.441: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jan 14 04:59:03.441: INFO: Node 10.0.1.99 is running 0 daemon pod, expected 1 +Jan 14 04:59:04.441: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 +Jan 14 04:59:04.441: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:111 +STEP: Deleting DaemonSet "daemon-set" 01/14/23 04:59:04.448 +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1144, will wait for the garbage collector to delete the pods 01/14/23 04:59:04.448 +Jan 14 04:59:04.511: INFO: Deleting DaemonSet.extensions daemon-set took: 9.761263ms +Jan 14 04:59:04.612: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.73062ms +Jan 14 04:59:06.617: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 +Jan 14 04:59:06.617: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set +Jan 14 04:59:06.621: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"460063"},"items":null} + +Jan 14 04:59:06.624: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"460063"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/node/init/init.go:32 +Jan 14 04:59:06.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "daemonsets-1144" for this suite. 
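The DaemonSet controller schedules pods purely from the template's nodeSelector, so the test above drives pods on and off node 10.0.1.99 by relabeling it. A sketch of the same mechanics; the label key `color` and the pause image stand in for the test's generated label and workload:

```sh
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-demo            # illustrative name
spec:
  selector:
    matchLabels:
      app: daemon-demo
  updateStrategy:
    type: RollingUpdate        # the test switches to this strategy mid-run
  template:
    metadata:
      labels:
        app: daemon-demo
    spec:
      nodeSelector:
        color: blue            # pods run only on nodes carrying this label
      containers:
      - name: app
        image: registry.k8s.io/pause:3.9
EOF
kubectl label node 10.0.1.99 color=blue               # daemon pod is launched
kubectl label node 10.0.1.99 color=green --overwrite  # daemon pod is unscheduled
```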
01/14/23 04:59:06.657 +------------------------------ +• [SLOW TEST] [8.342 seconds] +[sig-apps] Daemon set [Serial] +test/e2e/apps/framework.go:23 + should run and stop complex daemon [Conformance] + test/e2e/apps/daemon_set.go:194 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] Daemon set [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:58:58.321 + Jan 14 04:58:58.321: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename daemonsets 01/14/23 04:58:58.321 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:58:58.335 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:58:58.337 + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] Daemon set [Serial] + test/e2e/apps/daemon_set.go:146 + [It] should run and stop complex daemon [Conformance] + test/e2e/apps/daemon_set.go:194 + Jan 14 04:58:58.359: INFO: Creating daemon "daemon-set" with a node selector + STEP: Initially, daemon pods should not be running on any nodes. 01/14/23 04:58:58.366 + Jan 14 04:58:58.373: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jan 14 04:58:58.373: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set + STEP: Change node label to blue, check that daemon pod is launched. 01/14/23 04:58:58.373 + Jan 14 04:58:58.394: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jan 14 04:58:58.394: INFO: Node 10.0.1.99 is running 0 daemon pod, expected 1 + Jan 14 04:58:59.398: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jan 14 04:58:59.398: INFO: Node 10.0.1.99 is running 0 daemon pod, expected 1 + Jan 14 04:59:00.399: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 + Jan 14 04:59:00.399: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set + STEP: Update the node label to green, and wait for daemons to be unscheduled 01/14/23 04:59:00.402 + Jan 14 04:59:00.417: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 + Jan 14 04:59:00.417: INFO: Number of running nodes: 0, number of available pods: 1 in daemonset daemon-set + Jan 14 04:59:01.422: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jan 14 04:59:01.422: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set + STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate 01/14/23 04:59:01.422 + Jan 14 04:59:01.437: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jan 14 04:59:01.437: INFO: Node 10.0.1.99 is running 0 daemon pod, expected 1 + Jan 14 04:59:02.442: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jan 14 04:59:02.442: INFO: Node 10.0.1.99 is running 0 daemon pod, expected 1 + Jan 14 04:59:03.441: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jan 14 04:59:03.441: INFO: Node 10.0.1.99 is running 0 daemon pod, expected 1 + Jan 14 04:59:04.441: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 1 + Jan 14 04:59:04.441: INFO: Number of running nodes: 1, number of available pods: 1 in daemonset daemon-set + [AfterEach] [sig-apps] Daemon set [Serial] + 
test/e2e/apps/daemon_set.go:111 + STEP: Deleting DaemonSet "daemon-set" 01/14/23 04:59:04.448 + STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1144, will wait for the garbage collector to delete the pods 01/14/23 04:59:04.448 + Jan 14 04:59:04.511: INFO: Deleting DaemonSet.extensions daemon-set took: 9.761263ms + Jan 14 04:59:04.612: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.73062ms + Jan 14 04:59:06.617: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0 + Jan 14 04:59:06.617: INFO: Number of running nodes: 0, number of available pods: 0 in daemonset daemon-set + Jan 14 04:59:06.621: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"460063"},"items":null} + + Jan 14 04:59:06.624: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"460063"},"items":null} + + [AfterEach] [sig-apps] Daemon set [Serial] + test/e2e/framework/node/init/init.go:32 + Jan 14 04:59:06.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] Daemon set [Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "daemonsets-1144" for this suite. 01/14/23 04:59:06.657 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:79 +[BeforeEach] [sig-storage] Secrets + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:59:06.663 +Jan 14 04:59:06.663: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename secrets 01/14/23 04:59:06.664 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:59:06.681 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:59:06.683 +[BeforeEach] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:31 +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:79 +STEP: Creating secret with name secret-test-map-1d437827-8afa-447e-9e48-501c88fedabb 01/14/23 04:59:06.686 +STEP: Creating a pod to test consume secrets 01/14/23 04:59:06.69 +Jan 14 04:59:06.699: INFO: Waiting up to 5m0s for pod "pod-secrets-73f0b447-5a7f-43a2-9062-6793638dd5bb" in namespace "secrets-5557" to be "Succeeded or Failed" +Jan 14 04:59:06.702: INFO: Pod "pod-secrets-73f0b447-5a7f-43a2-9062-6793638dd5bb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.162063ms +Jan 14 04:59:08.707: INFO: Pod "pod-secrets-73f0b447-5a7f-43a2-9062-6793638dd5bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008078363s +Jan 14 04:59:10.707: INFO: Pod "pod-secrets-73f0b447-5a7f-43a2-9062-6793638dd5bb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008330988s +STEP: Saw pod success 01/14/23 04:59:10.707 +Jan 14 04:59:10.708: INFO: Pod "pod-secrets-73f0b447-5a7f-43a2-9062-6793638dd5bb" satisfied condition "Succeeded or Failed" +Jan 14 04:59:10.711: INFO: Trying to get logs from node 10.0.1.106 pod pod-secrets-73f0b447-5a7f-43a2-9062-6793638dd5bb container secret-volume-test: +STEP: delete the pod 01/14/23 04:59:10.717 +Jan 14 04:59:10.730: INFO: Waiting for pod pod-secrets-73f0b447-5a7f-43a2-9062-6793638dd5bb to disappear +Jan 14 04:59:10.732: INFO: Pod pod-secrets-73f0b447-5a7f-43a2-9062-6793638dd5bb no longer exists +[AfterEach] [sig-storage] Secrets + test/e2e/framework/node/init/init.go:32 +Jan 14 04:59:10.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Secrets + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Secrets + tear down framework | framework.go:193 +STEP: Destroying namespace "secrets-5557" for this suite. 01/14/23 04:59:10.737 +------------------------------ +• [4.079 seconds] +[sig-storage] Secrets +test/e2e/common/storage/framework.go:23 + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:79 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Secrets + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 04:59:06.663 + Jan 14 04:59:06.663: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename secrets 01/14/23 04:59:06.664 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:59:06.681 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:59:06.683 + [BeforeEach] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:31 + [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + test/e2e/common/storage/secrets_volume.go:79 + STEP: Creating secret with name secret-test-map-1d437827-8afa-447e-9e48-501c88fedabb 01/14/23 04:59:06.686 + STEP: Creating a pod to test consume secrets 01/14/23 04:59:06.69 + Jan 14 04:59:06.699: INFO: Waiting up to 5m0s for pod "pod-secrets-73f0b447-5a7f-43a2-9062-6793638dd5bb" in namespace "secrets-5557" to be "Succeeded or Failed" + Jan 14 04:59:06.702: INFO: Pod "pod-secrets-73f0b447-5a7f-43a2-9062-6793638dd5bb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.162063ms + Jan 14 04:59:08.707: INFO: Pod "pod-secrets-73f0b447-5a7f-43a2-9062-6793638dd5bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008078363s + Jan 14 04:59:10.707: INFO: Pod "pod-secrets-73f0b447-5a7f-43a2-9062-6793638dd5bb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008330988s + STEP: Saw pod success 01/14/23 04:59:10.707 + Jan 14 04:59:10.708: INFO: Pod "pod-secrets-73f0b447-5a7f-43a2-9062-6793638dd5bb" satisfied condition "Succeeded or Failed" + Jan 14 04:59:10.711: INFO: Trying to get logs from node 10.0.1.106 pod pod-secrets-73f0b447-5a7f-43a2-9062-6793638dd5bb container secret-volume-test: + STEP: delete the pod 01/14/23 04:59:10.717 + Jan 14 04:59:10.730: INFO: Waiting for pod pod-secrets-73f0b447-5a7f-43a2-9062-6793638dd5bb to disappear + Jan 14 04:59:10.732: INFO: Pod pod-secrets-73f0b447-5a7f-43a2-9062-6793638dd5bb no longer exists + [AfterEach] [sig-storage] Secrets + test/e2e/framework/node/init/init.go:32 + Jan 14 04:59:10.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Secrets + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Secrets + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Secrets + tear down framework | framework.go:193 + STEP: Destroying namespace "secrets-5557" for this suite. 01/14/23 04:59:10.737 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client Proxy server + should support --unix-socket=/path [Conformance] + test/e2e/kubectl/kubectl.go:1812 +[BeforeEach] [sig-cli] Kubectl client + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:59:10.745 +Jan 14 04:59:10.745: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename kubectl 01/14/23 04:59:10.745 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:59:10.76 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:59:10.763 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-cli] Kubectl client + test/e2e/kubectl/kubectl.go:274 +[It] should support --unix-socket=/path [Conformance] + test/e2e/kubectl/kubectl.go:1812 +STEP: Starting the proxy 01/14/23 04:59:10.765 +Jan 14 04:59:10.765: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=kubectl-6429 proxy --unix-socket=/tmp/kubectl-proxy-unix3077306649/test' +STEP: retrieving proxy /api/ output 01/14/23 04:59:10.812 +[AfterEach] [sig-cli] Kubectl client + test/e2e/framework/node/init/init.go:32 +Jan 14 04:59:10.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-cli] Kubectl client + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-cli] Kubectl client + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-cli] Kubectl client + tear down framework | framework.go:193 +STEP: Destroying namespace "kubectl-6429" for this suite. 
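Stepping back to the Secrets volume test above (namespace secrets-5557): the `items` list of a secret volume remaps a key to a chosen path and file mode inside the mount. A sketch with illustrative names:

```sh
kubectl create secret generic mapped-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-mapping-demo    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.36
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: mapped-secret
      items:
      - key: data-1
        path: new-path-data-1   # the key appears only under this mapped path
        mode: 0400
EOF
```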
01/14/23 04:59:10.818
+------------------------------
+• [0.079 seconds]
+[sig-cli] Kubectl client
+test/e2e/kubectl/framework.go:23
+  Proxy server
+  test/e2e/kubectl/kubectl.go:1780
+  should support --unix-socket=/path [Conformance]
+  test/e2e/kubectl/kubectl.go:1812
+
+  Begin Captured GinkgoWriter Output >>
+  STEP: Destroying namespace "kubectl-6429" for this suite. 
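The proxy started by this test listens on a unix socket rather than a TCP port, which keeps the unauthenticated API endpoint off the network. Equivalent manual usage (socket path illustrative):

```sh
kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
# Any HTTP client with unix-socket support can reach the API through it:
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
kill %1   # stop the background proxy
```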
01/14/23 04:59:10.818 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] + should perform rolling updates and roll backs of template modifications [Conformance] + test/e2e/apps/statefulset.go:306 +[BeforeEach] [sig-apps] StatefulSet + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 04:59:10.824 +Jan 14 04:59:10.824: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename statefulset 01/14/23 04:59:10.825 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 04:59:10.839 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 04:59:10.842 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] StatefulSet + test/e2e/apps/statefulset.go:98 +[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:113 +STEP: Creating service test in namespace statefulset-2440 01/14/23 04:59:10.844 +[It] should perform rolling updates and roll backs of template modifications [Conformance] + test/e2e/apps/statefulset.go:306 +STEP: Creating a new StatefulSet 01/14/23 04:59:10.849 +Jan 14 04:59:10.858: INFO: Found 0 stateful pods, waiting for 3 +Jan 14 04:59:20.864: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Jan 14 04:59:20.864: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Jan 14 04:59:20.864: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +Jan 14 04:59:20.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=statefulset-2440 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Jan 14 04:59:20.997: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" +Jan 14 04:59:20.997: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Jan 14 04:59:20.997: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +STEP: Updating StatefulSet template: update image from registry.k8s.io/e2e-test-images/httpd:2.4.38-4 to registry.k8s.io/e2e-test-images/httpd:2.4.39-4 01/14/23 04:59:31.013 +Jan 14 04:59:31.032: INFO: Updating stateful set ss2 +STEP: Creating a new revision 01/14/23 04:59:31.032 +STEP: Updating Pods in reverse ordinal order 01/14/23 04:59:41.072 +Jan 14 04:59:41.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=statefulset-2440 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Jan 14 04:59:41.188: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Jan 14 04:59:41.188: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Jan 14 04:59:41.188: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +STEP: Rolling back to a previous revision 01/14/23 04:59:51.227 +Jan 14 04:59:51.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=statefulset-2440 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' +Jan 14 04:59:51.341: INFO: stderr: "+ mv 
-v /usr/local/apache2/htdocs/index.html /tmp/\n" +Jan 14 04:59:51.341: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" +Jan 14 04:59:51.341: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' + +Jan 14 05:00:01.377: INFO: Updating stateful set ss2 +STEP: Rolling back update in reverse ordinal order 01/14/23 05:00:11.393 +Jan 14 05:00:11.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=statefulset-2440 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' +Jan 14 05:00:11.509: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" +Jan 14 05:00:11.509: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" +Jan 14 05:00:11.509: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' + +[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] + test/e2e/apps/statefulset.go:124 +Jan 14 05:00:21.531: INFO: Deleting all statefulset in ns statefulset-2440 +Jan 14 05:00:21.533: INFO: Scaling statefulset ss2 to 0 +Jan 14 05:00:31.550: INFO: Waiting for statefulset status.replicas updated to 0 +Jan 14 05:00:31.553: INFO: Deleting statefulset ss2 +[AfterEach] [sig-apps] StatefulSet + test/e2e/framework/node/init/init.go:32 +Jan 14 05:00:31.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] StatefulSet + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] StatefulSet + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] StatefulSet + tear down framework | framework.go:193 +STEP: Destroying namespace "statefulset-2440" for this suite. 
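The template update driven above can be reproduced with kubectl. The image tags are taken from the log; the container name `webserver` is an assumption about the test's pod template, not confirmed by the output:

```sh
# Sketch: trigger the same image update the test performs on StatefulSet ss2.
kubectl -n statefulset-2440 set image statefulset/ss2 \
  webserver=registry.k8s.io/e2e-test-images/httpd:2.4.39-4
# RollingUpdate replaces pods one at a time in reverse ordinal order
# (ss2-2, ss2-1, ss2-0), waiting for each to become Ready:
kubectl -n statefulset-2440 rollout status statefulset/ss2
```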
01/14/23 05:00:31.585
+------------------------------
+• [SLOW TEST] [80.766 seconds]
+[sig-apps] StatefulSet
+test/e2e/apps/framework.go:23
+  Basic StatefulSet functionality [StatefulSetBasic]
+  test/e2e/apps/statefulset.go:103
+  should perform rolling updates and roll backs of template modifications [Conformance]
+  test/e2e/apps/statefulset.go:306
+
+  Begin Captured GinkgoWriter Output >>
+  STEP: Destroying namespace "statefulset-2440" for this suite. 
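The test performs the rollback by re-applying the previous template; an operator can get the same effect declaratively. A sketch, reusing the same assumed names as the update example above:

```sh
kubectl -n statefulset-2440 rollout history statefulset/ss2   # list revisions
kubectl -n statefulset-2440 rollout undo statefulset/ss2      # return to the previous one
```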
01/14/23 05:00:31.585 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-apps] Deployment + RecreateDeployment should delete old pods and create new ones [Conformance] + test/e2e/apps/deployment.go:113 +[BeforeEach] [sig-apps] Deployment + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 05:00:31.591 +Jan 14 05:00:31.591: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename deployment 01/14/23 05:00:31.592 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 05:00:31.658 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 05:00:31.66 +[BeforeEach] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:91 +[It] RecreateDeployment should delete old pods and create new ones [Conformance] + test/e2e/apps/deployment.go:113 +Jan 14 05:00:31.663: INFO: Creating deployment "test-recreate-deployment" +Jan 14 05:00:31.668: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 +Jan 14 05:00:31.756: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created +Jan 14 05:00:33.764: INFO: Waiting deployment "test-recreate-deployment" to complete +Jan 14 05:00:33.767: INFO: Triggering a new rollout for deployment "test-recreate-deployment" +Jan 14 05:00:33.777: INFO: Updating deployment test-recreate-deployment +Jan 14 05:00:33.777: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods +[AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 +Jan 14 05:00:33.861: INFO: Deployment "test-recreate-deployment": +&Deployment{ObjectMeta:{test-recreate-deployment deployment-8298 199e2c1b-a707-4baf-adc2-4c235ead069d 460912 2 2023-01-14 05:00:31 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-01-14 05:00:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 05:00:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] 
{map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004b32b18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2023-01-14 05:00:33 +0000 UTC,LastTransitionTime:2023-01-14 05:00:33 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-cff6dc657" is progressing.,LastUpdateTime:2023-01-14 05:00:33 +0000 UTC,LastTransitionTime:2023-01-14 05:00:31 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} + +Jan 14 05:00:33.864: INFO: New ReplicaSet "test-recreate-deployment-cff6dc657" of Deployment "test-recreate-deployment": +&ReplicaSet{ObjectMeta:{test-recreate-deployment-cff6dc657 deployment-8298 e3090cb1-6482-4047-8492-8e7bf82205e7 460911 1 2023-01-14 05:00:33 +0000 UTC map[name:sample-pod-3 pod-template-hash:cff6dc657] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 199e2c1b-a707-4baf-adc2-4c235ead069d 0xc004b32fc0 0xc004b32fc1}] [] [{kube-controller-manager Update apps/v1 2023-01-14 05:00:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"199e2c1b-a707-4baf-adc2-4c235ead069d\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 05:00:33 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: cff6dc657,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:cff6dc657] map[] [] [] []} {[] [] [{httpd registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004b33058 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Jan 14 05:00:33.864: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": +Jan 14 05:00:33.864: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-795566c5cb deployment-8298 52657143-677d-4e69-8728-ed5c3543e0c9 460901 2 2023-01-14 05:00:31 +0000 UTC map[name:sample-pod-3 pod-template-hash:795566c5cb] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 199e2c1b-a707-4baf-adc2-4c235ead069d 0xc004b32ea7 0xc004b32ea8}] [] [{kube-controller-manager Update apps/v1 2023-01-14 05:00:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"199e2c1b-a707-4baf-adc2-4c235ead069d\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 05:00:33 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 795566c5cb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:795566c5cb] map[] [] [] []} {[] [] [{agnhost registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004b32f58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} +Jan 14 05:00:33.868: INFO: Pod 
"test-recreate-deployment-cff6dc657-96tv9" is not available: +&Pod{ObjectMeta:{test-recreate-deployment-cff6dc657-96tv9 test-recreate-deployment-cff6dc657- deployment-8298 b5eb3d2b-1e88-4f65-846a-78b08e18d093 460913 0 2023-01-14 05:00:33 +0000 UTC map[name:sample-pod-3 pod-template-hash:cff6dc657] map[] [{apps/v1 ReplicaSet test-recreate-deployment-cff6dc657 e3090cb1-6482-4047-8492-8e7bf82205e7 0xc00192e230 0xc00192e231}] [] [{kube-controller-manager Update v1 2023-01-14 05:00:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3090cb1-6482-4047-8492-8e7bf82205e7\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 05:00:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-g5dzq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g5dzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnl
yRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.106,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 05:00:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 05:00:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 05:00:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 05:00:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.106,PodIP:,StartTime:2023-01-14 05:00:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} +[AfterEach] [sig-apps] Deployment + test/e2e/framework/node/init/init.go:32 +Jan 14 05:00:33.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] Deployment + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] Deployment + tear down framework | framework.go:193 +STEP: Destroying namespace "deployment-8298" for this suite. 
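With `strategy.type: Recreate`, the controller scales the old ReplicaSet to zero before creating any pod from the new template, which is exactly what the watch above verifies (the old agnhost pods are gone before the new httpd pod appears). A minimal sketch with illustrative names:

```sh
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: recreate-demo          # illustrative name
spec:
  replicas: 1
  strategy:
    type: Recreate             # no rollingUpdate block is permitted with this type
  selector:
    matchLabels:
      app: recreate-demo
  template:
    metadata:
      labels:
        app: recreate-demo
    spec:
      containers:
      - name: httpd
        image: registry.k8s.io/e2e-test-images/httpd:2.4.38-4
EOF
# Triggering a rollout terminates the old pod before the new one is created:
kubectl set image deployment/recreate-demo httpd=registry.k8s.io/e2e-test-images/httpd:2.4.39-4
kubectl rollout status deployment/recreate-demo
```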
01/14/23 05:00:33.873
+------------------------------
+• [2.287 seconds]
+[sig-apps] Deployment
+test/e2e/apps/framework.go:23
+  RecreateDeployment should delete old pods and create new ones [Conformance]
+  test/e2e/apps/deployment.go:113
+
+  Begin Captured GinkgoWriter Output >>
+  Jan 14 
05:00:33.868: INFO: Pod "test-recreate-deployment-cff6dc657-96tv9" is not available: + &Pod{ObjectMeta:{test-recreate-deployment-cff6dc657-96tv9 test-recreate-deployment-cff6dc657- deployment-8298 b5eb3d2b-1e88-4f65-846a-78b08e18d093 460913 0 2023-01-14 05:00:33 +0000 UTC map[name:sample-pod-3 pod-template-hash:cff6dc657] map[] [{apps/v1 ReplicaSet test-recreate-deployment-cff6dc657 e3090cb1-6482-4047-8492-8e7bf82205e7 0xc00192e230 0xc00192e231}] [] [{kube-controller-manager Update v1 2023-01-14 05:00:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3090cb1-6482-4047-8492-8e7bf82205e7\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 05:00:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-g5dzq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g5dzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil
,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.106,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 05:00:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 05:00:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 05:00:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 05:00:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.106,PodIP:,StartTime:2023-01-14 05:00:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} + [AfterEach] [sig-apps] Deployment + test/e2e/framework/node/init/init.go:32 + Jan 14 05:00:33.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] Deployment + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] Deployment + tear down framework | framework.go:193 + STEP: Destroying namespace "deployment-8298" for this suite. 
+------------------------------
+SSSSSSSSSSSSSSSS
+------------------------------
+[sig-node] InitContainer [NodeConformance]
+  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
+  test/e2e/common/node/init_container.go:334
+[BeforeEach] [sig-node] InitContainer [NodeConformance]
+  set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 05:00:33.878
+Jan 14 05:00:33.878: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename init-container 01/14/23 05:00:33.879
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 05:00:33.894
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 05:00:33.897
+[BeforeEach] [sig-node] InitContainer [NodeConformance]
+  test/e2e/framework/metrics/init/init.go:31
+[BeforeEach] [sig-node] InitContainer [NodeConformance]
+  test/e2e/common/node/init_container.go:165
+[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
+  test/e2e/common/node/init_container.go:334
+STEP: creating the pod 01/14/23 05:00:33.899
+Jan 14 05:00:33.899: INFO: PodSpec: initContainers in spec.initContainers
+Jan 14 05:01:17.019: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-806e1efb-3180-4949-9ce8-1b4779cc83ca", GenerateName:"", Namespace:"init-container-2227", SelfLink:"", UID:"933880bb-aaa4-4910-afd8-0f86d068aea7", ResourceVersion:"461231", Generation:0, CreationTimestamp:time.Date(2023, time.January, 14, 5, 0, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"899360218"}, Annotations:map[string]string{"tke.cloud.tencent.com/networks-status":"[{\n \"name\": \"tke-bridge\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.52.1.39\"\n ],\n \"mac\": \"5e:4e:b2:82:fd:de\",\n \"default\": true,\n \"dns\": {}\n}]"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.January, 14, 5, 0, 33, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0014551a0), Subresource:""}, v1.ManagedFieldsEntry{Manager:"multus", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.January, 14, 5, 0, 34, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001455200), Subresource:"status"}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:time.Date(2023, time.January, 14, 5, 1, 17, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001455290), Subresource:"status"}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-22tx4", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), 
CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc00020db60), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"registry.k8s.io/e2e-test-images/busybox:1.29-4", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil), Claims:[]v1.ResourceClaim(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-22tx4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"registry.k8s.io/e2e-test-images/busybox:1.29-4", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil), Claims:[]v1.ResourceClaim(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-22tx4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"registry.k8s.io/pause:3.9", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Claims:[]v1.ResourceClaim(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-22tx4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0061701c0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"10.0.1.106", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0043cd9d0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc006170250)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc006170270)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc006170278), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00617027c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc00567b440), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.January, 14, 5, 0, 33, 0, time.Local), Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.January, 14, 5, 0, 33, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.January, 14, 5, 0, 33, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.January, 14, 5, 0, 33, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.0.1.106", PodIP:"10.52.1.39", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.52.1.39"}}, StartTime:time.Date(2023, time.January, 14, 5, 0, 33, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0043cdb20)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0043cdb90)}, Ready:false, RestartCount:3, Image:"registry.k8s.io/e2e-test-images/busybox:1.29-4", ImageID:"registry.k8s.io/e2e-test-images/busybox@sha256:2e0f836850e09b8b7cc937681d6194537a09fbd5f6b9e08f4d646a85128e8937", ContainerID:"containerd://b4408d2d395eb2bec851f8a09ee1a89b2e31090adf92da2cc64a7ff60ebcfbc1", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00020dc40), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/e2e-test-images/busybox:1.29-4", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00020dc20), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/pause:3.9", ImageID:"", ContainerID:"", Started:(*bool)(0xc0061702ff)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} +[AfterEach] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/node/init/init.go:32 +Jan 14 05:01:17.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] InitContainer [NodeConformance] + tear down framework | framework.go:193 +STEP: Destroying namespace "init-container-2227" for this suite. 
01/14/23 05:01:17.025
+------------------------------
+• [SLOW TEST] [43.152 seconds]
+[sig-node] InitContainer [NodeConformance]
+test/e2e/common/node/framework.go:23
+  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
+  test/e2e/common/node/init_container.go:334
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] ConfigMap
+  updates should be reflected in volume [NodeConformance] [Conformance]
+  test/e2e/common/storage/configmap_volume.go:124
+[BeforeEach] [sig-storage] ConfigMap
+  set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 05:01:17.033
+Jan 14 05:01:17.033: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename configmap 01/14/23 05:01:17.034
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 05:01:17.049
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 05:01:17.051
+[BeforeEach] [sig-storage] ConfigMap
+  test/e2e/framework/metrics/init/init.go:31
+[It] updates should be reflected in volume [NodeConformance] [Conformance]
+  test/e2e/common/storage/configmap_volume.go:124
+STEP: Creating configMap with name configmap-test-upd-3ff28a0c-dde6-43f0-85b7-f6533b69ee8e 01/14/23 05:01:17.058
+STEP: Creating the pod 01/14/23 05:01:17.062
+Jan 14 05:01:17.071: INFO: Waiting up to 5m0s for pod "pod-configmaps-e9413260-f985-4841-90d1-0e8768409c5a" in namespace "configmap-2017" to be "running and ready"
+Jan 14 05:01:17.073: INFO: Pod "pod-configmaps-e9413260-f985-4841-90d1-0e8768409c5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.908845ms
+Jan 14 05:01:17.073: INFO: The phase of Pod pod-configmaps-e9413260-f985-4841-90d1-0e8768409c5a is Pending, waiting for it to be Running (with Ready = true)
+Jan 14 05:01:19.079: INFO: Pod "pod-configmaps-e9413260-f985-4841-90d1-0e8768409c5a": Phase="Running", Reason="", readiness=true. Elapsed: 2.008036314s
+Jan 14 05:01:19.079: INFO: The phase of Pod pod-configmaps-e9413260-f985-4841-90d1-0e8768409c5a is Running (Ready = true)
+Jan 14 05:01:19.079: INFO: Pod "pod-configmaps-e9413260-f985-4841-90d1-0e8768409c5a" satisfied condition "running and ready"
+STEP: Updating configmap configmap-test-upd-3ff28a0c-dde6-43f0-85b7-f6533b69ee8e 01/14/23 05:01:19.093
+STEP: waiting to observe update in volume 01/14/23 05:01:19.099
+[AfterEach] [sig-storage] ConfigMap
+  test/e2e/framework/node/init/init.go:32
+Jan 14 05:02:29.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+[DeferCleanup (Each)] [sig-storage] ConfigMap
+  test/e2e/framework/metrics/init/init.go:33
+[DeferCleanup (Each)] [sig-storage] ConfigMap
+  dump namespaces | framework.go:196
+[DeferCleanup (Each)] [sig-storage] ConfigMap
+  tear down framework | framework.go:193
+STEP: Destroying namespace "configmap-2017" for this suite. 
01/14/23 05:02:29.406
+------------------------------
+• [SLOW TEST] [72.378 seconds]
+[sig-storage] ConfigMap
+test/e2e/common/storage/framework.go:23
+  updates should be reflected in volume [NodeConformance] [Conformance]
+  test/e2e/common/storage/configmap_volume.go:124
+------------------------------
+SSSSSS
+------------------------------
+[sig-apps] Deployment
+  should run the lifecycle of a Deployment [Conformance]
+  test/e2e/apps/deployment.go:185
+[BeforeEach] [sig-apps] Deployment
+  set up framework | framework.go:178
+STEP: Creating a kubernetes client 01/14/23 05:02:29.412
+Jan 14 05:02:29.412: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+STEP: Building a namespace api object, basename deployment 01/14/23 05:02:29.413
+STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 05:02:29.428
+STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 05:02:29.431
+[BeforeEach] [sig-apps] Deployment
+  test/e2e/framework/metrics/init/init.go:31
+[BeforeEach] [sig-apps] Deployment
+  test/e2e/apps/deployment.go:91
+[It] should run the lifecycle of a Deployment [Conformance]
+  test/e2e/apps/deployment.go:185
+STEP: creating a Deployment 01/14/23 05:02:29.436
+STEP: waiting for Deployment to be created 01/14/23 05:02:29.442
+STEP: waiting for all Replicas to be Ready 01/14/23 05:02:29.444
+Jan 14 05:02:29.445: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 0 and labels map[test-deployment-static:true]
+Jan 14 05:02:29.445: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 0 and labels map[test-deployment-static:true]
+Jan 14 05:02:29.456: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 0 and labels map[test-deployment-static:true]
+Jan 14 05:02:29.456: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 0 and labels map[test-deployment-static:true]
+Jan 14 05:02:29.468: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 0 and labels map[test-deployment-static:true]
+Jan 14 05:02:29.468: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 0 and labels map[test-deployment-static:true]
+Jan 14 05:02:29.503: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 0 and labels map[test-deployment-static:true]
+Jan 14 05:02:29.504: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 0 and labels map[test-deployment-static:true]
+Jan 14 05:02:30.158: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 1 and labels map[test-deployment-static:true]
+Jan 14 05:02:30.158: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 1 and labels map[test-deployment-static:true]
+Jan 14 05:02:30.953: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 2 and labels map[test-deployment-static:true]
+STEP: patching the Deployment 01/14/23 05:02:30.953
+W0114 05:02:30.963083 25 warnings.go:70] unknown field "spec.template.spec.TerminationGracePeriodSeconds"
+Jan 14 05:02:30.964: INFO: observed event type ADDED
+STEP: waiting for Replicas to scale 01/14/23 05:02:30.964
+Jan 14 05:02:30.966: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 0
+Jan 14 05:02:30.966: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 0
+Jan 14 05:02:30.966: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 0
+Jan 14 05:02:30.966: INFO: observed Deployment 
test-deployment in namespace deployment-7916 with ReadyReplicas 0 +Jan 14 05:02:30.966: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 0 +Jan 14 05:02:30.966: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 0 +Jan 14 05:02:30.966: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 0 +Jan 14 05:02:30.966: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 0 +Jan 14 05:02:30.966: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 1 +Jan 14 05:02:30.966: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 1 +Jan 14 05:02:30.966: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 2 +Jan 14 05:02:30.966: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 2 +Jan 14 05:02:30.966: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 2 +Jan 14 05:02:30.966: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 2 +Jan 14 05:02:30.979: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 2 +Jan 14 05:02:30.979: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 2 +Jan 14 05:02:31.004: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 2 +Jan 14 05:02:31.004: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 2 +Jan 14 05:02:31.014: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 1 +Jan 14 05:02:31.014: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 1 +Jan 14 05:02:31.028: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 1 +Jan 14 05:02:31.028: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 1 +Jan 14 05:02:32.168: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 2 +Jan 14 05:02:32.168: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 2 +Jan 14 05:02:32.198: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 1 +STEP: listing Deployments 01/14/23 05:02:32.198 +Jan 14 05:02:32.202: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true] +STEP: updating the Deployment 01/14/23 05:02:32.202 +Jan 14 05:02:32.214: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 1 +STEP: fetching the DeploymentStatus 01/14/23 05:02:32.214 +Jan 14 05:02:32.222: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Jan 14 05:02:32.227: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Jan 14 05:02:32.247: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Jan 14 05:02:32.263: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 1 and labels map[test-deployment:updated 
test-deployment-static:true] +Jan 14 05:02:32.270: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] +Jan 14 05:02:32.902: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] +Jan 14 05:02:33.177: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] +Jan 14 05:02:33.211: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] +Jan 14 05:02:33.217: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] +Jan 14 05:02:33.968: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] +STEP: patching the DeploymentStatus 01/14/23 05:02:33.99 +STEP: fetching the DeploymentStatus 01/14/23 05:02:33.997 +Jan 14 05:02:34.002: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 1 +Jan 14 05:02:34.002: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 1 +Jan 14 05:02:34.002: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 1 +Jan 14 05:02:34.002: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 1 +Jan 14 05:02:34.002: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 1 +Jan 14 05:02:34.002: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 2 +Jan 14 05:02:34.002: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 3 +Jan 14 05:02:34.002: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 2 +Jan 14 05:02:34.002: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 2 +Jan 14 05:02:34.002: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 3 +STEP: deleting the Deployment 01/14/23 05:02:34.002 +Jan 14 05:02:34.012: INFO: observed event type MODIFIED +Jan 14 05:02:34.013: INFO: observed event type MODIFIED +Jan 14 05:02:34.013: INFO: observed event type MODIFIED +Jan 14 05:02:34.013: INFO: observed event type MODIFIED +Jan 14 05:02:34.013: INFO: observed event type MODIFIED +Jan 14 05:02:34.013: INFO: observed event type MODIFIED +Jan 14 05:02:34.013: INFO: observed event type MODIFIED +Jan 14 05:02:34.013: INFO: observed event type MODIFIED +Jan 14 05:02:34.013: INFO: observed event type MODIFIED +Jan 14 05:02:34.013: INFO: observed event type MODIFIED +Jan 14 05:02:34.013: INFO: observed event type MODIFIED +[AfterEach] [sig-apps] Deployment + test/e2e/apps/deployment.go:84 +Jan 14 05:02:34.016: INFO: Log out all the ReplicaSets if there is no deployment created +Jan 14 05:02:34.020: INFO: ReplicaSet "test-deployment-7b7876f9d6": +&ReplicaSet{ObjectMeta:{test-deployment-7b7876f9d6 deployment-7916 3160ee40-6e5e-4743-80f7-3e564cf79951 461765 2 2023-01-14 05:02:32 +0000 UTC map[pod-template-hash:7b7876f9d6 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 
deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:3] [{apps/v1 Deployment test-deployment 6f776a82-dda9-4dc9-bf82-9d06c0a0e053 0xc00807d427 0xc00807d428}] [] [{kube-controller-manager Update apps/v1 2023-01-14 05:02:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f776a82-dda9-4dc9-bf82-9d06c0a0e053\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 05:02:33 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 7b7876f9d6,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:7b7876f9d6 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00807d4b0 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:2,ReadyReplicas:2,AvailableReplicas:2,Conditions:[]ReplicaSetCondition{},},} + +Jan 14 05:02:34.023: INFO: pod: "test-deployment-7b7876f9d6-hfrw8": +&Pod{ObjectMeta:{test-deployment-7b7876f9d6-hfrw8 test-deployment-7b7876f9d6- deployment-7916 374a770e-379f-4385-b068-3530b8ba7a6b 461729 0 2023-01-14 05:02:32 +0000 UTC map[pod-template-hash:7b7876f9d6 test-deployment-static:true] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.1.43" + ], + "mac": "76:d1:e6:51:e3:ef", + "default": true, + "dns": {} +}]] [{apps/v1 ReplicaSet test-deployment-7b7876f9d6 3160ee40-6e5e-4743-80f7-3e564cf79951 0xc003941927 0xc003941928}] [] [{kube-controller-manager Update v1 2023-01-14 05:02:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3160ee40-6e5e-4743-80f7-3e564cf79951\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {multus Update v1 2023-01-14 05:02:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}} status} {kubelet Update v1 2023-01-14 05:02:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.52.1.43\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-cdkjt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cdkjt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:A
lways,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.106,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 05:02:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 05:02:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 05:02:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 05:02:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.106,PodIP:10.52.1.43,StartTime:2023-01-14 05:02:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 05:02:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:containerd://8680ef8e4ce97c45def355fe1e0bab470df7032431e6c3cdf427dfaadbfaf069,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.52.1.43,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +Jan 14 05:02:34.023: INFO: pod: "test-deployment-7b7876f9d6-mfhkq": +&Pod{ObjectMeta:{test-deployment-7b7876f9d6-mfhkq test-deployment-7b7876f9d6- deployment-7916 96b849e7-66a2-426c-9698-c88e9f43bc75 461764 0 2023-01-14 05:02:33 +0000 UTC map[pod-template-hash:7b7876f9d6 test-deployment-static:true] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.1.103" + ], + "mac": "72:c0:31:32:a9:cd", + "default": true, + "dns": {} +}]] [{apps/v1 ReplicaSet test-deployment-7b7876f9d6 3160ee40-6e5e-4743-80f7-3e564cf79951 0xc003941b57 0xc003941b58}] [] [{kube-controller-manager Update v1 2023-01-14 05:02:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3160ee40-6e5e-4743-80f7-3e564cf79951\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 05:02:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.52.1.103\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status} {multus Update v1 2023-01-14 05:02:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2qkqv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2qkqv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:
Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.99,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 05:02:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 05:02:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 05:02:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 05:02:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.99,PodIP:10.52.1.103,StartTime:2023-01-14 05:02:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 05:02:33 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:containerd://c2db1d0f9344b4bf61fead71114c34d915ffa975ebc4df7fed076ade65e49ad8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.52.1.103,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +Jan 14 05:02:34.023: INFO: ReplicaSet "test-deployment-7df74c55ff": +&ReplicaSet{ObjectMeta:{test-deployment-7df74c55ff deployment-7916 7e7e3dc3-136c-4b7f-ad6d-d63dc2f0c328 461775 4 2023-01-14 05:02:30 +0000 UTC map[pod-template-hash:7df74c55ff test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-deployment 6f776a82-dda9-4dc9-bf82-9d06c0a0e053 0xc00807d517 0xc00807d518}] [] [{kube-controller-manager Update apps/v1 2023-01-14 05:02:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f776a82-dda9-4dc9-bf82-9d06c0a0e053\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 05:02:33 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 7df74c55ff,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:7df74c55ff test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment registry.k8s.io/pause:3.9 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00807d5a0 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:4,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + +Jan 14 05:02:34.029: INFO: pod: "test-deployment-7df74c55ff-8zdcv": +&Pod{ObjectMeta:{test-deployment-7df74c55ff-8zdcv test-deployment-7df74c55ff- deployment-7916 1c8fffb7-d388-453e-a1cd-8139fbee923f 461771 0 2023-01-14 05:02:32 +0000 UTC 2023-01-14 05:02:34 +0000 UTC 0xc003f3cf38 map[pod-template-hash:7df74c55ff test-deployment-static:true] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.0.245" + ], + "mac": "ee:84:36:4c:3f:b0", + "default": true, + "dns": {} +}]] [{apps/v1 ReplicaSet test-deployment-7df74c55ff 7e7e3dc3-136c-4b7f-ad6d-d63dc2f0c328 0xc003f3cf67 0xc003f3cf68}] [] [{kube-controller-manager Update v1 2023-01-14 05:02:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7e7e3dc3-136c-4b7f-ad6d-d63dc2f0c328\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 05:02:32 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.52.0.245\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status} {multus Update v1 2023-01-14 05:02:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zx8c8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:registry.k8s.io/pause:3.9,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zx8c8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.212,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations
:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 05:02:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 05:02:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 05:02:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 05:02:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.212,PodIP:10.52.0.245,StartTime:2023-01-14 05:02:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 05:02:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/pause:3.9,ImageID:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,ContainerID:containerd://92c3cb543b3a9eb86c1b520426c98ca4cd79d94d9ff8134babae29381f65239e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.52.0.245,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +Jan 14 05:02:34.030: INFO: pod: "test-deployment-7df74c55ff-zck77": +&Pod{ObjectMeta:{test-deployment-7df74c55ff-zck77 test-deployment-7df74c55ff- deployment-7916 734e9b72-967d-43f7-bc22-9e8c7afece97 461735 0 2023-01-14 05:02:30 +0000 UTC 2023-01-14 05:02:34 +0000 UTC 0xc003f3d150 map[pod-template-hash:7df74c55ff test-deployment-static:true] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.1.42" + ], + "mac": "46:0c:54:43:ef:2f", + "default": true, + "dns": {} +}]] [{apps/v1 ReplicaSet test-deployment-7df74c55ff 7e7e3dc3-136c-4b7f-ad6d-d63dc2f0c328 0xc003f3d187 0xc003f3d188}] [] [{kube-controller-manager Update v1 2023-01-14 05:02:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7e7e3dc3-136c-4b7f-ad6d-d63dc2f0c328\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {multus Update v1 2023-01-14 05:02:31 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}} status} {kubelet Update v1 2023-01-14 05:02:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.52.1.42\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dhsj5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:registry.k8s.io/pause:3.9,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dhsj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.106,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations
:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 05:02:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 05:02:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 05:02:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 05:02:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.106,PodIP:10.52.1.42,StartTime:2023-01-14 05:02:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 05:02:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/pause:3.9,ImageID:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,ContainerID:containerd://b2bbd55b8120dade2602566b37e7d6980517ac135aaaca924b0f9d57084e0bd6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.52.1.42,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + +Jan 14 05:02:34.030: INFO: ReplicaSet "test-deployment-f4dbc4647": +&ReplicaSet{ObjectMeta:{test-deployment-f4dbc4647 deployment-7916 7fd4fcdd-e8d0-4337-8669-71299a38b29b 461687 3 2023-01-14 05:02:29 +0000 UTC map[pod-template-hash:f4dbc4647 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment 6f776a82-dda9-4dc9-bf82-9d06c0a0e053 0xc00807d607 0xc00807d608}] [] [{kube-controller-manager Update apps/v1 2023-01-14 05:02:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f776a82-dda9-4dc9-bf82-9d06c0a0e053\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 05:02:32 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: f4dbc4647,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:f4dbc4647 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00807d690 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + +[AfterEach] [sig-apps] Deployment + test/e2e/framework/node/init/init.go:32 +Jan 14 05:02:34.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] Deployment + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] Deployment + tear down framework | framework.go:193 +STEP: Destroying namespace "deployment-7916" for this suite. 
01/14/23 05:02:34.039
+------------------------------
+• [4.637 seconds]
+[sig-apps] Deployment
+test/e2e/apps/framework.go:23
+ should run the lifecycle of a Deployment [Conformance]
+ test/e2e/apps/deployment.go:185
+
+ Begin Captured GinkgoWriter Output >>
+ [BeforeEach] [sig-apps] Deployment
+ set up framework | framework.go:178
+ STEP: Creating a kubernetes client 01/14/23 05:02:29.412
+ Jan 14 05:02:29.412: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317
+ STEP: Building a namespace api object, basename deployment 01/14/23 05:02:29.413
+ STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 05:02:29.428
+ STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 05:02:29.431
+ [BeforeEach] [sig-apps] Deployment
+ test/e2e/framework/metrics/init/init.go:31
+ [BeforeEach] [sig-apps] Deployment
+ test/e2e/apps/deployment.go:91
+ [It] should run the lifecycle of a Deployment [Conformance]
+ test/e2e/apps/deployment.go:185
+ STEP: creating a Deployment 01/14/23 05:02:29.436
+ STEP: waiting for Deployment to be created 01/14/23 05:02:29.442
+ STEP: waiting for all Replicas to be Ready 01/14/23 05:02:29.444
+ Jan 14 05:02:29.445: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 0 and labels map[test-deployment-static:true]
+ Jan 14 05:02:29.445: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 0 and labels map[test-deployment-static:true]
+ Jan 14 05:02:29.456: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 0 and labels map[test-deployment-static:true]
+ Jan 14 05:02:29.456: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 0 and labels map[test-deployment-static:true]
+ Jan 14 05:02:29.468: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 0 and labels map[test-deployment-static:true]
+ Jan 14 05:02:29.468: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 0 and labels map[test-deployment-static:true]
+ Jan 14 05:02:29.503: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 0 and labels map[test-deployment-static:true]
+ Jan 14 05:02:29.504: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 0 and labels map[test-deployment-static:true]
+ Jan 14 05:02:30.158: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 1 and labels map[test-deployment-static:true]
+ Jan 14 05:02:30.158: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 1 and labels map[test-deployment-static:true]
+ Jan 14 05:02:30.953: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 2 and labels map[test-deployment-static:true]
+ STEP: patching the Deployment 01/14/23 05:02:30.953
+ W0114 05:02:30.963083 25 warnings.go:70] unknown field "spec.template.spec.TerminationGracePeriodSeconds"
+ Jan 14 05:02:30.964: INFO: observed event type ADDED
+ STEP: waiting for Replicas to scale 01/14/23 05:02:30.964
+ Jan 14 05:02:30.966: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 0
+ Jan 14 05:02:30.966: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 0
+ Jan 14 05:02:30.966: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 0
+ Jan 14 05:02:30.966: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 0
+ Jan 14 05:02:30.966: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 0
+ Jan 14 05:02:30.966: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 0
+ Jan 14 05:02:30.966: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 0
+ Jan 14 05:02:30.966: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 0
+ Jan 14 05:02:30.966: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 1
+ Jan 14 05:02:30.966: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 1
+ Jan 14 05:02:30.966: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 2
+ Jan 14 05:02:30.966: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 2
+ Jan 14 05:02:30.966: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 2
+ Jan 14 05:02:30.966: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 2
+ Jan 14 05:02:30.979: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 2
+ Jan 14 05:02:30.979: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 2
+ Jan 14 05:02:31.004: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 2
+ Jan 14 05:02:31.004: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 2
+ Jan 14 05:02:31.014: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 1
+ Jan 14 05:02:31.014: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 1
+ Jan 14 05:02:31.028: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 1
+ Jan 14 05:02:31.028: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 1
+ Jan 14 05:02:32.168: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 2
+ Jan 14 05:02:32.168: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 2
+ Jan 14 05:02:32.198: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 1
+ STEP: listing Deployments 01/14/23 05:02:32.198
+ Jan 14 05:02:32.202: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true]
+ STEP: updating the Deployment 01/14/23 05:02:32.202
+ Jan 14 05:02:32.214: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 1
+ STEP: fetching the DeploymentStatus 01/14/23 05:02:32.214
+ Jan 14 05:02:32.222: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
+ Jan 14 05:02:32.227: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
+ Jan 14 05:02:32.247: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
+ Jan 14 05:02:32.263: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
+ Jan 14 05:02:32.270: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
+ Jan 14 05:02:32.902: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
+ Jan 14 05:02:33.177: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true]
+ Jan 14 05:02:33.211: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
+ Jan 14 05:02:33.217: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
+ Jan 14 05:02:33.968: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true]
+ STEP: patching the DeploymentStatus 01/14/23 05:02:33.99
+ STEP: fetching the DeploymentStatus 01/14/23 05:02:33.997
+ Jan 14 05:02:34.002: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 1
+ Jan 14 05:02:34.002: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 1
+ Jan 14 05:02:34.002: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 1
+ Jan 14 05:02:34.002: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 1
+ Jan 14 05:02:34.002: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 1
+ Jan 14 05:02:34.002: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 2
+ Jan 14 05:02:34.002: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 3
+ Jan 14 05:02:34.002: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 2
+ Jan 14 05:02:34.002: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 2
+ Jan 14 05:02:34.002: INFO: observed Deployment test-deployment in namespace deployment-7916 with ReadyReplicas 3
+ STEP: deleting the Deployment 01/14/23 05:02:34.002
+ Jan 14 05:02:34.012: INFO: observed event type MODIFIED
+ Jan 14 05:02:34.013: INFO: observed event type MODIFIED
+ Jan 14 05:02:34.013: INFO: observed event type MODIFIED
+ Jan 14 05:02:34.013: INFO: observed event type MODIFIED
+ Jan 14 05:02:34.013: INFO: observed event type MODIFIED
+ Jan 14 05:02:34.013: INFO: observed event type MODIFIED
+ Jan 14 05:02:34.013: INFO: observed event type MODIFIED
+ Jan 14 05:02:34.013: INFO: observed event type MODIFIED
+ Jan 14 05:02:34.013: INFO: observed event type MODIFIED
+ Jan 14 05:02:34.013: INFO: observed event type MODIFIED
+ Jan 14 05:02:34.013: INFO: observed event type MODIFIED
+ [AfterEach] [sig-apps] Deployment
+ test/e2e/apps/deployment.go:84
+ Jan 14 05:02:34.016: INFO: Log out all the ReplicaSets if there is no deployment created
+ Jan 14 05:02:34.020: INFO: ReplicaSet "test-deployment-7b7876f9d6":
+ &ReplicaSet{ObjectMeta:{test-deployment-7b7876f9d6 deployment-7916 3160ee40-6e5e-4743-80f7-3e564cf79951 461765 2 2023-01-14 05:02:32 +0000 UTC
map[pod-template-hash:7b7876f9d6 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:3] [{apps/v1 Deployment test-deployment 6f776a82-dda9-4dc9-bf82-9d06c0a0e053 0xc00807d427 0xc00807d428}] [] [{kube-controller-manager Update apps/v1 2023-01-14 05:02:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f776a82-dda9-4dc9-bf82-9d06c0a0e053\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 05:02:33 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 7b7876f9d6,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:7b7876f9d6 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment registry.k8s.io/e2e-test-images/httpd:2.4.38-4 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00807d4b0 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:2,ReadyReplicas:2,AvailableReplicas:2,Conditions:[]ReplicaSetCondition{},},} + + Jan 14 05:02:34.023: INFO: pod: "test-deployment-7b7876f9d6-hfrw8": + &Pod{ObjectMeta:{test-deployment-7b7876f9d6-hfrw8 test-deployment-7b7876f9d6- deployment-7916 374a770e-379f-4385-b068-3530b8ba7a6b 461729 0 2023-01-14 05:02:32 +0000 UTC map[pod-template-hash:7b7876f9d6 test-deployment-static:true] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.1.43" + ], + "mac": "76:d1:e6:51:e3:ef", + "default": true, + "dns": {} + }]] [{apps/v1 ReplicaSet test-deployment-7b7876f9d6 3160ee40-6e5e-4743-80f7-3e564cf79951 0xc003941927 0xc003941928}] [] [{kube-controller-manager Update v1 2023-01-14 05:02:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3160ee40-6e5e-4743-80f7-3e564cf79951\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {multus Update v1 2023-01-14 05:02:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}} status} {kubelet Update v1 2023-01-14 05:02:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.52.1.43\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-cdkjt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cdkjt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:A
lways,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.106,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 05:02:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 05:02:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 05:02:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 05:02:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.106,PodIP:10.52.1.43,StartTime:2023-01-14 05:02:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 05:02:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:containerd://8680ef8e4ce97c45def355fe1e0bab470df7032431e6c3cdf427dfaadbfaf069,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.52.1.43,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + + Jan 14 05:02:34.023: INFO: pod: "test-deployment-7b7876f9d6-mfhkq": + &Pod{ObjectMeta:{test-deployment-7b7876f9d6-mfhkq test-deployment-7b7876f9d6- deployment-7916 96b849e7-66a2-426c-9698-c88e9f43bc75 461764 0 2023-01-14 05:02:33 +0000 UTC map[pod-template-hash:7b7876f9d6 test-deployment-static:true] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.1.103" + ], + "mac": "72:c0:31:32:a9:cd", + "default": true, + "dns": {} + }]] [{apps/v1 ReplicaSet test-deployment-7b7876f9d6 3160ee40-6e5e-4743-80f7-3e564cf79951 0xc003941b57 0xc003941b58}] [] [{kube-controller-manager Update v1 2023-01-14 05:02:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3160ee40-6e5e-4743-80f7-3e564cf79951\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 05:02:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.52.1.103\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status} {multus Update v1 2023-01-14 05:02:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2qkqv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2qkqv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:
Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.99,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 05:02:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 05:02:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 05:02:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 05:02:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.99,PodIP:10.52.1.103,StartTime:2023-01-14 05:02:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 05:02:33 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-4,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:148b022f5c5da426fc2f3c14b5c0867e58ef05961510c84749ac1fddcb0fef22,ContainerID:containerd://c2db1d0f9344b4bf61fead71114c34d915ffa975ebc4df7fed076ade65e49ad8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.52.1.103,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + + Jan 14 05:02:34.023: INFO: ReplicaSet "test-deployment-7df74c55ff": + &ReplicaSet{ObjectMeta:{test-deployment-7df74c55ff deployment-7916 7e7e3dc3-136c-4b7f-ad6d-d63dc2f0c328 461775 4 2023-01-14 05:02:30 +0000 UTC map[pod-template-hash:7df74c55ff test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-deployment 6f776a82-dda9-4dc9-bf82-9d06c0a0e053 0xc00807d517 0xc00807d518}] [] [{kube-controller-manager Update apps/v1 2023-01-14 05:02:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f776a82-dda9-4dc9-bf82-9d06c0a0e053\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 05:02:33 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 7df74c55ff,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:7df74c55ff test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment registry.k8s.io/pause:3.9 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00807d5a0 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:4,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + + Jan 14 05:02:34.029: INFO: pod: "test-deployment-7df74c55ff-8zdcv": + &Pod{ObjectMeta:{test-deployment-7df74c55ff-8zdcv test-deployment-7df74c55ff- deployment-7916 1c8fffb7-d388-453e-a1cd-8139fbee923f 461771 0 2023-01-14 05:02:32 +0000 UTC 2023-01-14 05:02:34 +0000 UTC 0xc003f3cf38 map[pod-template-hash:7df74c55ff test-deployment-static:true] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.0.245" + ], + "mac": "ee:84:36:4c:3f:b0", + "default": true, + "dns": {} + }]] [{apps/v1 ReplicaSet test-deployment-7df74c55ff 7e7e3dc3-136c-4b7f-ad6d-d63dc2f0c328 0xc003f3cf67 0xc003f3cf68}] [] [{kube-controller-manager Update v1 2023-01-14 05:02:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7e7e3dc3-136c-4b7f-ad6d-d63dc2f0c328\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-14 05:02:32 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.52.0.245\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status} {multus Update v1 2023-01-14 05:02:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zx8c8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:registry.k8s.io/pause:3.9,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zx8c8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.212,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations
:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 05:02:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 05:02:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 05:02:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 05:02:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.212,PodIP:10.52.0.245,StartTime:2023-01-14 05:02:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 05:02:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/pause:3.9,ImageID:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,ContainerID:containerd://92c3cb543b3a9eb86c1b520426c98ca4cd79d94d9ff8134babae29381f65239e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.52.0.245,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + + Jan 14 05:02:34.030: INFO: pod: "test-deployment-7df74c55ff-zck77": + &Pod{ObjectMeta:{test-deployment-7df74c55ff-zck77 test-deployment-7df74c55ff- deployment-7916 734e9b72-967d-43f7-bc22-9e8c7afece97 461735 0 2023-01-14 05:02:30 +0000 UTC 2023-01-14 05:02:34 +0000 UTC 0xc003f3d150 map[pod-template-hash:7df74c55ff test-deployment-static:true] map[tke.cloud.tencent.com/networks-status:[{ + "name": "tke-bridge", + "interface": "eth0", + "ips": [ + "10.52.1.42" + ], + "mac": "46:0c:54:43:ef:2f", + "default": true, + "dns": {} + }]] [{apps/v1 ReplicaSet test-deployment-7df74c55ff 7e7e3dc3-136c-4b7f-ad6d-d63dc2f0c328 0xc003f3d187 0xc003f3d188}] [] [{kube-controller-manager Update v1 2023-01-14 05:02:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7e7e3dc3-136c-4b7f-ad6d-d63dc2f0c328\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {multus Update v1 2023-01-14 05:02:31 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:tke.cloud.tencent.com/networks-status":{}}}} status} {kubelet Update v1 2023-01-14 05:02:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.52.1.42\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dhsj5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:registry.k8s.io/pause:3.9,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dhsj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:10.0.1.106,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations
:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,HostUsers:nil,SchedulingGates:[]PodSchedulingGate{},ResourceClaims:[]PodResourceClaim{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 05:02:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 05:02:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 05:02:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-14 05:02:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.0.1.106,PodIP:10.52.1.42,StartTime:2023-01-14 05:02:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-14 05:02:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/pause:3.9,ImageID:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,ContainerID:containerd://b2bbd55b8120dade2602566b37e7d6980517ac135aaaca924b0f9d57084e0bd6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.52.1.42,},},EphemeralContainerStatuses:[]ContainerStatus{},},} + + Jan 14 05:02:34.030: INFO: ReplicaSet "test-deployment-f4dbc4647": + &ReplicaSet{ObjectMeta:{test-deployment-f4dbc4647 deployment-7916 7fd4fcdd-e8d0-4337-8669-71299a38b29b 461687 3 2023-01-14 05:02:29 +0000 UTC map[pod-template-hash:f4dbc4647 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment 6f776a82-dda9-4dc9-bf82-9d06c0a0e053 0xc00807d607 0xc00807d608}] [] [{kube-controller-manager Update apps/v1 2023-01-14 05:02:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f776a82-dda9-4dc9-bf82-9d06c0a0e053\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-14 05:02:32 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: f4dbc4647,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[pod-template-hash:f4dbc4647 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment registry.k8s.io/e2e-test-images/agnhost:2.43 [] [] [] [] [] {map[] map[] []} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00807d690 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] nil [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} + + [AfterEach] [sig-apps] Deployment + test/e2e/framework/node/init/init.go:32 + Jan 14 05:02:34.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] Deployment + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] Deployment + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] Deployment + tear down framework | framework.go:193 + STEP: Destroying namespace "deployment-7916" for this suite. 
01/14/23 05:02:34.039 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should be able to change the type from ExternalName to ClusterIP [Conformance] + test/e2e/network/service.go:1438 +[BeforeEach] [sig-network] Services + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 05:02:34.049 +Jan 14 05:02:34.049: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename services 01/14/23 05:02:34.05 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 05:02:34.063 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 05:02:34.066 +[BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 +[It] should be able to change the type from ExternalName to ClusterIP [Conformance] + test/e2e/network/service.go:1438 +STEP: creating a service externalname-service with the type=ExternalName in namespace services-3095 01/14/23 05:02:34.068 +STEP: changing the ExternalName service to type=ClusterIP 01/14/23 05:02:34.073 +STEP: creating replication controller externalname-service in namespace services-3095 01/14/23 05:02:34.089 +I0114 05:02:34.094541 25 runners.go:193] Created replication controller with name: externalname-service, namespace: services-3095, replica count: 2 +I0114 05:02:37.146050 25 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Jan 14 05:02:37.146: INFO: Creating new exec pod +Jan 14 05:02:37.156: INFO: Waiting up to 5m0s for pod "execpodczf2m" in namespace "services-3095" to be "running" +Jan 14 05:02:37.160: INFO: Pod "execpodczf2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3.037172ms +Jan 14 05:02:39.164: INFO: Pod "execpodczf2m": Phase="Running", Reason="", readiness=true. Elapsed: 2.007833314s +Jan 14 05:02:39.164: INFO: Pod "execpodczf2m" satisfied condition "running" +Jan 14 05:02:40.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-3095 exec execpodczf2m -- /bin/sh -x -c nc -v -z -w 2 externalname-service 80' +Jan 14 05:02:40.278: INFO: stderr: "+ nc -v -z -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" +Jan 14 05:02:40.278: INFO: stdout: "" +Jan 14 05:02:40.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-3095 exec execpodczf2m -- /bin/sh -x -c nc -v -z -w 2 10.55.252.167 80' +Jan 14 05:02:40.390: INFO: stderr: "+ nc -v -z -w 2 10.55.252.167 80\nConnection to 10.55.252.167 80 port [tcp/http] succeeded!\n" +Jan 14 05:02:40.390: INFO: stdout: "" +Jan 14 05:02:40.390: INFO: Cleaning up the ExternalName to ClusterIP test service +[AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 +Jan 14 05:02:40.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 +STEP: Destroying namespace "services-3095" for this suite. 
01/14/23 05:02:40.41 +------------------------------ +• [SLOW TEST] [6.366 seconds] +[sig-network] Services +test/e2e/network/common/framework.go:23 + should be able to change the type from ExternalName to ClusterIP [Conformance] + test/e2e/network/service.go:1438 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-network] Services + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 05:02:34.049 + Jan 14 05:02:34.049: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename services 01/14/23 05:02:34.05 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 05:02:34.063 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 05:02:34.066 + [BeforeEach] [sig-network] Services + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-network] Services + test/e2e/network/service.go:766 + [It] should be able to change the type from ExternalName to ClusterIP [Conformance] + test/e2e/network/service.go:1438 + STEP: creating a service externalname-service with the type=ExternalName in namespace services-3095 01/14/23 05:02:34.068 + STEP: changing the ExternalName service to type=ClusterIP 01/14/23 05:02:34.073 + STEP: creating replication controller externalname-service in namespace services-3095 01/14/23 05:02:34.089 + I0114 05:02:34.094541 25 runners.go:193] Created replication controller with name: externalname-service, namespace: services-3095, replica count: 2 + I0114 05:02:37.146050 25 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady + Jan 14 05:02:37.146: INFO: Creating new exec pod + Jan 14 05:02:37.156: INFO: Waiting up to 5m0s for pod "execpodczf2m" in namespace "services-3095" to be "running" + Jan 14 05:02:37.160: INFO: Pod "execpodczf2m": Phase="Pending", Reason="", readiness=false. Elapsed: 3.037172ms + Jan 14 05:02:39.164: INFO: Pod "execpodczf2m": Phase="Running", Reason="", readiness=true. Elapsed: 2.007833314s + Jan 14 05:02:39.164: INFO: Pod "execpodczf2m" satisfied condition "running" + Jan 14 05:02:40.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-3095 exec execpodczf2m -- /bin/sh -x -c nc -v -z -w 2 externalname-service 80' + Jan 14 05:02:40.278: INFO: stderr: "+ nc -v -z -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" + Jan 14 05:02:40.278: INFO: stdout: "" + Jan 14 05:02:40.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-1841037317 --namespace=services-3095 exec execpodczf2m -- /bin/sh -x -c nc -v -z -w 2 10.55.252.167 80' + Jan 14 05:02:40.390: INFO: stderr: "+ nc -v -z -w 2 10.55.252.167 80\nConnection to 10.55.252.167 80 port [tcp/http] succeeded!\n" + Jan 14 05:02:40.390: INFO: stdout: "" + Jan 14 05:02:40.390: INFO: Cleaning up the ExternalName to ClusterIP test service + [AfterEach] [sig-network] Services + test/e2e/framework/node/init/init.go:32 + Jan 14 05:02:40.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-network] Services + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-network] Services + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-network] Services + tear down framework | framework.go:193 + STEP: Destroying namespace "services-3095" for this suite. 
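+ The steps above flip a Service from ExternalName to ClusterIP and then probe both the service name and the allocated cluster IP with nc. A minimal client-go sketch of the type change, assuming the namespace and service name from this run; spec.externalName is dropped at the same time, since it has no meaning for a ClusterIP service:
+ package main
+ 
+ import (
+ 	"context"
+ 	"fmt"
+ 
+ 	corev1 "k8s.io/api/core/v1"
+ 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ 	"k8s.io/client-go/kubernetes"
+ 	"k8s.io/client-go/tools/clientcmd"
+ )
+ 
+ func main() {
+ 	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig-1841037317")
+ 	if err != nil {
+ 		panic(err)
+ 	}
+ 	cs, err := kubernetes.NewForConfig(cfg)
+ 	if err != nil {
+ 		panic(err)
+ 	}
+ 	svcs := cs.CoreV1().Services("services-3095")
+ 	svc, err := svcs.Get(context.TODO(), "externalname-service", metav1.GetOptions{})
+ 	if err != nil {
+ 		panic(err)
+ 	}
+ 	// Switch the type and clear externalName, which only applies to ExternalName services.
+ 	svc.Spec.Type = corev1.ServiceTypeClusterIP
+ 	svc.Spec.ExternalName = ""
+ 	updated, err := svcs.Update(context.TODO(), svc, metav1.UpdateOptions{})
+ 	if err != nil {
+ 		panic(err)
+ 	}
+ 	fmt.Println("allocated cluster IP:", updated.Spec.ClusterIP)
+ }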
01/14/23 05:02:40.41 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-apps] ReplicationController + should get and update a ReplicationController scale [Conformance] + test/e2e/apps/rc.go:402 +[BeforeEach] [sig-apps] ReplicationController + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 05:02:40.416 +Jan 14 05:02:40.416: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename replication-controller 01/14/23 05:02:40.417 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 05:02:40.432 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 05:02:40.434 +[BeforeEach] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:57 +[It] should get and update a ReplicationController scale [Conformance] + test/e2e/apps/rc.go:402 +STEP: Creating ReplicationController "e2e-rc-b24fx" 01/14/23 05:02:40.437 +Jan 14 05:02:40.443: INFO: Get Replication Controller "e2e-rc-b24fx" to confirm replicas +Jan 14 05:02:41.447: INFO: Get Replication Controller "e2e-rc-b24fx" to confirm replicas +Jan 14 05:02:41.451: INFO: Found 1 replicas for "e2e-rc-b24fx" replication controller +STEP: Getting scale subresource for ReplicationController "e2e-rc-b24fx" 01/14/23 05:02:41.451 +STEP: Updating a scale subresource 01/14/23 05:02:41.454 +STEP: Verifying replicas where modified for replication controller "e2e-rc-b24fx" 01/14/23 05:02:41.459 +Jan 14 05:02:41.459: INFO: Get Replication Controller "e2e-rc-b24fx" to confirm replicas +Jan 14 05:02:42.462: INFO: Get Replication Controller "e2e-rc-b24fx" to confirm replicas +Jan 14 05:02:42.466: INFO: Found 2 replicas for "e2e-rc-b24fx" replication controller +[AfterEach] [sig-apps] ReplicationController + test/e2e/framework/node/init/init.go:32 +Jan 14 05:02:42.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-apps] ReplicationController + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-apps] ReplicationController + tear down framework | framework.go:193 +STEP: Destroying namespace "replication-controller-6098" for this suite. 
01/14/23 05:02:42.471 +------------------------------ +• [2.060 seconds] +[sig-apps] ReplicationController +test/e2e/apps/framework.go:23 + should get and update a ReplicationController scale [Conformance] + test/e2e/apps/rc.go:402 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-apps] ReplicationController + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 05:02:40.416 + Jan 14 05:02:40.416: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename replication-controller 01/14/23 05:02:40.417 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 05:02:40.432 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 05:02:40.434 + [BeforeEach] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-apps] ReplicationController + test/e2e/apps/rc.go:57 + [It] should get and update a ReplicationController scale [Conformance] + test/e2e/apps/rc.go:402 + STEP: Creating ReplicationController "e2e-rc-b24fx" 01/14/23 05:02:40.437 + Jan 14 05:02:40.443: INFO: Get Replication Controller "e2e-rc-b24fx" to confirm replicas + Jan 14 05:02:41.447: INFO: Get Replication Controller "e2e-rc-b24fx" to confirm replicas + Jan 14 05:02:41.451: INFO: Found 1 replicas for "e2e-rc-b24fx" replication controller + STEP: Getting scale subresource for ReplicationController "e2e-rc-b24fx" 01/14/23 05:02:41.451 + STEP: Updating a scale subresource 01/14/23 05:02:41.454 + STEP: Verifying replicas where modified for replication controller "e2e-rc-b24fx" 01/14/23 05:02:41.459 + Jan 14 05:02:41.459: INFO: Get Replication Controller "e2e-rc-b24fx" to confirm replicas + Jan 14 05:02:42.462: INFO: Get Replication Controller "e2e-rc-b24fx" to confirm replicas + Jan 14 05:02:42.466: INFO: Found 2 replicas for "e2e-rc-b24fx" replication controller + [AfterEach] [sig-apps] ReplicationController + test/e2e/framework/node/init/init.go:32 + Jan 14 05:02:42.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-apps] ReplicationController + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-apps] ReplicationController + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-apps] ReplicationController + tear down framework | framework.go:193 + STEP: Destroying namespace "replication-controller-6098" for this suite. 
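+ The scale read and update above go through the scale subresource rather than the full ReplicationController object. A minimal client-go sketch of the same two calls, assuming the controller name and namespace from this run:
+ package main
+ 
+ import (
+ 	"context"
+ 	"fmt"
+ 
+ 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ 	"k8s.io/client-go/kubernetes"
+ 	"k8s.io/client-go/tools/clientcmd"
+ )
+ 
+ func main() {
+ 	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig-1841037317")
+ 	if err != nil {
+ 		panic(err)
+ 	}
+ 	cs, err := kubernetes.NewForConfig(cfg)
+ 	if err != nil {
+ 		panic(err)
+ 	}
+ 	rcs := cs.CoreV1().ReplicationControllers("replication-controller-6098")
+ 	// Read the scale subresource, not the whole RC object.
+ 	sc, err := rcs.GetScale(context.TODO(), "e2e-rc-b24fx", metav1.GetOptions{})
+ 	if err != nil {
+ 		panic(err)
+ 	}
+ 	sc.Spec.Replicas = 2 // scale 1 -> 2, as the test does
+ 	if _, err := rcs.UpdateScale(context.TODO(), "e2e-rc-b24fx", sc, metav1.UpdateOptions{}); err != nil {
+ 		panic(err)
+ 	}
+ 	fmt.Println("scale updated to 2 replicas")
+ }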
01/14/23 05:02:42.471 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSS +------------------------------ +[sig-instrumentation] Events + should delete a collection of events [Conformance] + test/e2e/instrumentation/core_events.go:175 +[BeforeEach] [sig-instrumentation] Events + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 05:02:42.476 +Jan 14 05:02:42.476: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename events 01/14/23 05:02:42.477 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 05:02:42.49 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 05:02:42.492 +[BeforeEach] [sig-instrumentation] Events + test/e2e/framework/metrics/init/init.go:31 +[It] should delete a collection of events [Conformance] + test/e2e/instrumentation/core_events.go:175 +STEP: Create set of events 01/14/23 05:02:42.495 +Jan 14 05:02:42.504: INFO: created test-event-1 +Jan 14 05:02:42.508: INFO: created test-event-2 +Jan 14 05:02:42.513: INFO: created test-event-3 +STEP: get a list of Events with a label in the current namespace 01/14/23 05:02:42.513 +STEP: delete collection of events 01/14/23 05:02:42.517 +Jan 14 05:02:42.517: INFO: requesting DeleteCollection of events +STEP: check that the list of events matches the requested quantity 01/14/23 05:02:42.537 +Jan 14 05:02:42.537: INFO: requesting list of events to confirm quantity +[AfterEach] [sig-instrumentation] Events + test/e2e/framework/node/init/init.go:32 +Jan 14 05:02:42.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-instrumentation] Events + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-instrumentation] Events + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-instrumentation] Events + tear down framework | framework.go:193 +STEP: Destroying namespace "events-3860" for this suite. 
01/14/23 05:02:42.544 +------------------------------ +• [0.077 seconds] +[sig-instrumentation] Events +test/e2e/instrumentation/common/framework.go:23 + should delete a collection of events [Conformance] + test/e2e/instrumentation/core_events.go:175 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-instrumentation] Events + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 05:02:42.476 + Jan 14 05:02:42.476: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename events 01/14/23 05:02:42.477 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 05:02:42.49 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 05:02:42.492 + [BeforeEach] [sig-instrumentation] Events + test/e2e/framework/metrics/init/init.go:31 + [It] should delete a collection of events [Conformance] + test/e2e/instrumentation/core_events.go:175 + STEP: Create set of events 01/14/23 05:02:42.495 + Jan 14 05:02:42.504: INFO: created test-event-1 + Jan 14 05:02:42.508: INFO: created test-event-2 + Jan 14 05:02:42.513: INFO: created test-event-3 + STEP: get a list of Events with a label in the current namespace 01/14/23 05:02:42.513 + STEP: delete collection of events 01/14/23 05:02:42.517 + Jan 14 05:02:42.517: INFO: requesting DeleteCollection of events + STEP: check that the list of events matches the requested quantity 01/14/23 05:02:42.537 + Jan 14 05:02:42.537: INFO: requesting list of events to confirm quantity + [AfterEach] [sig-instrumentation] Events + test/e2e/framework/node/init/init.go:32 + Jan 14 05:02:42.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-instrumentation] Events + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-instrumentation] Events + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-instrumentation] Events + tear down framework | framework.go:193 + STEP: Destroying namespace "events-3860" for this suite. 
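+ DeleteCollection removes every Event matching the list selector in a single call, which is what the delete step above relies on. A minimal sketch, assuming the namespace from this run; the label selector here is illustrative and stands in for whatever label the suite put on test-event-1..3:
+ package main
+ 
+ import (
+ 	"context"
+ 	"fmt"
+ 
+ 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ 	"k8s.io/client-go/kubernetes"
+ 	"k8s.io/client-go/tools/clientcmd"
+ )
+ 
+ func main() {
+ 	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig-1841037317")
+ 	if err != nil {
+ 		panic(err)
+ 	}
+ 	cs, err := kubernetes.NewForConfig(cfg)
+ 	if err != nil {
+ 		panic(err)
+ 	}
+ 	ev := cs.CoreV1().Events("events-3860")
+ 	// Hypothetical label key/value; match whatever the events were created with.
+ 	sel := metav1.ListOptions{LabelSelector: "testevent-set=true"}
+ 	if err := ev.DeleteCollection(context.TODO(), metav1.DeleteOptions{}, sel); err != nil {
+ 		panic(err)
+ 	}
+ 	// Confirm the collection is gone, mirroring the final list step above.
+ 	left, err := ev.List(context.TODO(), sel)
+ 	if err != nil {
+ 		panic(err)
+ 	}
+ 	fmt.Printf("%d matching events remain\n", len(left.Items))
+ }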
01/14/23 05:02:42.544 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[sig-node] Kubelet when scheduling a busybox command that always fails in a pod + should have an terminated reason [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:110 +[BeforeEach] [sig-node] Kubelet + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 05:02:42.554 +Jan 14 05:02:42.554: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename kubelet-test 01/14/23 05:02:42.555 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 05:02:42.568 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 05:02:42.57 +[BeforeEach] [sig-node] Kubelet + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:41 +[BeforeEach] when scheduling a busybox command that always fails in a pod + test/e2e/common/node/kubelet.go:85 +[It] should have an terminated reason [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:110 +[AfterEach] [sig-node] Kubelet + test/e2e/framework/node/init/init.go:32 +Jan 14 05:02:46.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Kubelet + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Kubelet + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Kubelet + tear down framework | framework.go:193 +STEP: Destroying namespace "kubelet-test-516" for this suite. 01/14/23 05:02:46.595 +------------------------------ +• [4.046 seconds] +[sig-node] Kubelet +test/e2e/common/node/framework.go:23 + when scheduling a busybox command that always fails in a pod + test/e2e/common/node/kubelet.go:82 + should have an terminated reason [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:110 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Kubelet + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 05:02:42.554 + Jan 14 05:02:42.554: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename kubelet-test 01/14/23 05:02:42.555 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 05:02:42.568 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 05:02:42.57 + [BeforeEach] [sig-node] Kubelet + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-node] Kubelet + test/e2e/common/node/kubelet.go:41 + [BeforeEach] when scheduling a busybox command that always fails in a pod + test/e2e/common/node/kubelet.go:85 + [It] should have an terminated reason [NodeConformance] [Conformance] + test/e2e/common/node/kubelet.go:110 + [AfterEach] [sig-node] Kubelet + test/e2e/framework/node/init/init.go:32 + Jan 14 05:02:46.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Kubelet + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Kubelet + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Kubelet + tear down framework | framework.go:193 + STEP: Destroying namespace "kubelet-test-516" for this suite. 
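+ The test asserts that a container whose command always fails ends up with a populated terminated state on its status. A minimal sketch of reading that state; the pod name is hypothetical, since the suite generates a random one:
+ package main
+ 
+ import (
+ 	"context"
+ 	"fmt"
+ 
+ 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ 	"k8s.io/client-go/kubernetes"
+ 	"k8s.io/client-go/tools/clientcmd"
+ )
+ 
+ func main() {
+ 	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig-1841037317")
+ 	if err != nil {
+ 		panic(err)
+ 	}
+ 	cs, err := kubernetes.NewForConfig(cfg)
+ 	if err != nil {
+ 		panic(err)
+ 	}
+ 	// Hypothetical pod name standing in for the randomly generated one.
+ 	pod, err := cs.CoreV1().Pods("kubelet-test-516").Get(context.TODO(), "bin-false-example", metav1.GetOptions{})
+ 	if err != nil {
+ 		panic(err)
+ 	}
+ 	// A container that exited carries a non-nil Terminated state with a reason.
+ 	for _, st := range pod.Status.ContainerStatuses {
+ 		if t := st.State.Terminated; t != nil {
+ 			fmt.Printf("%s: reason=%q exitCode=%d\n", st.Name, t.Reason, t.ExitCode)
+ 		}
+ 	}
+ }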
01/14/23 05:02:46.595 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSS +------------------------------ +[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook + should execute prestop http hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:212 +[BeforeEach] [sig-node] Container Lifecycle Hook + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 05:02:46.601 +Jan 14 05:02:46.601: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename container-lifecycle-hook 01/14/23 05:02:46.602 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 05:02:46.617 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 05:02:46.621 +[BeforeEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:77 +STEP: create the container to handle the HTTPGet hook request. 01/14/23 05:02:46.628 +Jan 14 05:02:46.637: INFO: Waiting up to 5m0s for pod "pod-handle-http-request" in namespace "container-lifecycle-hook-322" to be "running and ready" +Jan 14 05:02:46.640: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 2.855725ms +Jan 14 05:02:46.640: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) +Jan 14 05:02:48.645: INFO: Pod "pod-handle-http-request": Phase="Running", Reason="", readiness=true. Elapsed: 2.00839288s +Jan 14 05:02:48.645: INFO: The phase of Pod pod-handle-http-request is Running (Ready = true) +Jan 14 05:02:48.645: INFO: Pod "pod-handle-http-request" satisfied condition "running and ready" +[It] should execute prestop http hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:212 +STEP: create the pod with lifecycle hook 01/14/23 05:02:48.649 +Jan 14 05:02:48.655: INFO: Waiting up to 5m0s for pod "pod-with-prestop-http-hook" in namespace "container-lifecycle-hook-322" to be "running and ready" +Jan 14 05:02:48.659: INFO: Pod "pod-with-prestop-http-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 3.12733ms +Jan 14 05:02:48.659: INFO: The phase of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) +Jan 14 05:02:50.663: INFO: Pod "pod-with-prestop-http-hook": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008001027s +Jan 14 05:02:50.664: INFO: The phase of Pod pod-with-prestop-http-hook is Running (Ready = true) +Jan 14 05:02:50.664: INFO: Pod "pod-with-prestop-http-hook" satisfied condition "running and ready" +STEP: delete the pod with lifecycle hook 01/14/23 05:02:50.667 +Jan 14 05:02:50.678: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Jan 14 05:02:50.681: INFO: Pod pod-with-prestop-http-hook still exists +Jan 14 05:02:52.682: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Jan 14 05:02:52.686: INFO: Pod pod-with-prestop-http-hook no longer exists +STEP: check prestop hook 01/14/23 05:02:52.686 +[AfterEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/node/init/init.go:32 +Jan 14 05:02:52.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + tear down framework | framework.go:193 +STEP: Destroying namespace "container-lifecycle-hook-322" for this suite. 01/14/23 05:02:52.697 +------------------------------ +• [SLOW TEST] [6.102 seconds] +[sig-node] Container Lifecycle Hook +test/e2e/common/node/framework.go:23 + when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:46 + should execute prestop http hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:212 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Container Lifecycle Hook + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 05:02:46.601 + Jan 14 05:02:46.601: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename container-lifecycle-hook 01/14/23 05:02:46.602 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 05:02:46.617 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 05:02:46.621 + [BeforeEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] when create a pod with lifecycle hook + test/e2e/common/node/lifecycle_hook.go:77 + STEP: create the container to handle the HTTPGet hook request. 01/14/23 05:02:46.628 + Jan 14 05:02:46.637: INFO: Waiting up to 5m0s for pod "pod-handle-http-request" in namespace "container-lifecycle-hook-322" to be "running and ready" + Jan 14 05:02:46.640: INFO: Pod "pod-handle-http-request": Phase="Pending", Reason="", readiness=false. Elapsed: 2.855725ms + Jan 14 05:02:46.640: INFO: The phase of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) + Jan 14 05:02:48.645: INFO: Pod "pod-handle-http-request": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.00839288s + Jan 14 05:02:48.645: INFO: The phase of Pod pod-handle-http-request is Running (Ready = true) + Jan 14 05:02:48.645: INFO: Pod "pod-handle-http-request" satisfied condition "running and ready" + [It] should execute prestop http hook properly [NodeConformance] [Conformance] + test/e2e/common/node/lifecycle_hook.go:212 + STEP: create the pod with lifecycle hook 01/14/23 05:02:48.649 + Jan 14 05:02:48.655: INFO: Waiting up to 5m0s for pod "pod-with-prestop-http-hook" in namespace "container-lifecycle-hook-322" to be "running and ready" + Jan 14 05:02:48.659: INFO: Pod "pod-with-prestop-http-hook": Phase="Pending", Reason="", readiness=false. Elapsed: 3.12733ms + Jan 14 05:02:48.659: INFO: The phase of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) + Jan 14 05:02:50.663: INFO: Pod "pod-with-prestop-http-hook": Phase="Running", Reason="", readiness=true. Elapsed: 2.008001027s + Jan 14 05:02:50.664: INFO: The phase of Pod pod-with-prestop-http-hook is Running (Ready = true) + Jan 14 05:02:50.664: INFO: Pod "pod-with-prestop-http-hook" satisfied condition "running and ready" + STEP: delete the pod with lifecycle hook 01/14/23 05:02:50.667 + Jan 14 05:02:50.678: INFO: Waiting for pod pod-with-prestop-http-hook to disappear + Jan 14 05:02:50.681: INFO: Pod pod-with-prestop-http-hook still exists + Jan 14 05:02:52.682: INFO: Waiting for pod pod-with-prestop-http-hook to disappear + Jan 14 05:02:52.686: INFO: Pod pod-with-prestop-http-hook no longer exists + STEP: check prestop hook 01/14/23 05:02:52.686 + [AfterEach] [sig-node] Container Lifecycle Hook + test/e2e/framework/node/init/init.go:32 + Jan 14 05:02:52.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Container Lifecycle Hook + tear down framework | framework.go:193 + STEP: Destroying namespace "container-lifecycle-hook-322" for this suite. 01/14/23 05:02:52.697 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-api-machinery] ResourceQuota + should verify ResourceQuota with best effort scope. [Conformance] + test/e2e/apimachinery/resource_quota.go:803 +[BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 05:02:52.704 +Jan 14 05:02:52.704: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename resourcequota 01/14/23 05:02:52.705 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 05:02:52.718 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 05:02:52.72 +[BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 +[It] should verify ResourceQuota with best effort scope. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:803 +STEP: Creating a ResourceQuota with best effort scope 01/14/23 05:02:52.722 +STEP: Ensuring ResourceQuota status is calculated 01/14/23 05:02:52.729 +STEP: Creating a ResourceQuota with not best effort scope 01/14/23 05:02:54.733 +STEP: Ensuring ResourceQuota status is calculated 01/14/23 05:02:54.738 +STEP: Creating a best-effort pod 01/14/23 05:02:56.741 +STEP: Ensuring resource quota with best effort scope captures the pod usage 01/14/23 05:02:56.754 +STEP: Ensuring resource quota with not best effort ignored the pod usage 01/14/23 05:02:58.757 +STEP: Deleting the pod 01/14/23 05:03:00.762 +STEP: Ensuring resource quota status released the pod usage 01/14/23 05:03:00.776 +STEP: Creating a not best-effort pod 01/14/23 05:03:02.78 +STEP: Ensuring resource quota with not best effort scope captures the pod usage 01/14/23 05:03:02.793 +STEP: Ensuring resource quota with best effort scope ignored the pod usage 01/14/23 05:03:04.797 +STEP: Deleting the pod 01/14/23 05:03:06.801 +STEP: Ensuring resource quota status released the pod usage 01/14/23 05:03:06.821 +[AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/node/init/init.go:32 +Jan 14 05:03:08.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + tear down framework | framework.go:193 +STEP: Destroying namespace "resourcequota-5725" for this suite. 01/14/23 05:03:08.83 +------------------------------ +• [SLOW TEST] [16.132 seconds] +[sig-api-machinery] ResourceQuota +test/e2e/apimachinery/framework.go:23 + should verify ResourceQuota with best effort scope. [Conformance] + test/e2e/apimachinery/resource_quota.go:803 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] ResourceQuota + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 05:02:52.704 + Jan 14 05:02:52.704: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename resourcequota 01/14/23 05:02:52.705 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 05:02:52.718 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 05:02:52.72 + [BeforeEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:31 + [It] should verify ResourceQuota with best effort scope. 
[Conformance] + test/e2e/apimachinery/resource_quota.go:803 + STEP: Creating a ResourceQuota with best effort scope 01/14/23 05:02:52.722 + STEP: Ensuring ResourceQuota status is calculated 01/14/23 05:02:52.729 + STEP: Creating a ResourceQuota with not best effort scope 01/14/23 05:02:54.733 + STEP: Ensuring ResourceQuota status is calculated 01/14/23 05:02:54.738 + STEP: Creating a best-effort pod 01/14/23 05:02:56.741 + STEP: Ensuring resource quota with best effort scope captures the pod usage 01/14/23 05:02:56.754 + STEP: Ensuring resource quota with not best effort ignored the pod usage 01/14/23 05:02:58.757 + STEP: Deleting the pod 01/14/23 05:03:00.762 + STEP: Ensuring resource quota status released the pod usage 01/14/23 05:03:00.776 + STEP: Creating a not best-effort pod 01/14/23 05:03:02.78 + STEP: Ensuring resource quota with not best effort scope captures the pod usage 01/14/23 05:03:02.793 + STEP: Ensuring resource quota with best effort scope ignored the pod usage 01/14/23 05:03:04.797 + STEP: Deleting the pod 01/14/23 05:03:06.801 + STEP: Ensuring resource quota status released the pod usage 01/14/23 05:03:06.821 + [AfterEach] [sig-api-machinery] ResourceQuota + test/e2e/framework/node/init/init.go:32 + Jan 14 05:03:08.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] ResourceQuota + tear down framework | framework.go:193 + STEP: Destroying namespace "resourcequota-5725" for this suite. 01/14/23 05:03:08.83 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should ensure that all pods are removed when a namespace is deleted [Conformance] + test/e2e/apimachinery/namespace.go:243 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 05:03:08.836 +Jan 14 05:03:08.837: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename namespaces 01/14/23 05:03:08.837 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 05:03:08.851 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 05:03:08.854 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/metrics/init/init.go:31 +[It] should ensure that all pods are removed when a namespace is deleted [Conformance] + test/e2e/apimachinery/namespace.go:243 +STEP: Creating a test namespace 01/14/23 05:03:08.856 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 05:03:08.87 +STEP: Creating a pod in the namespace 01/14/23 05:03:08.872 +STEP: Waiting for the pod to have running status 01/14/23 05:03:08.881 +Jan 14 05:03:08.881: INFO: Waiting up to 5m0s for pod "test-pod" in namespace "nsdeletetest-1146" to be "running" +Jan 14 05:03:08.885: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 3.414393ms +Jan 14 05:03:10.890: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.008197335s +Jan 14 05:03:10.890: INFO: Pod "test-pod" satisfied condition "running" +STEP: Deleting the namespace 01/14/23 05:03:10.89 +STEP: Waiting for the namespace to be removed. 
01/14/23 05:03:10.895 +STEP: Recreating the namespace 01/14/23 05:03:21.899 +STEP: Verifying there are no pods in the namespace 01/14/23 05:03:21.915 +[AfterEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/node/init/init.go:32 +Jan 14 05:03:21.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + tear down framework | framework.go:193 +STEP: Destroying namespace "namespaces-7921" for this suite. 01/14/23 05:03:21.923 +STEP: Destroying namespace "nsdeletetest-1146" for this suite. 01/14/23 05:03:21.929 +Jan 14 05:03:21.931: INFO: Namespace nsdeletetest-1146 was already deleted +STEP: Destroying namespace "nsdeletetest-4554" for this suite. 01/14/23 05:03:21.931 +------------------------------ +• [SLOW TEST] [13.099 seconds] +[sig-api-machinery] Namespaces [Serial] +test/e2e/apimachinery/framework.go:23 + should ensure that all pods are removed when a namespace is deleted [Conformance] + test/e2e/apimachinery/namespace.go:243 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Namespaces [Serial] + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 05:03:08.836 + Jan 14 05:03:08.837: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename namespaces 01/14/23 05:03:08.837 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 05:03:08.851 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 05:03:08.854 + [BeforeEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/metrics/init/init.go:31 + [It] should ensure that all pods are removed when a namespace is deleted [Conformance] + test/e2e/apimachinery/namespace.go:243 + STEP: Creating a test namespace 01/14/23 05:03:08.856 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 05:03:08.87 + STEP: Creating a pod in the namespace 01/14/23 05:03:08.872 + STEP: Waiting for the pod to have running status 01/14/23 05:03:08.881 + Jan 14 05:03:08.881: INFO: Waiting up to 5m0s for pod "test-pod" in namespace "nsdeletetest-1146" to be "running" + Jan 14 05:03:08.885: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 3.414393ms + Jan 14 05:03:10.890: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.008197335s + Jan 14 05:03:10.890: INFO: Pod "test-pod" satisfied condition "running" + STEP: Deleting the namespace 01/14/23 05:03:10.89 + STEP: Waiting for the namespace to be removed. 01/14/23 05:03:10.895 + STEP: Recreating the namespace 01/14/23 05:03:21.899 + STEP: Verifying there are no pods in the namespace 01/14/23 05:03:21.915 + [AfterEach] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/node/init/init.go:32 + Jan 14 05:03:21.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] Namespaces [Serial] + tear down framework | framework.go:193 + STEP: Destroying namespace "namespaces-7921" for this suite. 
01/14/23 05:03:21.923 + STEP: Destroying namespace "nsdeletetest-1146" for this suite. 01/14/23 05:03:21.929 + Jan 14 05:03:21.931: INFO: Namespace nsdeletetest-1146 was already deleted + STEP: Destroying namespace "nsdeletetest-4554" for this suite. 01/14/23 05:03:21.931 + << End Captured GinkgoWriter Output +------------------------------ +SSSSS +------------------------------ +[sig-node] PodTemplates + should run the lifecycle of PodTemplates [Conformance] + test/e2e/common/node/podtemplates.go:53 +[BeforeEach] [sig-node] PodTemplates + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 05:03:21.936 +Jan 14 05:03:21.936: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename podtemplate 01/14/23 05:03:21.937 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 05:03:21.95 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 05:03:21.952 +[BeforeEach] [sig-node] PodTemplates + test/e2e/framework/metrics/init/init.go:31 +[It] should run the lifecycle of PodTemplates [Conformance] + test/e2e/common/node/podtemplates.go:53 +[AfterEach] [sig-node] PodTemplates + test/e2e/framework/node/init/init.go:32 +Jan 14 05:03:21.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] PodTemplates + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] PodTemplates + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] PodTemplates + tear down framework | framework.go:193 +STEP: Destroying namespace "podtemplate-7465" for this suite. 01/14/23 05:03:21.982 +------------------------------ +• [0.050 seconds] +[sig-node] PodTemplates +test/e2e/common/node/framework.go:23 + should run the lifecycle of PodTemplates [Conformance] + test/e2e/common/node/podtemplates.go:53 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] PodTemplates + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 05:03:21.936 + Jan 14 05:03:21.936: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename podtemplate 01/14/23 05:03:21.937 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 05:03:21.95 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 05:03:21.952 + [BeforeEach] [sig-node] PodTemplates + test/e2e/framework/metrics/init/init.go:31 + [It] should run the lifecycle of PodTemplates [Conformance] + test/e2e/common/node/podtemplates.go:53 + [AfterEach] [sig-node] PodTemplates + test/e2e/framework/node/init/init.go:32 + Jan 14 05:03:21.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] PodTemplates + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] PodTemplates + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] PodTemplates + tear down framework | framework.go:193 + STEP: Destroying namespace "podtemplate-7465" for this suite. 
01/14/23 05:03:21.982 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should mount an API token into pods [Conformance] + test/e2e/auth/service_accounts.go:78 +[BeforeEach] [sig-auth] ServiceAccounts + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 05:03:21.989 +Jan 14 05:03:21.989: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename svcaccounts 01/14/23 05:03:21.989 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 05:03:22.004 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 05:03:22.007 +[BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:31 +[It] should mount an API token into pods [Conformance] + test/e2e/auth/service_accounts.go:78 +Jan 14 05:03:22.022: INFO: Waiting up to 5m0s for pod "pod-service-account-2f509cf0-4cc4-481a-b7ec-5ebbde4a0e5c" in namespace "svcaccounts-8463" to be "running" +Jan 14 05:03:22.025: INFO: Pod "pod-service-account-2f509cf0-4cc4-481a-b7ec-5ebbde4a0e5c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.05933ms +Jan 14 05:03:24.030: INFO: Pod "pod-service-account-2f509cf0-4cc4-481a-b7ec-5ebbde4a0e5c": Phase="Running", Reason="", readiness=true. Elapsed: 2.008413003s +Jan 14 05:03:24.030: INFO: Pod "pod-service-account-2f509cf0-4cc4-481a-b7ec-5ebbde4a0e5c" satisfied condition "running" +STEP: reading a file in the container 01/14/23 05:03:24.03 +Jan 14 05:03:24.031: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8463 pod-service-account-2f509cf0-4cc4-481a-b7ec-5ebbde4a0e5c -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' +STEP: reading a file in the container 01/14/23 05:03:24.142 +Jan 14 05:03:24.142: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8463 pod-service-account-2f509cf0-4cc4-481a-b7ec-5ebbde4a0e5c -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' +STEP: reading a file in the container 01/14/23 05:03:24.259 +Jan 14 05:03:24.259: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8463 pod-service-account-2f509cf0-4cc4-481a-b7ec-5ebbde4a0e5c -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' +Jan 14 05:03:24.370: INFO: Got root ca configmap in namespace "svcaccounts-8463" +[AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/node/init/init.go:32 +Jan 14 05:03:24.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-auth] ServiceAccounts + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-auth] ServiceAccounts + tear down framework | framework.go:193 +STEP: Destroying namespace "svcaccounts-8463" for this suite. 
01/14/23 05:03:24.378 +------------------------------ +• [2.395 seconds] +[sig-auth] ServiceAccounts +test/e2e/auth/framework.go:23 + should mount an API token into pods [Conformance] + test/e2e/auth/service_accounts.go:78 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-auth] ServiceAccounts + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 05:03:21.989 + Jan 14 05:03:21.989: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename svcaccounts 01/14/23 05:03:21.989 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 05:03:22.004 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 05:03:22.007 + [BeforeEach] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:31 + [It] should mount an API token into pods [Conformance] + test/e2e/auth/service_accounts.go:78 + Jan 14 05:03:22.022: INFO: Waiting up to 5m0s for pod "pod-service-account-2f509cf0-4cc4-481a-b7ec-5ebbde4a0e5c" in namespace "svcaccounts-8463" to be "running" + Jan 14 05:03:22.025: INFO: Pod "pod-service-account-2f509cf0-4cc4-481a-b7ec-5ebbde4a0e5c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.05933ms + Jan 14 05:03:24.030: INFO: Pod "pod-service-account-2f509cf0-4cc4-481a-b7ec-5ebbde4a0e5c": Phase="Running", Reason="", readiness=true. Elapsed: 2.008413003s + Jan 14 05:03:24.030: INFO: Pod "pod-service-account-2f509cf0-4cc4-481a-b7ec-5ebbde4a0e5c" satisfied condition "running" + STEP: reading a file in the container 01/14/23 05:03:24.03 + Jan 14 05:03:24.031: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8463 pod-service-account-2f509cf0-4cc4-481a-b7ec-5ebbde4a0e5c -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' + STEP: reading a file in the container 01/14/23 05:03:24.142 + Jan 14 05:03:24.142: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8463 pod-service-account-2f509cf0-4cc4-481a-b7ec-5ebbde4a0e5c -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' + STEP: reading a file in the container 01/14/23 05:03:24.259 + Jan 14 05:03:24.259: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8463 pod-service-account-2f509cf0-4cc4-481a-b7ec-5ebbde4a0e5c -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' + Jan 14 05:03:24.370: INFO: Got root ca configmap in namespace "svcaccounts-8463" + [AfterEach] [sig-auth] ServiceAccounts + test/e2e/framework/node/init/init.go:32 + Jan 14 05:03:24.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-auth] ServiceAccounts + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-auth] ServiceAccounts + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-auth] ServiceAccounts + tear down framework | framework.go:193 + STEP: Destroying namespace "svcaccounts-8463" for this suite. 
01/14/23 05:03:24.378 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Servers with support for Table transformation + should return a 406 for a backend which does not implement metadata [Conformance] + test/e2e/apimachinery/table_conversion.go:154 +[BeforeEach] [sig-api-machinery] Servers with support for Table transformation + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 05:03:24.385 +Jan 14 05:03:24.385: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename tables 01/14/23 05:03:24.386 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 05:03:24.399 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 05:03:24.401 +[BeforeEach] [sig-api-machinery] Servers with support for Table transformation + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-api-machinery] Servers with support for Table transformation + test/e2e/apimachinery/table_conversion.go:49 +[It] should return a 406 for a backend which does not implement metadata [Conformance] + test/e2e/apimachinery/table_conversion.go:154 +[AfterEach] [sig-api-machinery] Servers with support for Table transformation + test/e2e/framework/node/init/init.go:32 +Jan 14 05:03:24.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-api-machinery] Servers with support for Table transformation + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-api-machinery] Servers with support for Table transformation + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-api-machinery] Servers with support for Table transformation + tear down framework | framework.go:193 +STEP: Destroying namespace "tables-4803" for this suite. 
01/14/23 05:03:24.41 +------------------------------ +• [0.030 seconds] +[sig-api-machinery] Servers with support for Table transformation +test/e2e/apimachinery/framework.go:23 + should return a 406 for a backend which does not implement metadata [Conformance] + test/e2e/apimachinery/table_conversion.go:154 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-api-machinery] Servers with support for Table transformation + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 05:03:24.385 + Jan 14 05:03:24.385: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename tables 01/14/23 05:03:24.386 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 05:03:24.399 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 05:03:24.401 + [BeforeEach] [sig-api-machinery] Servers with support for Table transformation + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-api-machinery] Servers with support for Table transformation + test/e2e/apimachinery/table_conversion.go:49 + [It] should return a 406 for a backend which does not implement metadata [Conformance] + test/e2e/apimachinery/table_conversion.go:154 + [AfterEach] [sig-api-machinery] Servers with support for Table transformation + test/e2e/framework/node/init/init.go:32 + Jan 14 05:03:24.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-api-machinery] Servers with support for Table transformation + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-api-machinery] Servers with support for Table transformation + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-api-machinery] Servers with support for Table transformation + tear down framework | framework.go:193 + STEP: Destroying namespace "tables-4803" for this suite. 01/14/23 05:03:24.41 + << End Captured GinkgoWriter Output +------------------------------ +[sig-node] Downward API + should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:44 +[BeforeEach] [sig-node] Downward API + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 05:03:24.415 +Jan 14 05:03:24.415: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename downward-api 01/14/23 05:03:24.416 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 05:03:24.431 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 05:03:24.433 +[BeforeEach] [sig-node] Downward API + test/e2e/framework/metrics/init/init.go:31 +[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:44 +STEP: Creating a pod to test downward api env vars 01/14/23 05:03:24.436 +Jan 14 05:03:24.445: INFO: Waiting up to 5m0s for pod "downward-api-4ebd41d0-2e23-4174-a871-d699bdc3e1c9" in namespace "downward-api-2701" to be "Succeeded or Failed" +Jan 14 05:03:24.448: INFO: Pod "downward-api-4ebd41d0-2e23-4174-a871-d699bdc3e1c9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.131128ms +Jan 14 05:03:26.453: INFO: Pod "downward-api-4ebd41d0-2e23-4174-a871-d699bdc3e1c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008135895s +Jan 14 05:03:28.454: INFO: Pod "downward-api-4ebd41d0-2e23-4174-a871-d699bdc3e1c9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009492199s +STEP: Saw pod success 01/14/23 05:03:28.454 +Jan 14 05:03:28.455: INFO: Pod "downward-api-4ebd41d0-2e23-4174-a871-d699bdc3e1c9" satisfied condition "Succeeded or Failed" +Jan 14 05:03:28.458: INFO: Trying to get logs from node 10.0.1.106 pod downward-api-4ebd41d0-2e23-4174-a871-d699bdc3e1c9 container dapi-container: +STEP: delete the pod 01/14/23 05:03:28.465 +Jan 14 05:03:28.496: INFO: Waiting for pod downward-api-4ebd41d0-2e23-4174-a871-d699bdc3e1c9 to disappear +Jan 14 05:03:28.500: INFO: Pod downward-api-4ebd41d0-2e23-4174-a871-d699bdc3e1c9 no longer exists +[AfterEach] [sig-node] Downward API + test/e2e/framework/node/init/init.go:32 +Jan 14 05:03:28.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-node] Downward API + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-node] Downward API + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-node] Downward API + tear down framework | framework.go:193 +STEP: Destroying namespace "downward-api-2701" for this suite. 01/14/23 05:03:28.506 +------------------------------ +• [4.098 seconds] +[sig-node] Downward API +test/e2e/common/node/framework.go:23 + should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:44 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-node] Downward API + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 05:03:24.415 + Jan 14 05:03:24.415: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename downward-api 01/14/23 05:03:24.416 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 05:03:24.431 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 05:03:24.433 + [BeforeEach] [sig-node] Downward API + test/e2e/framework/metrics/init/init.go:31 + [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] + test/e2e/common/node/downwardapi.go:44 + STEP: Creating a pod to test downward api env vars 01/14/23 05:03:24.436 + Jan 14 05:03:24.445: INFO: Waiting up to 5m0s for pod "downward-api-4ebd41d0-2e23-4174-a871-d699bdc3e1c9" in namespace "downward-api-2701" to be "Succeeded or Failed" + Jan 14 05:03:24.448: INFO: Pod "downward-api-4ebd41d0-2e23-4174-a871-d699bdc3e1c9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.131128ms + Jan 14 05:03:26.453: INFO: Pod "downward-api-4ebd41d0-2e23-4174-a871-d699bdc3e1c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008135895s + Jan 14 05:03:28.454: INFO: Pod "downward-api-4ebd41d0-2e23-4174-a871-d699bdc3e1c9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009492199s + STEP: Saw pod success 01/14/23 05:03:28.454 + Jan 14 05:03:28.455: INFO: Pod "downward-api-4ebd41d0-2e23-4174-a871-d699bdc3e1c9" satisfied condition "Succeeded or Failed" + Jan 14 05:03:28.458: INFO: Trying to get logs from node 10.0.1.106 pod downward-api-4ebd41d0-2e23-4174-a871-d699bdc3e1c9 container dapi-container: + STEP: delete the pod 01/14/23 05:03:28.465 + Jan 14 05:03:28.496: INFO: Waiting for pod downward-api-4ebd41d0-2e23-4174-a871-d699bdc3e1c9 to disappear + Jan 14 05:03:28.500: INFO: Pod downward-api-4ebd41d0-2e23-4174-a871-d699bdc3e1c9 no longer exists + [AfterEach] [sig-node] Downward API + test/e2e/framework/node/init/init.go:32 + Jan 14 05:03:28.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-node] Downward API + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-node] Downward API + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-node] Downward API + tear down framework | framework.go:193 + STEP: Destroying namespace "downward-api-2701" for this suite. 01/14/23 05:03:28.506 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's memory request [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:235 +[BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 +STEP: Creating a kubernetes client 01/14/23 05:03:28.522 +Jan 14 05:03:28.523: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 +STEP: Building a namespace api object, basename projected 01/14/23 05:03:28.523 +STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 05:03:28.585 +STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 05:03:28.589 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:31 +[BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 +[It] should provide container's memory request [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:235 +STEP: Creating a pod to test downward API volume plugin 01/14/23 05:03:28.592 +Jan 14 05:03:28.634: INFO: Waiting up to 5m0s for pod "downwardapi-volume-915b292d-a0ae-48f8-9b70-abc770d7d23f" in namespace "projected-5904" to be "Succeeded or Failed" +Jan 14 05:03:28.638: INFO: Pod "downwardapi-volume-915b292d-a0ae-48f8-9b70-abc770d7d23f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.404607ms +Jan 14 05:03:30.643: INFO: Pod "downwardapi-volume-915b292d-a0ae-48f8-9b70-abc770d7d23f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008741454s +Jan 14 05:03:32.643: INFO: Pod "downwardapi-volume-915b292d-a0ae-48f8-9b70-abc770d7d23f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008595293s +STEP: Saw pod success 01/14/23 05:03:32.643 +Jan 14 05:03:32.643: INFO: Pod "downwardapi-volume-915b292d-a0ae-48f8-9b70-abc770d7d23f" satisfied condition "Succeeded or Failed" +Jan 14 05:03:32.647: INFO: Trying to get logs from node 10.0.1.106 pod downwardapi-volume-915b292d-a0ae-48f8-9b70-abc770d7d23f container client-container: +STEP: delete the pod 01/14/23 05:03:32.652 +Jan 14 05:03:32.666: INFO: Waiting for pod downwardapi-volume-915b292d-a0ae-48f8-9b70-abc770d7d23f to disappear +Jan 14 05:03:32.669: INFO: Pod downwardapi-volume-915b292d-a0ae-48f8-9b70-abc770d7d23f no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/node/init/init.go:32 +Jan 14 05:03:32.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:33 +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + dump namespaces | framework.go:196 +[DeferCleanup (Each)] [sig-storage] Projected downwardAPI + tear down framework | framework.go:193 +STEP: Destroying namespace "projected-5904" for this suite. 01/14/23 05:03:32.674 +------------------------------ +• [4.156 seconds] +[sig-storage] Projected downwardAPI +test/e2e/common/storage/framework.go:23 + should provide container's memory request [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:235 + + Begin Captured GinkgoWriter Output >> + [BeforeEach] [sig-storage] Projected downwardAPI + set up framework | framework.go:178 + STEP: Creating a kubernetes client 01/14/23 05:03:28.522 + Jan 14 05:03:28.523: INFO: >>> kubeConfig: /tmp/kubeconfig-1841037317 + STEP: Building a namespace api object, basename projected 01/14/23 05:03:28.523 + STEP: Waiting for a default service account to be provisioned in namespace 01/14/23 05:03:28.585 + STEP: Waiting for kube-root-ca.crt to be provisioned in namespace 01/14/23 05:03:28.589 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:31 + [BeforeEach] [sig-storage] Projected downwardAPI + test/e2e/common/storage/projected_downwardapi.go:44 + [It] should provide container's memory request [NodeConformance] [Conformance] + test/e2e/common/storage/projected_downwardapi.go:235 + STEP: Creating a pod to test downward API volume plugin 01/14/23 05:03:28.592 + Jan 14 05:03:28.634: INFO: Waiting up to 5m0s for pod "downwardapi-volume-915b292d-a0ae-48f8-9b70-abc770d7d23f" in namespace "projected-5904" to be "Succeeded or Failed" + Jan 14 05:03:28.638: INFO: Pod "downwardapi-volume-915b292d-a0ae-48f8-9b70-abc770d7d23f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.404607ms + Jan 14 05:03:30.643: INFO: Pod "downwardapi-volume-915b292d-a0ae-48f8-9b70-abc770d7d23f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008741454s + Jan 14 05:03:32.643: INFO: Pod "downwardapi-volume-915b292d-a0ae-48f8-9b70-abc770d7d23f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008595293s + STEP: Saw pod success 01/14/23 05:03:32.643 + Jan 14 05:03:32.643: INFO: Pod "downwardapi-volume-915b292d-a0ae-48f8-9b70-abc770d7d23f" satisfied condition "Succeeded or Failed" + Jan 14 05:03:32.647: INFO: Trying to get logs from node 10.0.1.106 pod downwardapi-volume-915b292d-a0ae-48f8-9b70-abc770d7d23f container client-container: + STEP: delete the pod 01/14/23 05:03:32.652 + Jan 14 05:03:32.666: INFO: Waiting for pod downwardapi-volume-915b292d-a0ae-48f8-9b70-abc770d7d23f to disappear + Jan 14 05:03:32.669: INFO: Pod downwardapi-volume-915b292d-a0ae-48f8-9b70-abc770d7d23f no longer exists + [AfterEach] [sig-storage] Projected downwardAPI + test/e2e/framework/node/init/init.go:32 + Jan 14 05:03:32.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + test/e2e/framework/metrics/init/init.go:33 + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + dump namespaces | framework.go:196 + [DeferCleanup (Each)] [sig-storage] Projected downwardAPI + tear down framework | framework.go:193 + STEP: Destroying namespace "projected-5904" for this suite. 01/14/23 05:03:32.674 + << End Captured GinkgoWriter Output +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[SynchronizedAfterSuite] +test/e2e/e2e.go:88 +[SynchronizedAfterSuite] TOP-LEVEL + test/e2e/e2e.go:88 +[SynchronizedAfterSuite] TOP-LEVEL + test/e2e/e2e.go:88 +Jan 14 05:03:32.680: INFO: Running AfterSuite actions on node 1 +Jan 14 05:03:32.680: INFO: Skipping dumping logs from cluster +------------------------------ +[SynchronizedAfterSuite] PASSED [0.000 seconds] +[SynchronizedAfterSuite] +test/e2e/e2e.go:88 + + Begin Captured GinkgoWriter Output >> + [SynchronizedAfterSuite] TOP-LEVEL + test/e2e/e2e.go:88 + [SynchronizedAfterSuite] TOP-LEVEL + test/e2e/e2e.go:88 + Jan 14 05:03:32.680: INFO: Running AfterSuite actions on node 1 + Jan 14 05:03:32.680: INFO: Skipping dumping logs from cluster + << End Captured GinkgoWriter Output +------------------------------ +[ReportAfterSuite] Kubernetes e2e suite report +test/e2e/e2e_test.go:153 +[ReportAfterSuite] TOP-LEVEL + test/e2e/e2e_test.go:153 +------------------------------ +[ReportAfterSuite] PASSED [0.000 seconds] +[ReportAfterSuite] Kubernetes e2e suite report +test/e2e/e2e_test.go:153 + + Begin Captured GinkgoWriter Output >> + [ReportAfterSuite] TOP-LEVEL + test/e2e/e2e_test.go:153 + << End Captured GinkgoWriter Output +------------------------------ +[ReportAfterSuite] Kubernetes e2e JUnit report +test/e2e/framework/test_context.go:529 +[ReportAfterSuite] TOP-LEVEL + test/e2e/framework/test_context.go:529 +------------------------------ +[ReportAfterSuite] PASSED [0.088 seconds] +[ReportAfterSuite] Kubernetes e2e JUnit report +test/e2e/framework/test_context.go:529 + + Begin Captured GinkgoWriter Output >> + [ReportAfterSuite] TOP-LEVEL + test/e2e/framework/test_context.go:529 + << End Captured GinkgoWriter Output +------------------------------ + +Ran 368 of 7069 Specs in 5317.861 seconds +SUCCESS! 
-- 368 Passed | 0 Failed | 0 Pending | 6701 Skipped +PASS + +Ginkgo ran 1 suite in 1h28m38.197868995s +Test Suite Passed +You're using deprecated Ginkgo functionality: +============================================= + --noColor is deprecated, use --no-color instead + Learn more at: https://onsi.github.io/ginkgo/MIGRATING_TO_V2#changed-command-line-flags + +To silence deprecations that can be silenced set the following environment variable: + ACK_GINKGO_DEPRECATIONS=2.4.0 + diff --git a/v1.26/tencentcloud/junit_01.xml b/v1.26/tencentcloud/junit_01.xml new file mode 100644 index 0000000000..f85704c5ef --- /dev/null +++ b/v1.26/tencentcloud/junit_01.xml @@ -0,0 +1,20499 @@
[junit_01.xml: 20,499 lines of JUnit XML test results omitted; the XML markup did not survive extraction]
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file