This repository has been archived by the owner on May 12, 2021. It is now read-only.

minikube installation with kata-fc fails #1915

Closed
paul-snively opened this issue Jul 28, 2019 · 20 comments
Labels
bug (Incorrect behaviour) · needs-docs (Needs some new or updated documentation) · needs-review (Needs to be assessed by the team.) · question (Requires an answer) · related/firecracker (Firecracker) · related/k8s (Kubernetes)

Comments

@paul-snively

Description of problem

Following the installation guide for minikube, at least with a certain configuration, results in a test pod stuck in ContainerCreating.

psnively@oryx-pro:~|cd packaging/kata-deploy 
psnively@oryx-pro:~/packaging/kata-deploy|master 
⇒  minikube delete
! "minikube" cluster does not exist
! "minikube" profile does not exist
psnively@oryx-pro:~/packaging/kata-deploy|master 
⇒  minikube start --bootstrapper=kubeadm --container-runtime=containerd --enable-default-cni --memory 16384 --network-plugin=cni --vm-driver kvm2 --feature-gates=RuntimeClass=true --cpus 4 --disk-size 50G --kubernetes-version 1.13.7
* minikube v1.2.0 on linux (amd64)
* Creating kvm2 VM (CPUs=4, Memory=16384MB, Disk=50000MB) ...
* Configuring environment for Kubernetes v1.13.7 on containerd 1.2.5
* Pulling images ...
* Launching Kubernetes ... 
* Verifying: apiserver etcd scheduler controller
* Done! kubectl is now configured to use "minikube"
psnively@oryx-pro:~/packaging/kata-deploy|master 
⇒  kubectl apply -f kata-rbac.yaml 
serviceaccount/kata-label-node created
clusterrole.rbac.authorization.k8s.io/node-labeler created
clusterrolebinding.rbac.authorization.k8s.io/kata-label-node-rb created
psnively@oryx-pro:~/packaging/kata-deploy|master 
⇒  kubectl apply -f kata-deploy.yaml 
daemonset.apps/kata-deploy created
psnively@oryx-pro:~/packaging/kata-deploy|master 
⇒  kubectl apply -f k8s-1.13/runtimeclass-crd.yaml 
customresourcedefinition.apiextensions.k8s.io/runtimeclasses.node.k8s.io created
psnively@oryx-pro:~/packaging/kata-deploy|master 
⇒  kubectl apply -f k8s-1.13/kata-fc-runtimeClass.yaml 
runtimeclass.node.k8s.io/kata-fc created
psnively@oryx-pro:~/packaging/kata-deploy|master 
⇒  kubectl apply -f examples/test-deploy-kata-fc.yaml 
deployment.apps/php-apache-kata-fc created
service/php-apache-kata-fc created
psnively@oryx-pro:~/packaging/kata-deploy|master 
⇒  kubectl get pods
NAME                                  READY   STATUS              RESTARTS   AGE
php-apache-kata-fc-6c6f484c4b-xlv89   0/1     ContainerCreating   0          17s
psnively@oryx-pro:~/packaging/kata-deploy|master 
⇒  kubectl describe php-apache-kata-fc-6c6f484c4b-xlv89
error: the server doesn't have a resource type "php-apache-kata-fc-6c6f484c4b-xlv89"
psnively@oryx-pro:~/packaging/kata-deploy|master 
⇒  kubectl describe pod/php-apache-kata-fc-6c6f484c4b-xlv89
Name:           php-apache-kata-fc-6c6f484c4b-xlv89
Namespace:      default
Priority:       0
Node:           minikube/192.168.122.43
Start Time:     Sun, 28 Jul 2019 17:57:30 -0400
Labels:         pod-template-hash=6c6f484c4b
                run=php-apache-kata-fc
Annotations:    <none>
Status:         Pending
IP:             
Controlled By:  ReplicaSet/php-apache-kata-fc-6c6f484c4b
Containers:
  php-apache:
    Container ID:   
    Image:          k8s.gcr.io/hpa-example
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:        200m
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-g8kbn (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  default-token-g8kbn:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-g8kbn
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age   From               Message
  ----     ------                  ----  ----               -------
  Normal   Scheduled               44s   default-scheduler  Successfully assigned default/php-apache-kata-fc-6c6f484c4b-xlv89 to minikube
  Warning  FailedCreatePodSandBox  23s   kubelet, minikube  Failed create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container: failed to create containerd task: rootfs (/run/kata-containers/shared/containers/9b9a1fbccac7b764c12fed12c4b13d220171d4011be902d6e7460ed41818ee10/rootfs) does not exist: unknown

Expected result

The test pod would run successfully.

Actual result

The pod is stuck per the above output.

You may be wondering, "Why Kubernetes 1.13.7 and containerd?" The answer is, that's the closest approximation to what we can get in GKE today.

I'm attaching the result of SSHing into minikube and sudo env "PATH=/opt/kata/bin:$PATH" /opt/kata/bin/kata-collect-data.sh > kata.log in case that helps.

kata.log
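
For reference, roughly how that log was gathered, plus a couple of containerd-side checks that might help anyone reproducing this (a sketch only; the journalctl/crictl invocations are illustrative and assume the default minikube image, which runs containerd under systemd):

# open a shell in the minikube VM
minikube ssh

# inside the VM: collect the Kata environment report (attached above as kata.log)
sudo env "PATH=/opt/kata/bin:$PATH" /opt/kata/bin/kata-collect-data.sh > kata.log

# inside the VM: look for the sandbox failure from the containerd side
sudo journalctl -u containerd --no-pager | tail -n 50
sudo crictl pods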

@paul-snively added the bug (Incorrect behaviour) and needs-review (Needs to be assessed by the team.) labels Jul 28, 2019
@jodh-intel
Contributor

Thanks for raising @paul-snively. For reference, the minikube install doc is:

@grahamwhaley - any thoughts? I really think we need to find a way to make that doc "executable" to ensure it works in all scenarios.

@grahamwhaley
Contributor

Hi @paul-snively - I have a strong suspicion this is not related to your version of k8s or containerd :-) You may note that kata-fc/firecracker is not mentioned in the minikube kata guide - that's because I have not tried it, and I suspect it does not (currently) work under minikube. I just tried it locally with the default minikube v1.1.1: a kata-qemu pod launches fine, but a kata-fc pod gets stuck in ContainerCreating.

My strong suspicion is that the minikube kernel config is missing some items required by kata-fc, much like we had on kubernetes/minikube#4340.

Let me see if I can identify any missing configs.
/cc @amshinde @devimc who have dabbled in the minikube kernel config and kata-fc kernel config areas...

@grahamwhaley
Contributor

For reference then, over on kata-containers/packaging@364f425 @devimc added:

# x86 specific mmio related items

# Next config are required for firecracker
CONFIG_VIRTIO_MMIO=y
CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES=y

but I don't see them in the minikube kernel defconfig over at https://github.com/kubernetes/minikube/blob/master/deploy/iso/minikube-iso/board/coreos/minikube/linux_defconfig

@devimc - minikube is running kernel v4.15 - is that new enough to support the fc features, do you know?
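
A quick way to check is to grep the running kernel's config from inside the VM (a sketch; it assumes the minikube kernel was built with CONFIG_IKCONFIG_PROC so /proc/config.gz exists, otherwise the defconfig linked above is the only reference):

minikube ssh "zcat /proc/config.gz | grep VIRTIO_MMIO"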

@paul-snively
Author

Thanks for the rapid response, gentlemen!

Another data point:

psnively@oryx-pro:~|⇒  sudo -E podman run -it --runtime=/opt/kata/bin/kata-fc alpine sh
[sudo] password for psnively:              
Error: rpc error: code = Unknown desc = rootfs (/run/kata-containers/shared/containers/b5bf05289f6b65160ade4f3770024dc7d481a73e25c474ca621f40ceda891fe0/rootfs) does not exist
: OCI runtime error

This is with kata-containers 1.8.0.
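
For comparison, the same invocation against the default, QEMU-backed runtime from the same /opt/kata install is a useful control (a sketch; kata-runtime here is the standard entry point installed by kata-deploy, which defaults to QEMU):

sudo -E podman run -it --runtime=/opt/kata/bin/kata-runtime alpine sh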

I would be comfortable bailing on minikube in favor of MicroK8s, in all honesty. The idea is to have easy local development in an environment similar to a real cloud deployment, and to be able to run in a CI/CD environment.

So far, I have to be honest: the Firecracker experience is completely underwhelming. I have yet to get it to run anywhere. With nemu nipping at its heels (including with kata-containers), I'm beginning to seriously question whether Firecracker is worth it.

Thanks for all your help regardless!

@chavafg
Contributor

chavafg commented Jul 29, 2019

I think this is because of block devices not being supported by containerd. IIRC, we are waiting for a containerd release that includes the devmapper snapshotter from @ganeshmaharaj, which should make this work.

/cc @ganeshmaharaj @egernst
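
A quick way to see which snapshotters the containerd inside minikube actually ships (a sketch; ctr's plugin listing is assumed to be available in the bundled containerd):

minikube ssh "sudo ctr plugins ls | grep snapshotter"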

@grahamwhaley
Contributor

@chavafg beat me to it - must have slow brain today - I just remembered we have a (slim, and not that easy to find :-( ) list of kata-fc requirements:

https://github.com/kata-containers/documentation/wiki/Initial-release-of-Kata-Containers-with-Firecracker-support#pre-requisuites

which pretty much lists virtio and block device (devicemapper) support...
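
Rough host-side checks for those two items, for anyone following along (a sketch, not exhaustive, since some of these may be built into the kernel rather than loaded as modules):

# virtio / KVM modules loaded?
lsmod | grep -E 'kvm|virtio'
# device-mapper available?
sudo dmsetup version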

@devimc

devimc commented Jul 29, 2019

@paul-snively I'm curious, why firecracker and not QEMU?

@grahamwhaley
Contributor

ah, microk8s - I thought that might be the cluster based on docker images (for which I believe we could not then run kata, as kata-under-docker is a bit of a no-no I think) - but, no, microk8s is a snap of all the bits of k8s you need, so an isolated local k8s cluster. I still suspect kata under microk8s might be a non-starter, as I'm not sure we can install kata (/opt/kata/*) inside that snap and then have it (qemu/kvm in particular) access the host to set up the VMs. But @devimc would probably have more snap knowledge around kata than anybody else. Just curious and contemplating other 'kata quick start eval platforms' beyond minikube and https://github.com/clearlinux/cloud-native-setup/tree/master/clr-k8s-examples.

@devimc

devimc commented Jul 29, 2019

@grahamwhaley you're right, microk8s is based on docker images; you can run kata containers but networking won't work :( @paul-snively you can integrate the kata snap with an already-running instance of k8s, but currently we don't include the firecracker binary in the snap.

@paul-snively
Author

paul-snively commented Jul 29, 2019

@grahamwhaley: Thanks! I'd actually found those prereqs and, I suppose naïvely, believed minikube met them.

@chavafg:

I think this is because of block devices not being supported by containerd. iirc, we are waiting for containerd to have a release with the devmapper snapshotter from @ganeshmaharaj which should make this work.

/cc @ganeshmaharaj @egernst

Good catch! From firecracker-containerd, it looks like you're correct.

@devimc:

@paul-snively I'm curious, why firecracker and not QEMU?

Startup time, footprint, security. Making OSv Run on Firecracker has some interesting details.

A related effort is NEMU, Intel's fork of QEMU that strips it way down into a cloud-focused hypervisor; Firecracker, from AWS, has similar goals, though it is built on crosvm rather than QEMU. So far, NEMU seems a lot less finicky than Firecracker, and kata-containers already supports it, too. But poking around at the CRI ecosystem is revealing a stunning lack of maturity at this point, and has me thinking I need to allocate my innovation budget more carefully. That has me considering unikernel approaches, possibly based on OSv (which would still be a big innovation-tax bite) or Rumprun (which has the advantage of basically being a bunch of very mature NetBSD drivers in userspace).

My intended app development approach is to use GraalVM's native-image to build Scala code and deploy it to Kubernetes in the fastest, lowest-overhead profile I can. This approach is explicitly supported by OSv. I thought kata-containers with Firecracker might be it, but Firecracker seems like a royal pain, especially with containerd, so I may just go the unikernel route and rely on virtlet instead.

@paul-snively
Author

Gentlemen, thank you all for the invaluable feedback. I'd like to reiterate #1915 (comment) before I close this ticket, in case there's anything more for me to learn there. My understanding at the moment, though, is that if I wish to use kata-containers with Firecracker in Kubernetes today, my best bet is to rely on firecracker-containerd, which may entail replacing any existing containerd in the cluster with firecracker-containerd's custom build. In addition, the storage driver must be a block driver and the host must support virtio. Does this accurately summarize the situation?

@egernst
Member

egernst commented Jul 30, 2019

No, it doesn’t summarize :)

You need a block-based snapshotter, devmapper, which is in upstream containerd but not yet released in containerd packages (targeting 1.3 AFAIU). So you simply need to build containerd from source (very straightforward) and configure it to use the devmapper snapshotter.
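
Roughly, the configuration side looks like the snippet below once containerd is built (a sketch only: the thin-pool name and base image size are illustrative, and the exact keys should be checked against the devmapper snapshotter docs for the containerd version you build):

# assumes an existing device-mapper thin-pool named "containerd-pool"
cat <<'EOF' | sudo tee -a /etc/containerd/config.toml
[plugins.devmapper]
  pool_name = "containerd-pool"
  base_image_size = "8GB"

[plugins.cri.containerd]
  snapshotter = "devmapper"
EOF
sudo systemctl restart containerd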

@paul-snively
Author

paul-snively commented Aug 1, 2019

No, it doesn’t summarize :)

You need a block-based snapshotter, devmapper, which is in upstream containerd but not yet released in containerd packages (targeting 1.3 AFAIU). So you simply need to build containerd from source (very straightforward) and configure it to use the devmapper snapshotter.

Forgive me, but it's rather clearly not that simple, along at least a few dimensions:

  1. Suppose I built containerd from source as you suggest. How would I go about installing this containerd in minikube?
  2. The firecracker-containerd quick start explains carefully that a container expecting to be run via firecracker-containerd must include the firecracker-containerd agent and runC. I haven't yet seen an explanation as to how to ensure kata-containers-fc containers include these necessary components.
  3. The documentation does note that the snapshotter code has been contributed upstream, but gives no indication that that eventuality will subsume the instructions provided—and, given the firecracker-containerd architecture, it's far from obvious how it possibly could.

So I continue to maintain that I've identified a gap, and possibly an insurmountable one, given minikube's architecture (e.g. relying on a vendored-in containerd). By way of contrast, k3s supports a --container-runtime-endpoint argument to its server command. Given its minimalistic design, small footprint, and --rootless mode, it also becomes appealing for use in CI/CD environments, so I'll likely refocus my efforts there.
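
For the record, the k3s route would look roughly like this (a sketch; the flag is documented by k3s, the socket path is just containerd's default, and it assumes that external containerd has already been built with the devmapper snapshotter and the kata-fc runtime configured):

k3s server --container-runtime-endpoint /run/containerd/containerd.sock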

Thanks again, everyone, for the information. I believe I've gathered enough to reach closure on the issue, so I'll close the ticket.

@jodh-intel
Contributor

Hi @paul-snively - thanks for the detailed summary. I'm going to re-open as it sounds like there are still outstanding questions here.

@erick0z - I can't find any mention of devmapper or snapshotter in any of the Kata documentation. Should we move https://github.com/kata-containers/documentation/wiki/Initial-release-of-Kata-Containers-with-Firecracker-support into a formal doc to help others do you think?

@mcastelino, @grahamwhaley - any thoughts on this (including k3s)?

@paul-snively - if you get Kata working with k3s, please do let us know as we'd love to add a howto doc to our https://github.com/kata-containers/documentation repo! ;)

@jodh-intel reopened this Aug 1, 2019
@jodh-intel added the needs-docs, question, related/firecracker, related/k8s, bug, and needs-review labels and removed the bug and needs-review labels Aug 1, 2019
@jodh-intel
Contributor

/me has fun twiddling with GitHub labels 😄

@jodh-intel
Contributor

Oops - sorry @erick0z! I meant to ping @egernst :)

@grahamwhaley
Contributor

Unless there is a supported and easy(ish) way to get a block-based graph driver into minikube, kata-fc will not work there. We should probably note that clearly in the kata minikube install docs (sure, I'll go PR it...).

@jodh-intel
Contributor

Thanks @grahamwhaley. I do think we need a lot more doc in this area (and yes, I know it's going [to have to] change frequently, but we still need it ;)

@jodh-intel
Contributor

Adding a ref to @grahamwhaley's PR to explain this: kata-containers/documentation#527.

@ganeshmaharaj
Contributor

@grahamwhaley @jodh-intel @paul-snively I wonder if we could use a stand-alone snapshotter (shameless plug 😁 https://github.com/ganeshmaharaj/lvm-snapshotter). Would that help get past the need for a block-based snapshotter with the current containerd releases used by minikube?
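
If that pans out, wiring it in should presumably just be a containerd proxy-plugin entry along these lines (a sketch only; the plugin name and socket path are made up for illustration):

cat <<'EOF' | sudo tee -a /etc/containerd/config.toml
[proxy_plugins]
  [proxy_plugins.lvm]
    type = "snapshot"
    address = "/run/lvm-snapshotter/snapshotter.sock"
EOF
sudo systemctl restart containerd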

@jodh-intel added this to To do in Issue backlog Aug 10, 2020
Issue backlog automation moved this from To do to Done Apr 7, 2021