
Upgrade version privatebin 1.3.4 to 1.3.5 #44

Merged: 1 commit merged into PrivateBin:master on Sep 7, 2021

Conversation

@v-theo (Contributor) commented Aug 27, 2021

No description provided.

@elrido requested a review from bdashrad on August 27, 2021 14:19
@elrido merged commit 0dab511 into PrivateBin:master on Sep 7, 2021
@elrido (Contributor) commented Sep 7, 2021

Thank you for the help and sorry for the delay. I guess we would be looking for additional maintainers for this project.

@bdashrad (Collaborator) commented Sep 7, 2021

@elrido @v-theo Sorry I didn't get a chance to review this, I've been sick the last week.

@elrido (Contributor) commented Sep 7, 2021

No worries and get well soon!

@v-theo (Contributor, Author) commented Sep 7, 2021

Thanks for validating the merge request.
On the other hand, I noticed that the checks are failing; is this normal?
I have good knowledge of Helm and, in my opinion, my modification is valid.
Looking forward to helping.

@elrido (Contributor) commented Sep 7, 2021

@v-theo I'm not sure myself, but they all failed with similar logs, hence I assumed it was an issue with that workflow and not the change.

For this stage:

- name: Run chart-testing (install)
  run: ct install --config .ci/ct.yaml

The log states:

Installing charts...

------------------------------------------------------------------------------------------------------------------------
 Charts to be processed:
------------------------------------------------------------------------------------------------------------------------
 privatebin => (version: "0.7.0", path: "privatebin")
------------------------------------------------------------------------------------------------------------------------

"privatebin" already exists with the same configuration, skipping
Installing chart 'privatebin => (version: "0.7.0", path: "privatebin")'...
Creating namespace 'privatebin-79b75njdzu'...
namespace/privatebin-79b75njdzu created
Error: timed out waiting for the condition
========================================================================================================================
........................................................................................................................
==> Events of namespace privatebin-79b75njdzu
........................................................................................................................
LAST SEEN   TYPE      REASON              OBJECT                                       SUBOBJECT   SOURCE                  MESSAGE                                                                       FIRST SEEN   COUNT   NAME
85s         Warning   FailedScheduling    pod/privatebin-79b75njdzu-9697bf556-l2rbf                default-scheduler       0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate.   13m          10      privatebin-79b75njdzu-9697bf556-l2rbf.16a272a0b688c779
13m         Normal    SuccessfulCreate    replicaset/privatebin-79b75njdzu-9697bf556               replicaset-controller   Created pod: privatebin-79b75njdzu-9697bf556-l2rbf                            13m          1       privatebin-79b75njdzu-9697bf556.16a272a0b6817bda
13m         Normal    ScalingReplicaSet   deployment/privatebin-79b75njdzu                         deployment-controller   Scaled up replica set privatebin-79b75njdzu-9697bf556 to 1                    13m          1       privatebin-79b75njdzu.16a272a0b5f812e1
........................................................................................................................
<== Events of namespace privatebin-79b75njdzu
........................................................................................................................
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
==> Description of pod privatebin-79b75njdzu-9697bf556-l2rbf
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Name:           privatebin-79b75njdzu-9697bf556-l2rbf
Namespace:      privatebin-79b75njdzu
Priority:       0
Node:           <none>
Labels:         app.kubernetes.io/instance=privatebin-79b75njdzu
                app.kubernetes.io/name=privatebin
                pod-template-hash=9697bf556
Annotations:    <none>
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  ReplicaSet/privatebin-79b75njdzu-9697bf556
Containers:
  privatebin:
    Image:        privatebin/nginx-fpm-alpine:1.3.5
    Port:         8080/TCP
    Host Port:    0/TCP
    Liveness:     http-get http://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:    http-get http://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /srv/cfg from configs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-c9rcj (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  configs:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      privatebin-79b75njdzu-configs
    Optional:  false
  default-token-c9rcj:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-c9rcj
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  86s (x10 over 13m)  default-scheduler  0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate.
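
The FailedScheduling message indicates that both kind nodes still carry taints the pod does not tolerate, which usually means the nodes never reached the Ready state (Kubernetes keeps the node.kubernetes.io/not-ready taint on NotReady nodes). A debugging step along these lines could confirm that; this is only a sketch, and the step name and placement are hypothetical, not part of the current workflow:

- name: Inspect node taints
  run: |
    kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints
    kubectl describe nodes | grep -A 2 Taints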

And for the verify stage:

- name: Verify kind
  run: |
    kubectl cluster-info
    kubectl get nodes -o wide
    kubectl get pods -n kube-system

The log output is:

Kubernetes master is running at https://127.0.0.1:36585
KubeDNS is running at https://127.0.0.1:36585/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
NAME                          STATUS     ROLES    AGE   VERSION    INTERNAL-IP   EXTERNAL-IP   OS-IMAGE       KERNEL-VERSION     CONTAINER-RUNTIME
chart-testing-control-plane   NotReady   master   94s   v1.16.15   172.18.0.2    <none>        Ubuntu 21.04   5.8.0-1040-azure   containerd://1.5.2
chart-testing-worker          NotReady   <none>   61s   v1.16.15   172.18.0.3    <none>        Ubuntu 21.04   5.8.0-1040-azure   containerd://1.5.2
NAME                                                  READY   STATUS             RESTARTS   AGE
coredns-5644d7b6d9-bhvnz                              0/1     Pending            0          76s
coredns-5644d7b6d9-dbd9b                              0/1     Pending            0          76s
etcd-chart-testing-control-plane                      1/1     Running            0          26s
kindnet-2f5jp                                         1/1     Running            0          62s
kindnet-wsf7c                                         1/1     Running            0          76s
kube-apiserver-chart-testing-control-plane            1/1     Running            0          21s
kube-controller-manager-chart-testing-control-plane   1/1     Running            0          13s
kube-proxy-cxh2f                                      0/1     Error              3          62s
kube-proxy-nlcfq                                      0/1     CrashLoopBackOff   3          76s
kube-scheduler-chart-testing-control-plane            1/1     Running            0          33s
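
Since both kube-proxy pods are in Error/CrashLoopBackOff, dumping their logs would probably narrow the failure down further. A hypothetical extra verify step (assuming the standard k8s-app=kube-proxy label that kubeadm-based clusters such as kind apply):

- name: Dump kube-proxy logs
  run: kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=50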

Maybe we need to upgrade the kind action?

- name: Create kind cluster
  uses: helm/kind-action@v1.0.0
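
A bumped version of that step might look like the following; the release tag and node image are assumptions and would need to be checked against the helm/kind-action releases before use:

- name: Create kind cluster
  uses: helm/kind-action@v1.2.0
  with:
    node_image: kindest/node:v1.21.1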

@bdashrad (Collaborator) commented Sep 7, 2021

The checks fail when they run on the forked branch because they run in your account's context, @v-theo, instead of the PrivateBin context. I haven't found a way around this yet for PRs from forks, but it's something we can explore further.
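
For context, workflows triggered by pull_request from a fork run with a restricted, read-only token, whereas the pull_request_target trigger runs in the base repository's context (and therefore has to be used carefully, since it exposes base-repo secrets to incoming code). A purely illustrative sketch of the trigger difference, not taken from this repository's workflow:

on:
  pull_request:            # fork PRs run with a read-only token
  # pull_request_target:   # runs in the base repo context; use with care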

@v-theo (Contributor, Author) commented Sep 7, 2021

I will also look into it when I have a little time.

@github-actions bot mentioned this pull request on Nov 22, 2021