
gcloud container get-credentials not authenticating service account #30617

Closed
Draiken opened this issue Aug 15, 2016 · 86 comments
Labels: area/kubectl, area/provider/gcp, lifecycle/rotten, priority/awaiting-more-evidence, sig/cluster-lifecycle

Draiken commented Aug 15, 2016

Kubernetes version: Client Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.3", GitCommit:"c6411395e09da356c608896d3d9725acab821418", GitTreeState:"clean", BuildDate:"2016-07-22T20:29:38Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: GKE
  • OS (e.g. from /etc/os-release): Debian 8
  • Kernel (e.g. uname -a): Linux 5ab86176e0e5 3.13.0-88-generic #135-Ubuntu SMP Wed Jun 8 21:10:42 UTC 2016 x86_64 GNU/Linux

What happened: After successfully authenticating with a service account on gcloud using:

gcloud auth activate-service-account $GOOGLE_AUTH_EMAIL --key-file /keyconfig.json --project $GOOGLE_PROJECT_ID

The cluster credentials were downloaded using:

gcloud container clusters get-credentials $CLUSTER_NAME

Even though no errors are raised, when attempting to run kubectl version I get this error:

kubectl version
Client Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.3", GitCommit:"c6411395e09da356c608896d3d9725acab821418", GitTreeState:"clean", BuildDate:"2016-07-22T20:29:38Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"linux/amd64"}
error: google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.

What you expected to happen: Kubectl should be configured to use the cluster properly

How to reproduce it (as minimally and precisely as possible):

  • Authenticate with a service account on gcloud
  • Get credentials from a cluster through gcloud
  • Run any kubectl command that reaches the server
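
Putting those steps together, a minimal repro sketch (reusing the placeholder variables from the report; comments mark the observed behaviour):

# Authenticate gcloud itself with the service account key.
gcloud auth activate-service-account $GOOGLE_AUTH_EMAIL --key-file /keyconfig.json --project $GOOGLE_PROJECT_ID

# Write a kubeconfig entry for the cluster; this exits without error.
gcloud container clusters get-credentials $CLUSTER_NAME

# Any call that reaches the server then fails with:
# "error: google: could not find default credentials."
kubectl version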

Anything else we need to know:
After looking into the configuration, I observed that the user is not properly configured in .kube/config:

kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: REDACTED
  name: gke_cluster_name
contexts:
- context:
    cluster: gke_cluster_name
    user: gke_cluster_name
  name: gke_cluster_name
current-context: gke_cluster_name
kind: Config
preferences: {}
users:
- name:  gke_cluster_name
  user:
    auth-provider:
      config: null
      name: gcp

After looking in the gcloud docs, I found this instruction:

export GOOGLE_APPLICATION_CREDENTIALS="/path/to/keyfile.json"

After exporting the variable and running get-credentials once again, kubectl version worked, and kubectl config view correctly displayed the user as authenticated:

...
users:
- name: gke_cluster_name
  user:
    auth-provider:
      config:
        access-token: REDACTED
        expiry: 2016-08-15T12:30:18.220399721Z
      name: gcp

Note that this workflow was running perfectly in the past. My guess is that kubectl is no longer correctly detecting the gcloud service account that is authenticated. It's only looking for the environment variable. Either way, something seems broken :)
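
Consolidated, the working sequence described above looks roughly like this (a sketch with the same placeholders; the essential step is exporting GOOGLE_APPLICATION_CREDENTIALS so kubectl's Application Default Credentials lookup finds the key):

gcloud auth activate-service-account $GOOGLE_AUTH_EMAIL --key-file /keyconfig.json --project $GOOGLE_PROJECT_ID

# Point Application Default Credentials at the same key file.
export GOOGLE_APPLICATION_CREDENTIALS=/keyconfig.json

gcloud container clusters get-credentials $CLUSTER_NAME
kubectl version   # now authenticates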

Cloven commented Aug 15, 2016

@thockin asked for @fabioy to be tagged on this in #google-containers. Duly doing so.

fabioy (Contributor) commented Aug 15, 2016

@cjcullen Redirecting to CJ.

cjcullen (Member) commented:

Previously, gcloud would have configured kubectl to use the cluster's static client certificate to authenticate. Now, gcloud is configuring kubectl to use the service account's credentials.

Kubectl is just using the Application Default Credentials library, and it looks like this is part of the ADC flow for using a JSON-key service account. I'll see if there is a way that we could make this nicer.

If you don't want to add the export GOOGLE_APPLICATION_CREDENTIALS="/path/to/keyfile.json" to your flow, you can re-enable the old way (use the client cert) by setting the Cloud SDK container/use_client_certificate property to true. Either:

gcloud config set container/use_client_certificate True

or

export CLOUDSDK_CONTAINER_USE_CLIENT_CERTIFICATE=True


Cloven commented Aug 15, 2016

Several in #google-containers have discovered the client certificate workaround, but it's described as 'legacy', and to the best of our collective understanding, the application default service credential is the master credential of the actual owner of the project (which one might, understandably, not want to hand off to a third-party service provider like a CI server). So there's a fear that we're either using something destined to be destroyed soon, or have to use something that grants excessive power. We've also collectively had a hard time locating and comprehending any documentation relevant to this feature.


munnerz (Member) commented Aug 16, 2016

So this issue left us unable to deploy any of our services for ~24hrs while we struggled to find a cause and eventually found this thread. The default behaviour of gcloud changed in v121 to generate the auth-provider block set to GCP (and thus to require a service account).

Our service account file is stored, encrypted, within our repositories, so we are able to generate a kubeconfig entry. However, because the service account is not in the standard Application Default Credentials location (we actually delete the service account key after activating it in gcloud), kubectl then fails to authenticate: it's unable to find the service account.

The biggest issue here seems to be that the default behaviour has changed without much warning or particularly visible documentation. As a workaround we've added the CLOUDSDK_CONTAINER_USE_CLIENT_CERTIFICATE=True var, but we'd appreciate some documentation on what the long-term solution should be, now that this has hit gcloud stable.

munnerz (Member) commented Aug 16, 2016

I've found some info on this change in the gcloud release notes (https://cloud.google.com/sdk/release_notes), my bad. I'm not sure how this switch should best be handled, and/or whether kubectl itself could print a more useful error message.

Draiken (Author) commented Aug 16, 2016

@munnerz you're not alone in the sentiment. I created this issue after pulling my hair out for hours trying to figure out where the error was coming from. The cryptic error message is no help, and I couldn't find anything on Google.

cjcullen (Member) commented:

Ugh, sorry about this.

CLOUDSDK_CONTAINER_USE_CLIENT_CERTIFICATE=True makes kubectl use your cluster's client cert or basic-auth.

CLOUDSDK_CONTAINER_USE_CLIENT_CERTIFICATE=False (or unset) configures kubectl to use an OAuth2 access token from whatever Google identity is currently configured (either a logged in user from gcloud auth login or a service account).

There's some meager documentation here, but we could have done a better job communicating this change.

Please continue this thread with any more questions you have, because it will help us get better documentation.
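
For illustration, a sketch of the user entry each mode produces in kubectl config view (values redacted; the gcp auth-provider form matches the output shown earlier in this thread, and the client-certificate field names follow the standard kubeconfig format):

# With CLOUDSDK_CONTAINER_USE_CLIENT_CERTIFICATE=True (per-cluster credentials):
users:
- name: gke_cluster_name
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

# With it False or unset (OAuth2 token from the active Google identity):
users:
- name: gke_cluster_name
  user:
    auth-provider:
      name: gcp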

cjcullen (Member) commented:

As for guidance, using Google OAuth2 identities is probably safer (access can be revoked, and only temporary credentials are passed around). If you need per-cluster permissions, we don't support that yet in GKE, so you might be better off using the per-cluster credentials.

And we won't take away the ability to use the per-cluster credentials on your existing clusters. We'd like to encourage using OAuth2 credentials when possible, but you should use whatever works best for your workflow.

Cloven commented Aug 18, 2016

Some questions:

  1. What are the full recommended steps for configuring e.g., a third-party CI server to have minimal permissions to be able to schedule instances into a GKE cluster?
  2. Where are the specific permissions granted by a container engine service account role documented (i.e. what is 'Container Engine Developer' vs. 'Container Engine Admin')?

Draiken (Author) commented Aug 18, 2016

@Cloven It took a while, but I found this: https://cloud.google.com/compute/docs/access/iam

We use this in deployment scripts: authenticating gcloud with a service account, configuring kubectl, and then running kubectl commands to deploy new releases.

So if I understood this correctly, to use the new OAuth2 style of authentication all I have to do is point the environment variable to the private key and it'll use the new authentication style behind the scenes for me. Is that correct?

cjcullen (Member) commented Aug 18, 2016

@Cloven

  1. I'd recommend creating a service account specifically for the CI server. Then give that service account the "Container Engine Developer" role in your project's IAM page. This grants it full access inside the cluster, but no other privileges in your project (see the sketch after the role list below).
  2. Docs are on their way.

Container Engine Admin: Permissions for all GKE operations.

Container Engine Cluster Admin: Permission to Create/Update/Delete clusters, but not to create Kubernetes resources inside of them.

Container Engine Developer: Permission on all Kubernetes resources inside of clusters, but not to create/update/delete clusters.

Container Engine Viewer: Read-only permission to everything (except secrets).
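
A minimal sketch of that recommendation, assuming placeholder project and service account names (roles/container.developer is the role ID for "Container Engine Developer"; it also appears in a later comment in this thread):

# Create a dedicated service account for the CI server.
gcloud iam service-accounts create ci-deployer --project my-project

# Grant it the "Container Engine Developer" role on the project.
gcloud projects add-iam-policy-binding my-project --member serviceAccount:ci-deployer@my-project.iam.gserviceaccount.com --role roles/container.developer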

cjcullen (Member) commented:

@Draiken that's correct. Kubectl will use the ApplicationDefaultCredentials library, which should now be authenticating as the same service account that gcloud is using.

Cloven commented Aug 18, 2016

@cjcullen, thanks, sounds about right. Glad to hear the docs are catching up. I also look forward to more finely-grained IAM permissions; e.g., I'd like to differentially secure dev, staging, and prod clusters under one project.

cjcullen (Member) commented:

Here are some docs. Feel free to comment here on what you think is missing/could be better:
https://cloud.google.com/container-engine/docs/iam-integration

cjcullen (Member) commented:

Per-cluster granularity is also on the roadmap :)

rvrignaud commented Aug 19, 2016

Hi @cjcullen,

As others said, it's nice to see that IAM is working now. Could we hope for per-cluster / per-namespace granularity? That would be awesome!

cjcullen (Member) commented:

@rvrignaud Both are in our plans.

jmhodges commented Aug 23, 2016

@cjcullen thanks so much for your comments!

I think I'm still a little confused about what the right solution for authenticating kubectl is. If you have some time, maybe you could see if there's something obvious that I'm doing wrong?

Here's what I've tried to get my CI to use a Google service account and not rely on the client certificate.

(I'm going to lay it all out here in this comment, but I also have my CI script and the failed build open for others to see.)

Using the latest gcloud, I run gcloud auth activate-service-account --key-file /yadda/yadda.json to authenticate. The service account email address outputted by activate-service-account is the one I have given the role "Container Engine Developer". But kubectl patch fails with "You must be logged in to the server (the server has asked for the client to provide credentials)"

I then tried tossing in export GOOGLE_APPLICATION_CREDENTIALS="/yadda/yadda.json" but that fails in the same way.

To avoid using the client certificate, what's left for me to do? I think maybe I'm not sure what environment variable that "point the environment variable to the private key" is referring to if it's not GOOGLE_APPLICATION_CREDENTIALS.

cjcullen (Member) commented:

@jmhodges I would expect that flow to work. The "server has asked for credentials" error means that the credentials that kubectl is using are not getting authenticated for some reason (but it looks like your gcloud commands are working fine with them). I'll take a closer look tomorrow.

jmhodges commented Aug 26, 2016

Great! Thanks for looking at it! I think this is the problem a lot of folks have been having. We've been getting a regular stream of folks in the #google-containers channel on the Kubernetes Slack who have this problem (all solved with use_client_certificate).

jmhodges commented:

Oh, and for further hopeful debugging help: I first got a red build (with no deploy code changes) on Aug 12th, but that's not necessarily the first day it would have been broken. That seems to be about when other folks started noticing it.

Cloven commented Aug 26, 2016

So I created a service account with the Container Engine Developer role, downloaded its JSON key, and authenticated it with the latest gcloud.

I confirmed that it was limited to inside-cluster operations by trying to create a new cluster with it, and was properly denied:


fsg@spatula:/Users/fsg/p/k8/sputnik  $ gcloud container clusters create boop
ERROR: (gcloud.container.clusters.create) ResponseError: code=403, message=Required "container.clusters.create" permission for "projects/amorphous-horse".

I then switched to the project owner account, created a new cluster from the command line as the owner, and then switched back to the service account with 'gcloud config set account gkerunner@amorphous-horse.iam.gserviceaccount.com'.

I then downloaded the cluster credential:

gcloud container clusters get-credentials rubik

I then confirmed that I could deploy new images to the cluster using the service account:

fsg@spatula:/Users/fsg/p/k8/sputnik  $ kubectl run sputnik --image=gcr.io/amorphous-horse/sputnik                   
deployment "sputnik" created

and checking the logs and dashboard confirmed that the deployment and associated pod were up and running successfully.

I then tried to replicate @jmhodges' issue by replicating the exact command that his CI script used, on the theory that perhaps there was a permissioning issue specifically involving just the patch subcommand of kubectl:

fsg@spatula:/Users/fsg/p/k8/sputnik  $ PATCH="[{\"op\": \"replace\", \"path\": \"/spec/template/spec/containers/0/image\", \"value\": \"${DEPLOY_IMAGE}\"}]"
fsg@spatula:/Users/fsg/p/k8/sputnik  $ echo $PATCH
[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value": "gcr.io/amorphous-horse/sputnik"}]
fsg@spatula:/Users/fsg/p/k8/sputnik  $ kubectl patch deployment sputnik --type="json" -p "${PATCH}"
"sputnik" patched

and to my faint surprise everything seemed to work fine.

fsg@spatula:/Users/fsg/p/k8/sputnik  $ gcloud auth list
Credentialed Accounts:
 - felixgallo@gmail.com 
 - gkerunner@amorphous-horse.iam.gserviceaccount.com ACTIVE
To set the active account, run:
    $ gcloud config set account `ACCOUNT`
fsg@spatula:/Users/fsg/p/k8/sputnik  $

So I don't know what that means, but either I'm misunderstanding a step, or I was able to complete the entire process without granting additional abilities or using the 'legacy' certificate flow. I had been all excited because I thought maybe, aha, kubectl patch (which also operates on nodes) might be under more severe permission restrictions, but ... apparently not.

@jmhodges, what happens if you try the set image/rollout flow (http://kubernetes.io/docs/user-guide/kubectl/kubectl_rollout/), which might maybe theoretically possibly be what you perhaps want anyway, instead of the patch flow?

cjcullen (Member) commented:

@Cloven: It's very possible that your kubectl is still using your gmail account, even though your gcloud is using the service account. I think there might be a different step to configure Application Default Credentials to activate a service account: https://cloud.google.com/sdk/gcloud/reference/beta/auth/application-default/activate-service-account.

@jmhodges: Try gcloud beta auth application-default activate-service-account ...

jmhodges commented Aug 26, 2016

@cjcullen Thanks for the idea! The beta auth application-default approach failed in a new way:

ERROR: (gcloud.container.clusters.get-credentials) ResponseError: code=403, message=Request had insufficient authentication scopes.
unable to get credentials for GKE cluster

Build with that in it is at: https://travis-ci.org/jmhodges/howsmyssl/builds/155478643. (There's a bit of extra noise in there because gcloud had to download beta auth stuff.)

And I triple-checked my assumptions: the email address of the svc account listed in the build (howsmyssl-travis-deploy@personal-sites-1295.iam.gserviceaccount.com) matches the one in my project's IAM with the "Container Engine Developer" role, and the last-modified date of the key file in the git commit log matches the creation date of the svc account in IAM. So, I'm pretty confident I've got the right account.

jmhodges commented:

Oh, and here's the patch I put in play: jmhodges/howsmyssl@297b2b2 (you can find it linked on the Travis build page, but it's a little hard to find there)

spiffxp (Member) commented Jun 23, 2017

/sig cluster-lifecycle
/area platform/gke
since we lack a sig-gcp at the moment

fejta-bot commented Dec 30, 2017

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

fejta-bot commented Jan 29, 2018

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

fejta-bot commented:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

venumurthy commented:

Astonished to see this issue has been open for more than 2 years now! Google Cloud, wake up.

damien75 commented May 11, 2018

Hi,
After creating a cluster, I would like to use gcloud to write the credentials to my K8s config file. I do the following:

gcloud config set container/use_client_certificate True
gcloud container clusters get-credentials clusterName

In my K8s config file it fills the fields:

    username: null
    password: null

Am I missing something, or does this happen to others? I am authenticated with a service account that has the roles K8s Engine Admin and K8s Engine Cluster Admin.
Thanks in advance for your help!

rochdev commented May 11, 2018

@damien75 Generally speaking, you should not set container/use_client_certificate to true for new clusters as this switches to the old, less secure way to authenticate.

If you want to use your own account permissions, use:

gcloud auth application-default login
gcloud container clusters get-credentials clusterName

If you want to use a service account file, use:

gcloud auth activate-service-account [ACCOUNT] --key-file=KEY_FILE
gcloud container clusters get-credentials clusterName

You can also set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the service account file path instead.

In both cases, you should make sure that container/use_client_certificate is either set to false or not set at all.
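
A quick sketch for checking and clearing that property:

gcloud config get-value container/use_client_certificate   # prints the current value, if set
gcloud config unset container/use_client_certificate       # removes the property entirely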

dchenk commented Jan 21, 2019

If you have a regional cluster, pass --region=cluster-region-id to gcloud container clusters get-credentials. So you might say: gcloud container clusters get-credentials my-cluster --region=us-central1

sudomann commented:

I discovered this thread while searching for a solution to an unrelated Kubernetes problem, where kubectl suddenly couldn't connect to the cluster, throwing:

Unable to connect to the server: x509: certificate has expired or is not yet valid

Here is the method I used to work around it, in case it helps someone else. Not wanting to use the older, less secure authentication method

gcloud config set container/use_client_certificate True

I instead deleted the kubeconfig file

rm ~/.kube/config

and re-authenticated with gcloud container clusters get-credentials.

mboret commented Jul 16, 2019

ERROR: (gcloud.container.clusters.get-credentials) ResponseError: code=403, message=Required "container.clusters.get" permission(s) for "projects/....

In my case, I got this error after recreating a GCP service account (delete/create with the same SA email - username@projectXXX.iam.gserviceaccount.com). Creating the SA with a different email fixed my issue.

tvvignesh commented:

@mboret I had the same issue today, re-using the same email for a service account. Lucky that I saw your comment. Changed the name and it worked. Thanks.


avielb commented Jul 15, 2020

For anyone who still gets this: in my case, the person I was helping had multiple accounts configured in gcloud. They can be viewed by typing gcloud auth list; check which one is marked with * as active.
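
A sketch of checking and switching the active account (gcloud config set account also appears earlier in this thread; the email is a placeholder):

gcloud auth list
gcloud config set account my-ci@my-project.iam.gserviceaccount.com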

tantweiler commented, quoting an earlier comment from @cjcullen:

Thanks for digging into that @jmhodges. It looks like the gcloud beta auth application-default command was deleted. All that command did was copy the key to the default Application Default Credentials location, which was kinda unnecessary anyway.

For anybody that comes across this issue, here are some steps from the beginning on how to authenticate to a GKE cluster using a service account:

# Set these variables for your project
PROJECT_ID=my-project
SA_NAME=my-new-serviceaccount
SA_EMAIL=$SA_NAME@$PROJECT_ID.iam.gserviceaccount.com
KEY_FILE=~/serviceaccount_key.json
CLUSTER_NAME=my-cluster

# Create a new GCP IAM service account.
gcloud iam service-accounts create $SA_NAME

# Download a json key for that service account.
gcloud iam service-accounts keys create $KEY_FILE --iam-account $SA_EMAIL

# Give that service account the "Container Engine Developer" IAM role for your project.
gcloud projects add-iam-policy-binding $PROJECT_ID --member serviceAccount:$SA_EMAIL --role roles/container.developer

# Configure kubectl to point to your cluster.
gcloud container clusters get-credentials $CLUSTER_NAME

# Configure Application Default Credentials (what kubectl uses) to use the service account.
export GOOGLE_APPLICATION_CREDENTIALS=$KEY_FILE

I ran into the same issue but the comment from @cjcullen (09/29/2016) fixed it for me! Thanks a lot!

talonx commented Feb 7, 2021

[Quoting @rochdev's recommendation above: don't set container/use_client_certificate for new clusters, and use either gcloud auth application-default login or gcloud auth activate-service-account before get-credentials.]

@rochdev While this is the recommended solution for new clusters on GKE, it seems to have issues when the clusters are in different projects or different accounts, and we use two different gcloud accounts to do a get-credentials.

Because use_client_certificate is set to false, kubectl ends up invoking the gcloud command to refresh the token in the kubeconfig, and with multiple gcloud accounts one of the kubectl invocations is going to fail at some point, because only one gcloud config can be active at a time.

A hack (not recommended) would be to allow the same gcloud account to have access to both clusters and then do a get-credentials but I don't want to go down that path.

Am I missing something here? What is the recommended way to fetch kubeconfigs for multiple GKE clusters running on different GCP accounts or projects, with no single service account that has access to both of them?
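
One possibility, offered as an assumption rather than something confirmed in this thread: keep one gcloud named configuration per account and activate the matching one before each kubectl call, so that the gcp auth-provider's token refresh runs against the right identity. A sketch with placeholder names:

# One named configuration per account/project.
gcloud config configurations create proj-a
gcloud auth activate-service-account sa-a@proj-a.iam.gserviceaccount.com --key-file key-a.json

gcloud config configurations create proj-b
gcloud auth activate-service-account sa-b@proj-b.iam.gserviceaccount.com --key-file key-b.json

# Activate the matching configuration before using each cluster's context.
gcloud config configurations activate proj-a
kubectl --context gke_proj-a_us-central1-a_cluster-a get pods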
