… tokens (#263)
Make sure you have checked all steps below.
### GitHub Issue
Closes #262
### Checklist:
<!--- Go over all the following points. Check boxes that apply to this pull request -->
- [X] This pull request updates the documentation
- [ ] This pull request changes library dependencies
- [X] Title of the PR is of format (example) : [#25][Github] Add Pull Request Template
<!-- NOTE: lines that start with < - - ! and end with - - > are comments and will be ignored. -->
<!-- Please include the GitHub issue number in the PR title above. If an issue does not exist, please create one.-->
<!-- Example:[#25][Github] Add Pull Request Template where [#25 refers to #25] -->
### What is the purpose of this pull request?
<!-- Please fill in changes proposed in this pull request. -->
<!-- Example: This Pull Request upgrades codebase to spark 3.0.0 -->
* Enhance BigQuery Connector to work with tokens
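A minimal sketch of the intended flow: a job exchanges an OAuth refresh token (obtained with a `client_id` and `client_secret`) for an access token, then passes that token to the connector. The helper name `build_refresh_request` and the `gcpAccessToken` option shown in the usage comment are assumptions for illustration, not the connector's confirmed API; only the Google OAuth 2.0 token endpoint and grant parameters are standard.

```python
# Hypothetical sketch (not the connector's actual implementation):
# build the form payload for the standard OAuth 2.0 refresh-token grant,
# which trades client_id + client_secret + refresh_token for an access token.

TOKEN_ENDPOINT = "https://oauth2.googleapis.com/token"  # Google's OAuth 2.0 token endpoint

def build_refresh_request(client_id: str, client_secret: str, refresh_token: str):
    """Return the token endpoint URL and the refresh-token grant payload."""
    payload = {
        "grant_type": "refresh_token",
        "client_id": client_id,
        "client_secret": client_secret,
        "refresh_token": refresh_token,
    }
    return TOKEN_ENDPOINT, payload

# Usage (sketch, option name assumed):
#   url, payload = build_refresh_request(cid, secret, rtok)
#   access_token = requests.post(url, data=payload).json()["access_token"]
#   df = (spark.read.format("bigquery")
#         .option("gcpAccessToken", access_token)      # assumed option name
#         .option("table", "project.dataset.table")
#         .load())
```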
### How was this change validated?
<!-- Please add the Command Used, Results Snippet, details on how reviewer/committer can simulate issue & verify the change -->
<!-- Example: In addition to unit-tests, and integration-test, I ran X on the Y cluster and verified the Z output.-->
* Tested on an on-premise Spark cluster
### Commit Guidelines
- [X] My commits all reference GH issues in their subject lines, and I have squashed multiple commits if they address the same issue. In addition, my commits follow the guidelines from "[How to write a good git commit message](http://chris.beams.io/posts/git-commit/)":
1. Subject is separated from body by a blank line
2. Subject is limited to 50 characters
3. Subject does not end with a period
4. Subject uses the imperative mood ("add", not "adding")
5. Body wraps at 72 characters
6. Body explains "what" and "why", not "how"
What feature are you requesting?
Enhance the core BigQuery connector to work with encrypted keys, client_id, and client_secret.
What benefits do you foresee from the feature you are requesting?
This will be a useful feature for running workloads on an on-premise Spark cluster while reading from or writing to BigQuery.
Potential solution/ideas?
Additional context