handle token during open session #1
Comments
Re-login during creating KyuubiSession
yaooqinn pushed a commit that referenced this issue on Nov 16, 2020:
fix #250

Squashed commit of the following:

- 6028f55 (金晶(000538) <jinjing@fcbox.com>, Mon Nov 16 16:26:09 2020 +0800): Reformat the SparkOperationSuite test code.
- 0b14c69 (金晶(000538) <jinjing@fcbox.com>, Mon Nov 16 14:49:42 2020 +0800): 1. Add an idle-timeout check task to SparkSQLEngine. 2. Fix SparkSQLEngine not removing the cached sparkSession from SparkSQLOperationManager when a session is closed.
- 8fdb6b7 (merge of fb4bace and 7bfc470, zen <xinjingziranchan@gmail.com>, Mon Nov 16 11:36:35 2020 +0800): Merge pull request #1 from yaooqinn/master (sync with upstream).
yaooqinn pushed a commit that referenced this issue on Nov 18, 2020:
Squashed commit of the following:

- 808ccb7 (Zen <xinjingziranchan@gmail.com>, Wed Nov 18 14:18:46 2020 +0800): Fixed duplicate issues with the jars directory when packaging Kyuubi.
- 91ca0a0 (zen <xinjingziranchan@gmail.com>, Wed Nov 18 14:01:56 2020 +0800): Fix log printing and a SparkSQLEngine timeoutChecker concurrency issue.
- 15343d9 (zen <xinjingziranchan@gmail.com>, Tue Nov 17 21:21:03 2020 +0800): Avoid redundantly checking whether debug mode is enabled when printing debug logs.
- eecc0bf (zen <xinjingziranchan@gmail.com>, Tue Nov 17 15:08:13 2020 +0800): Revert "Fix the duplicated jars directory in Kyuubi's packaged dependencies, which caused the org.apache.kyuubi.server.KyuubiServer main class to be missing at startup". This reverts commit 29e9dd4.
- 29e9dd4 (zen <xinjingziranchan@gmail.com>, Tue Nov 17 14:36:23 2020 +0800): Fix the duplicated jars directory in Kyuubi's packaged dependencies, which caused the org.apache.kyuubi.server.KyuubiServer main class to be missing at startup.
- 44364ec (zen <xinjingziranchan@gmail.com>, Tue Nov 17 13:48:21 2020 +0800): Fix SparkSQLEngine not shutting down the timeoutChecker thread pool on stop, and add timeoutChecker logging.
- 63a67e2 (merge of 6028f55 and 64b83a4, zen <xinjingziranchan@gmail.com>, Tue Nov 17 13:37:41 2020 +0800): Merge branch 'yaooqinn-kyuubi-master'.
- 6028f55 (金晶(000538) <jinjing@fcbox.com>, Mon Nov 16 16:26:09 2020 +0800): Reformat the SparkOperationSuite test code.
- 0b14c69 (金晶(000538) <jinjing@fcbox.com>, Mon Nov 16 14:49:42 2020 +0800): 1. Add an idle-timeout check task to SparkSQLEngine. 2. Fix SparkSQLEngine not removing the cached sparkSession from SparkSQLOperationManager when a session is closed.
- 8fdb6b7 (merge of fb4bace and 7bfc470, zen <xinjingziranchan@gmail.com>, Mon Nov 16 11:36:35 2020 +0800): Merge pull request #1 from yaooqinn/master (sync with upstream).
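The idle-timeout check and its shutdown fix described in the commits above can be sketched as a small scheduled task. This is a minimal sketch, assuming illustrative callbacks for the engine state; the names below are not Kyuubi's actual API.

```scala
import java.util.concurrent.{Executors, TimeUnit}

// Hypothetical idle-timeout checker; the real SparkSQLEngine wiring may differ.
class IdleTimeoutChecker(idleTimeoutMs: Long, checkIntervalMs: Long)(
    lastAccessTime: () => Long,
    openSessionCount: () => Int,
    stopEngine: () => Unit) {

  private val scheduler = Executors.newSingleThreadScheduledExecutor()

  def start(): Unit = {
    val task = new Runnable {
      override def run(): Unit = {
        val idleFor = System.currentTimeMillis() - lastAccessTime()
        // Only stop the engine when no session is open and it has been idle long enough.
        if (openSessionCount() == 0 && idleFor > idleTimeoutMs) {
          stopEngine()
        }
      }
    }
    scheduler.scheduleWithFixedDelay(task, checkIntervalMs, checkIntervalMs, TimeUnit.MILLISECONDS)
  }

  // Shut the checker down when the engine stops, the gap one of the commits above fixes.
  def stop(): Unit = scheduler.shutdownNow()
}
```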
ulysses-you pushed a commit that referenced this issue on Sep 29, 2021:
### What is the purpose of the pull request

PR for KYUUBI #939: add Z-order extensions to optimize tables with Z-order. Z-order is a technique that maps multidimensional data to a single dimension. We ran a performance test based on the Aliyun Databricks Delta test case: https://help.aliyun.com/document_detail/168137.html?spm=a2c4g.11186623.6.563.10d758ccclYtVb

Data was prepared for three scenarios:
1. 10 billion rows in 200 parquet files: large files (about 1 GB)
2. 10 billion rows in 2,000 parquet files: medium files (about 200 MB)
3. 1 billion rows in 10,000 parquet files: small files (about 200 KB)

Test environment: spark-3.1.2, hadoop-2.7.2, kyuubi-1.4.0

Test steps:

Step 1: create the Hive tables.
```scala
spark.sql(s"drop database if exists $dbName cascade")
spark.sql(s"create database if not exists $dbName")
spark.sql(s"use $dbName")
spark.sql(s"create table $connRandomParquet (src_ip string, src_port int, dst_ip string, dst_port int) stored as parquet")
spark.sql(s"create table $connZorderOnlyIp (src_ip string, src_port int, dst_ip string, dst_port int) stored as parquet")
spark.sql(s"create table $connZorder (src_ip string, src_port int, dst_ip string, dst_port int) stored as parquet")
spark.sql(s"show tables").show(false)
```

Step 2: prepare data for the parquet tables in the three scenarios, using the following generators.
```scala
import scala.util.Random

def randomIPv4(r: Random) = Seq.fill(4)(r.nextInt(256)).mkString(".")
def randomPort(r: Random) = r.nextInt(65536)

def randomConnRecord(r: Random) = ConnRecord(
  src_ip = randomIPv4(r), src_port = randomPort(r),
  dst_ip = randomIPv4(r), dst_port = randomPort(r))
```

Step 3: optimize with Z-order on the IP columns only (sort columns: src_ip, dst_ip; shuffle partitions equal to the number of files). Execute `OPTIMIZE conn_zorder_only_ip ZORDER BY src_ip, dst_ip;` through Kyuubi.

Step 4: optimize with Z-order on all columns (sort columns: src_ip, src_port, dst_ip, dst_port; shuffle partitions equal to the number of files). Execute `OPTIMIZE conn_zorder ZORDER BY src_ip, src_port, dst_ip, dst_port;` through Kyuubi.
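For completeness, here is a minimal sketch of how the generated records could be written into the random parquet table. The `ConnRecord` case class, the Spark job scaffolding, and the literal table name `conn_random_parquet` are assumptions based on the snippet above, not the benchmark's exact code.

```scala
import scala.util.Random
import org.apache.spark.sql.SparkSession

// Assumed case class matching the generator snippet above; the benchmark's actual
// definition is not shown in the PR description.
case class ConnRecord(src_ip: String, src_port: Int, dst_ip: String, dst_port: Int)

object GenerateConnData {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("conn-data-gen")
      .enableHiveSupport()
      .getOrCreate()
    import spark.implicits._

    // Generate a small sample; the benchmark scales this to billions of rows.
    val numRows = 1000000L
    val records = spark.range(numRows).mapPartitions { rows =>
      val r = new Random()
      def randomIPv4() = Seq.fill(4)(r.nextInt(256)).mkString(".")
      def randomPort() = r.nextInt(65536)
      rows.map(_ => ConnRecord(randomIPv4(), randomPort(), randomIPv4(), randomPort()))
    }

    // "conn_random_parquet" is an assumed value of $connRandomParquet from the snippet above.
    records.toDF().write.mode("append").insertInto("conn_random_parquet")
  }
}
```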
# Benchmark result

By querying the tables before and after optimization, we find that:

**10 billion rows, 200 files, query resource: 200 cores / 600 GB memory**

| Table | Average file size | Scan row count | Average query time | Row count skipping ratio |
| ------------------- | ----------------- | -------------- | ------------------ | ------------------------ |
| conn_random_parquet | 1.2 G | 10,000,000,000 | 27.554 s | 0.0% |
| conn_zorder_only_ip | 890 M | 43,170,600 | 2.459 s | 99.568% |
| conn_zorder | 890 M | 54,841,302 | 3.185 s | 99.451% |

**10 billion rows, 2,000 files, query resource: 200 cores / 600 GB memory**

| Table | Average file size | Scan row count | Average query time | Row count skipping ratio |
| ------------------- | ----------------- | -------------- | ------------------ | ------------------------ |
| conn_random_parquet | 234.8 M | 10,000,000,000 | 27.031 s | 0.0% |
| conn_zorder_only_ip | 173.9 M | 43,170,600 | 2.668 s | 99.568% |
| conn_zorder | 174.0 M | 54,841,302 | 3.207 s | 99.451% |

**1 billion rows, 10,000 files, query resource: 10 cores / 40 GB memory**

| Table | Average file size | Scan row count | Average query time | Row count skipping ratio |
| ------------------- | ----------------- | -------------- | ------------------ | ------------------------ |
| conn_random_parquet | 2.7 M | 1,000,000,000 | 76.772 s | 0.0% |
| conn_zorder_only_ip | 2.1 M | 406,572 | 3.963 s | 99.959% |
| conn_zorder | 2.2 M | 387,942 | 3.621 s | 99.961% |

Closes #1178 from hzxiongyinke/zorder_performance_test.

Closes #939

369a9b4 [hzxiongyinke] remove set spark.sql.extensions=org.apache.kyuubi.sql.KyuubiSparkSQLExtension;
8c8ae45 [hzxiongyinke] add index z-order-benchmark
66bd20f [hzxiongyinke] change tables to three scenarios
cc80f4e [hzxiongyinke] add License
70c29da [hzxiongyinke] z-order performance_test
6f1892b [hzxiongyinke] Merge pull request #1 from apache/master

Lead-authored-by: hzxiongyinke <1062376716@qq.com>
Co-authored-by: hzxiongyinke <75288351+hzxiongyinke@users.noreply.github.com>
Signed-off-by: ulysses-you <ulyssesyou@apache.org>
pan3793 pushed a commit that referenced this issue on Oct 12, 2021:
### _Why are the changes needed?_

### _How was this patch tested?_

- [ ] Add some test cases that check the changes thoroughly including negative and positive cases if possible
- [ ] Add screenshots for manual tests if appropriate
- [ ] [Run test](https://kyuubi.readthedocs.io/en/latest/develop_tools/testing.html#running-tests) locally before making a pull request

Closes #1217 from hzxiongyinke/zorder-by_and_order-by_performance_test.

Closes #1217

c0232c6 [xiongyinke] format z-order-benchmark.md
a7d7111 [xiongyinke] update zorder benchmark data
3bf5f81 [xiongyinke] update benchmark result secondary headlines and fix z-order test result
f5c9dfb [hzxiongyinke] Merge pull request #3 from apache/master
6f1892b [hzxiongyinke] Merge pull request #1 from apache/master

Lead-authored-by: xiongyinke <1062376716@qq.com>
Co-authored-by: hzxiongyinke <75288351+hzxiongyinke@users.noreply.github.com>
Signed-off-by: Cheng Pan <chengpan@apache.org>
turboFei pushed a commit that referenced this issue on Jul 7, 2022:
### _Why are the changes needed?_

Close #3007

### _How was this patch tested?_

- [ ] Add some test cases that check the changes thoroughly including negative and positive cases if possible
- [ ] Add screenshots for manual tests if appropriate
- [ ] [Run test](https://kyuubi.apache.org/docs/latest/develop_tools/testing.html#running-tests) locally before making a pull request

Closes #3015 from lsm1/features/Bump_scopt.

Closes #3007

876659b [senmiaoliu] fix style
e1a4203 [LSM] Merge pull request #1 from cxzl25/PR_3015_UT
9a34eed [sychen] fix UT
46e1dff [senmiaoliu] update dependencyList
8481b14 [senmiaoliu] Bump scopt from 4.0.1 to 4.1.0

Lead-authored-by: senmiaoliu <senmiaoliu@trip.com>
Co-authored-by: sychen <sychen@ctrip.com>
Co-authored-by: LSM <senmiaoliu@trip.com>
Signed-off-by: Fei Wang <fwang12@ebay.com>
hddong added a commit to hddong/kyuubi that referenced this issue on Dec 16, 2022.
zhaohehuhu referenced this issue in zhaohehuhu/incubator-kyuubi on Dec 18, 2022.
turboFei added a commit that referenced this issue on Jan 7, 2023:
### _Why are the changes needed?_

### _How was this patch tested?_

- [ ] Add some test cases that check the changes thoroughly including negative and positive cases if possible
- [ ] Add screenshots for manual tests if appropriate
- [ ] [Run test](https://kyuubi.apache.org/docs/latest/develop_tools/testing.html#running-tests) locally before making a pull request

Closes #4022 from lightning-L/kyuubi-3968.

Closes #3968

8560a2f [lightning_L] Merge pull request #2 from turboFei/kyuubi-3968
5f76107 [fwang12] follow up
7f6cb1b [lightning_L] Merge pull request #1 from turboFei/kyuubi-3968
cc0d6cb [lightning_L] Merge branch 'apache:master' into kyuubi-3968
46ea82e [fwang12] nit
11b1f8c [fwang12] follow up
54fa3df [Tianlin Liao] fix NPE when folder does not exist
0353e70 [Tianlin Liao] update regex
a475f7b [Tianlin Liao] list all metadata store sql files and use the one with the largest version number
043b43b [Tianlin Liao] fix license
871b60e [Tianlin Liao] fix
ece1f60 [Tianlin Liao] fix
40831c5 [Tianlin Liao] [KYUUBI #3968] Upgrading and migration script for Jdbc

Lead-authored-by: Tianlin Liao <tiliao@ebay.com>
Co-authored-by: lightning_L <tianlinliao@163.com>
Co-authored-by: fwang12 <fwang12@ebay.com>
Signed-off-by: fwang12 <fwang12@ebay.com>
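One of the commits above lists all metadata store SQL files and uses the one with the largest version number, guarding against a missing folder. A minimal sketch of that idea follows; the file-name pattern and directory layout are assumptions, not the pattern Kyuubi actually uses.

```scala
import java.io.File

object LatestSchemaFile {
  // Assumed naming convention, e.g. "metadata-store-schema-1.7.0.mysql.sql";
  // the real pattern used by Kyuubi may differ.
  private val SchemaFileRegex = """metadata-store-schema-(\d+)\.(\d+)\.(\d+)\..*\.sql""".r

  /** Return the schema file with the largest version number, if the folder exists. */
  def pick(dir: File): Option[File] = {
    // Guard against a missing folder, the NPE mentioned in the commit list above.
    val files = Option(dir.listFiles()).getOrElse(Array.empty[File])
    files.flatMap { f =>
      f.getName match {
        case SchemaFileRegex(major, minor, patch) =>
          Some(((major.toInt, minor.toInt, patch.toInt), f))
        case _ => None
      }
    }.sortBy(_._1).lastOption.map(_._2)
  }
}
```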
pan3793 added a commit that referenced this issue on Mar 17, 2023:
### _Why are the changes needed?_

Introduce a brand new CHAT engine. It is supposed to support different backends, e.g. ChatGPT, 文心一言 (ERNIE Bot), etc.

This PR implements the following providers:
- ECHO: simply replies with a welcome message.
- GPT: a.k.a. ChatGPT, powered by OpenAI, which requires an API key for authentication: https://platform.openai.com/account/api-keys

Add the following configurations in `kyuubi-defaults.conf`:
```
kyuubi.engine.chat.provider=[ECHO|GPT]
kyuubi.engine.chat.gpt.apiKey=<chat-gpt-api-key>
```

Open an ECHO beeline chat engine:
```
beeline -u 'jdbc:hive2://localhost:10009/?kyuubi.engine.type=CHAT;kyuubi.engine.chat.provider=ECHO'
```
```
Connecting to jdbc:hive2://localhost:10009/
Connected to: Kyuubi Chat Engine (version 1.8.0-SNAPSHOT)
Driver: Kyuubi Project Hive JDBC Client (version 1.7.0)
Beeline version 1.7.0 by Apache Kyuubi
0: jdbc:hive2://localhost:10009/> Hello, Kyuubi!;
+----------------------------------------+
|                 reply                  |
+----------------------------------------+
| This is ChatKyuubi, nice to meet you!  |
+----------------------------------------+
1 row selected (0.397 seconds)
```

Open a ChatGPT beeline chat engine (make sure your network can reach the OpenAI API and that the API key is configured):
```
beeline -u 'jdbc:hive2://localhost:10009/?kyuubi.engine.type=CHAT;kyuubi.engine.chat.provider=GPT'
```

<img width="1109" alt="image" src="https://user-images.githubusercontent.com/26535726/225813625-a002e6e2-3b0d-4194-b061-2e215d58ba94.png">

### _How was this patch tested?_

- [x] Add some test cases that check the changes thoroughly including negative and positive cases if possible
- [x] Add screenshots for manual tests if appropriate
- [x] [Run test](https://kyuubi.readthedocs.io/en/master/develop_tools/testing.html#running-tests) locally before making a pull request

Closes #4544 from pan3793/chatgpt.

Closes #4544

87bdebb [Cheng Pan] nit
f7dee18 [Cheng Pan] Update docs
9beb551 [cxzl25] chat api (#1)
af38bdc [Cheng Pan] update docs
9aa6d83 [Cheng Pan] Initial implement Kyuubi Chat Engine

Lead-authored-by: Cheng Pan <chengpan@apache.org>
Co-authored-by: cxzl25 <cxzl25@users.noreply.github.com>
Signed-off-by: Cheng Pan <chengpan@apache.org>
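The provider abstraction described above could look roughly like the sketch below; the trait and class names are illustrative assumptions, not the interfaces actually introduced by this PR.

```scala
// A hypothetical provider abstraction for the CHAT engine; names are assumptions only.
trait ChatProvider {
  def ask(sessionId: String, prompt: String): String
}

// ECHO simply replies with a fixed welcome message, as described above.
class EchoProvider extends ChatProvider {
  override def ask(sessionId: String, prompt: String): String =
    "This is ChatKyuubi, nice to meet you!"
}

object ChatProvider {
  // Selected by a configuration such as kyuubi.engine.chat.provider.
  def load(provider: String): ChatProvider = provider.toUpperCase match {
    case "ECHO" => new EchoProvider
    case other  => throw new IllegalArgumentException(s"Unsupported chat provider: $other")
  }
}
```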
Expected behavior

Tokens passed during open session should be handled.

Actual behavior

Tokens are dropped during open session.
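For illustration, handling a token carried in the open-session configuration could look roughly like the sketch below: decode it and attach it to the session user instead of dropping it. The configuration key and the plain `Map` for the session conf are assumptions for illustration, not Kyuubi's actual code path.

```scala
import org.apache.hadoop.security.{Credentials, UserGroupInformation}
import org.apache.hadoop.security.token.{Token, TokenIdentifier}

object SessionTokenHandler {
  // Illustrative configuration key only; the key actually carried in the
  // open-session request may differ.
  val TokenConfKey = "kyuubi.session.delegation.token"

  /** Decode a token passed in the open-session conf and attach it to the session user's UGI. */
  def handleToken(sessionConf: Map[String, String], sessionUgi: UserGroupInformation): Unit = {
    sessionConf.get(TokenConfKey).foreach { encoded =>
      val token = new Token[TokenIdentifier]()
      token.decodeFromUrlString(encoded)
      val creds = new Credentials()
      creds.addToken(token.getService, token)
      // Instead of silently dropping the token, add it to the user running the session.
      sessionUgi.addCredentials(creds)
    }
  }
}
```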
Steps to reproduce the problem

None.

Specifications like the version of the project, operating system, or hardware

Nothing related.