
Commit 1cd440f

Removed virtual environment from repo

1 parent 6828e9f · commit 1cd440f

38 files changed: +392 −46 lines

.gitignore

Lines changed: 1 addition & 0 deletions

````diff
@@ -84,3 +84,4 @@ instance/
 docs/_build/
 
 .DS_Store
+myenv/
````
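Adding `myenv/` to `.gitignore` only stops the directory from being tracked going forward; a venv that was already committed also has to be removed from the index, which is presumably what the rest of this commit's 38 file changes do. A minimal sketch of that cleanup, assuming the venv was tracked at `myenv/`:

```bash
# Untrack the previously committed venv without deleting it locally,
# then record the removal (a sketch; the path is assumed).
git rm -r --cached myenv/
git commit -m "Removed virtual environment from repo"
```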

README.md

Lines changed: 4 additions & 2 deletions

````diff
@@ -265,14 +265,16 @@ To distribute **stackql-deploy** on PyPI, you'll need to ensure that you have al
 
 First, ensure you have the latest versions of `setuptools` and `wheel` installed:
 
-```
+```bash
+python3 -m venv venv
+source venv/bin/activate
 # pip install --upgrade setuptools wheel
 pip install --upgrade build
 ```
 
 Then, navigate to your project root directory and build the distribution files:
 
-```
+```bash
 rm dist/stackql_deploy*
 python3 -m build
 # or
````
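The updated README steps now create and activate a venv before installing the build tooling. Pieced together, the release flow this hunk sketches looks roughly like the following; the publish step is an assumption, since the diff is cut off at `# or`:

```bash
# Assembled from the hunk above (sketch only): build distribution
# files inside a fresh venv.
python3 -m venv venv
source venv/bin/activate
pip install --upgrade build
rm dist/stackql_deploy*    # clear any previous build artifacts
python3 -m build
# twine upload dist/*      # assumed publish step; not shown in this diff
```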

examples/databricks/all-purpose-cluster/README.md

Lines changed: 10 additions & 18 deletions

````diff
@@ -26,7 +26,7 @@ Now, is is convenient to use environment variables for context. Note that for o
 ```bash
 #!/usr/bin/env bash
 
-export ASSETS_AWS_REGION='us-east-1' # or wherever you want
+export AWS_REGION='us-east-1' # or wherever you want
 export AWS_ACCOUNT_ID='<your aws account ID>'
 export DATABRICKS_ACCOUNT_ID='<your databricks account ID>'
 export DATABRICKS_AWS_ACCOUNT_ID='<your databricks aws account ID>'
@@ -46,28 +46,20 @@ export AWS_ACCESS_KEY_ID='<your aws access key id per aws cli>'
 Now, let us do some sanity checks and housekeeping with `stackql`. This is purely optional. From the root of this repository:
 
 ```
-
 source examples/databricks/all-purpose-cluster/convenience.sh
-
 stackql shell
-
 ```
 
 This will start a `stackql` interactive shell. Here are some commands you can run (I will not place output here, that will be shared in a corresponding video):
 
 
 ```sql
-
 registry pull databricks_account v24.12.00279;
-
 registry pull databricks_workspace v24.12.00279;
 
 -- This will fail if accounts, subscription, or credentials are in error.
 select account_id FROM databricks_account.provisioning.credentials WHERE account_id = '<your databricks account id>';
-
-
 select account_id, workspace_name, workspace_id, workspace_status from databricks_account.provisioning.workspaces where account_id = '<your databricks account id>';
-
 ```
 
 For extra credit, you can (asynchronously) delete the unnecessary workspace with `delete from databricks_account.provisioning.workspaces where account_id = '<your databricks account id>' and workspace_id = '<workspace id>';`, where you obtain the workspace id from the above query. I have noted that due to some reponse caching it takes a while to disappear from select queries (much longer than disappearance from the web page), and you may want to bounce the `stackql` session to hurry things along. This is not happening on the `stackql` side, but session bouncing forces a token refresh which can help cache busting.
@@ -77,20 +69,20 @@ For extra credit, you can (asynchronously) delete the unnecessary workspace with
 Time to get down to business. From the root of this repository:
 
 ```bash
-
+python3 -m venv myenv
 source examples/databricks/all-purpose-cluster/convenience.sh
-
-source ./.venv/bin/activate
-
-
+source venv/bin/activate
+pip install stackql-deploy
 ```
 
+> alternatively set the `AWS_REGION`, `AWS_ACCOUNT_ID`, `DATABRICKS_ACCOUNT_ID`, `DATABRICKS_AWS_ACCOUNT_ID` along with provider credentials `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `DATABRICKS_CLIENT_ID`, `DATABRICKS_CLIENT_SECRET`
+
 Then, do a dry run (good for catching **some** environmental issues):
 
 ```bash
 stackql-deploy build \
 examples/databricks/all-purpose-cluster dev \
--e AWS_REGION=${ASSETS_AWS_REGION} \
+-e AWS_REGION=${AWS_REGION} \
 -e AWS_ACCOUNT_ID=${AWS_ACCOUNT_ID} \
 -e DATABRICKS_ACCOUNT_ID=${DATABRICKS_ACCOUNT_ID} \
 -e DATABRICKS_AWS_ACCOUNT_ID=${DATABRICKS_AWS_ACCOUNT_ID} \
@@ -105,7 +97,7 @@ Now, let use do it for real:
 ```bash
 stackql-deploy build \
 examples/databricks/all-purpose-cluster dev \
--e AWS_REGION=${ASSETS_AWS_REGION} \
+-e AWS_REGION=${AWS_REGION} \
 -e AWS_ACCOUNT_ID=${AWS_ACCOUNT_ID} \
 -e DATABRICKS_ACCOUNT_ID=${DATABRICKS_ACCOUNT_ID} \
 -e DATABRICKS_AWS_ACCOUNT_ID=${DATABRICKS_AWS_ACCOUNT_ID} \
@@ -128,7 +120,7 @@ We can also use `stackql-deploy` to assess if our infra is shipshape:
 ```bash
 stackql-deploy test \
 examples/databricks/all-purpose-cluster dev \
--e AWS_REGION=${ASSETS_AWS_REGION} \
+-e AWS_REGION=${AWS_REGION} \
 -e AWS_ACCOUNT_ID=${AWS_ACCOUNT_ID} \
 -e DATABRICKS_ACCOUNT_ID=${DATABRICKS_ACCOUNT_ID} \
 -e DATABRICKS_AWS_ACCOUNT_ID=${DATABRICKS_AWS_ACCOUNT_ID} \
@@ -151,7 +143,7 @@ Now, let us teardown our `stackql-deploy` managed infra:
 ```bash
 stackql-deploy teardown \
 examples/databricks/all-purpose-cluster dev \
--e AWS_REGION=${ASSETS_AWS_REGION} \
+-e AWS_REGION=${AWS_REGION} \
 -e AWS_ACCOUNT_ID=${AWS_ACCOUNT_ID} \
 -e DATABRICKS_ACCOUNT_ID=${DATABRICKS_ACCOUNT_ID} \
 -e DATABRICKS_AWS_ACCOUNT_ID=${DATABRICKS_AWS_ACCOUNT_ID} \
````
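These hunks rename `ASSETS_AWS_REGION` to `AWS_REGION` across every `stackql-deploy build`/`test`/`teardown` invocation. A rename like this is easy to leave half-done; a quick, illustrative check for stragglers:

```bash
# Illustrative consistency check: any remaining references to the old
# variable name should turn up here (grep exits 1 if none are found).
grep -rn 'ASSETS_AWS_REGION' examples/
```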

examples/databricks/all-purpose-cluster/convenience.sh

Lines changed: 3 additions & 3 deletions

````diff
@@ -10,9 +10,9 @@ then
 source "${REPOSITORY_ROOT}/examples/databricks/all-purpose-cluster/sec/env.sh"
 fi
 
-if [ "${ASSETS_AWS_REGION}" = "" ];
+if [ "${AWS_REGION}" = "" ];
 then
-ASSETS_AWS_REGION='us-east-1'
+AWS_REGION='us-east-1'
 fi
 
 if [ "${AWS_ACCOUNT_ID}" = "" ];
@@ -57,7 +57,7 @@ then
 exit 1
 fi
 
-export ASSETS_AWS_REGION
+export AWS_REGION
 export AWS_ACCOUNT_ID
 export DATABRICKS_ACCOUNT_ID
 export DATABRICKS_AWS_ACCOUNT_ID
````
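The script guards the variable with an `if [ ... = "" ]` test before assigning a default. For what it's worth, bash parameter expansion expresses the same default-if-empty logic in one line; a sketch of the alternative, not what `convenience.sh` actually uses:

```bash
# Equivalent default-if-empty via parameter expansion (sketch only):
# ':-' covers both unset and empty values.
export AWS_REGION="${AWS_REGION:-us-east-1}"
```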
Lines changed: 56 additions & 0 deletions

````diff
@@ -0,0 +1,56 @@
+/*+ exists */
+SELECT COUNT(*) as count FROM
+(
+SELECT allocation_id,
+json_group_object(tag_key, tag_value) as tags
+FROM aws.ec2.eip_tags
+WHERE region = '{{ region }}'
+GROUP BY allocation_id
+HAVING json_extract(tags, '$.Provisioner') = 'stackql'
+AND json_extract(tags, '$.StackName') = '{{ stack_name }}'
+AND json_extract(tags, '$.StackEnv') = '{{ stack_env }}'
+) t
+
+/*+ create */
+INSERT INTO aws.ec2.eips (
+NetworkBorderGroup,
+Tags,
+ClientToken,
+region
+)
+SELECT
+'{{ region }}',
+'{{ tags }}',
+'{{ idempotency_token }}',
+'{{ region }}'
+
+/*+ statecheck, retries=3, retry_delay=5 */
+SELECT COUNT(*) as count FROM
+(
+SELECT allocation_id,
+json_group_object(tag_key, tag_value) as tags
+FROM aws.ec2.eip_tags
+WHERE region = '{{ region }}'
+GROUP BY allocation_id
+HAVING json_extract(tags, '$.Provisioner') = 'stackql'
+AND json_extract(tags, '$.StackName') = '{{ stack_name }}'
+AND json_extract(tags, '$.StackEnv') = '{{ stack_env }}'
+) t
+
+/*+ exports, retries=3, retry_delay=5 */
+SELECT allocation_id as eip_allocation_id, public_ip as eip_public_id FROM
+(
+SELECT allocation_id, public_ip,
+json_group_object(tag_key, tag_value) as tags
+FROM aws.ec2.eip_tags
+WHERE region = '{{ region }}'
+GROUP BY allocation_id
+HAVING json_extract(tags, '$.Provisioner') = 'stackql'
+AND json_extract(tags, '$.StackName') = '{{ stack_name }}'
+AND json_extract(tags, '$.StackEnv') = '{{ stack_env }}'
+) t
+
+/*+ delete */
+DELETE FROM aws.ec2.eips
+WHERE data__Identifier = '{{ eip_public_id }}|{{ eip_allocation_id}}'
+AND region = '{{ region }}'
````
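This new resource file follows the `stackql-deploy` query-hint convention: each `/*+ ... */` comment marks a lifecycle phase (`exists`, `create`, `statecheck`, `exports`, `delete`) built from templated values such as `{{ region }}` and `{{ stack_name }}`. To sanity-check a phase in isolation, one can render the template by hand and run it once with `stackql exec`; a sketch with assumed substitutions (the region, stack name, and env below are hypothetical):

```bash
# One-off smoke test of the rendered 'exists' query (values assumed):
# counts EIPs already tagged for this stack in the target region.
stackql exec "SELECT COUNT(*) as count FROM (
  SELECT allocation_id, json_group_object(tag_key, tag_value) as tags
  FROM aws.ec2.eip_tags
  WHERE region = 'us-east-1'
  GROUP BY allocation_id
  HAVING json_extract(tags, '\$.Provisioner') = 'stackql'
  AND json_extract(tags, '\$.StackName') = 'databricks-all-purpose-cluster'
  AND json_extract(tags, '\$.StackEnv') = 'dev'
) t"
```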
