Commit f4a32c6

Merge pull request #2 from Rathan8/main
removed $ char & adjusted timeouts for init conn
2 parents eb848e2 + 7cc4bb4

File tree

2 files changed: +30 −29 lines
README.MD

Lines changed: 25 additions & 25 deletions
@@ -25,27 +25,27 @@ to run workloads with variable traffic patterns rather than traditional approach
 
 Manually set the AWS Region used to create the stack:
 ```
-$export AWS_REGION=us-east-1
+export AWS_REGION=us-east-1
 ```
 Or use the Region already configured for the CLI; to check the current Region:
 ```
-$export AWS_REGION=$(aws configure get region)
+export AWS_REGION=$(aws configure get region)
 ```
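The two exports above can be combined so that a value already in the environment wins and a default is used otherwise; a minimal sketch using shell parameter expansion (the `us-east-1` default is just an example):

```shell
unset AWS_REGION                                 # simulate a fresh shell
export AWS_REGION="${AWS_REGION:-us-east-1}"     # keep existing value, else default
echo "$AWS_REGION"
```

In a real session the default could instead come from `aws configure get region`.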
 Add an environment variable for the account ID from your AWS configuration:
 ```
-$export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
+export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
 ```
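Since every later command interpolates these two variables, it can be worth failing fast when either is empty; a small sketch (the values are dummies for illustration) using the shell's `:?` expansion:

```shell
export AWS_REGION="us-east-1"          # dummy values for illustration
export AWS_ACCOUNT_ID="123456789012"

# Abort with an error message if either variable is unset or empty.
: "${AWS_REGION:?AWS_REGION is not set}"
: "${AWS_ACCOUNT_ID:?AWS_ACCOUNT_ID is not set}"
echo "target: account $AWS_ACCOUNT_ID, region $AWS_REGION"
```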
 First, use the AWS CLI to create an S3 bucket to store the CFN template and custom plugins, which makes the post-deployment cleanup easier:
 ```
-$aws s3api create-bucket --bucket blog-eks-msk-aks-$AWS_ACCOUNT_ID
+aws s3api create-bucket --bucket blog-eks-msk-aks-$AWS_ACCOUNT_ID
 ```
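One caveat: `aws s3api create-bucket` with no extra arguments only succeeds in us-east-1; every other Region requires an explicit LocationConstraint. A sketch of the Region-aware form (the `aws` function below is a stub that echoes instead of calling the real CLI, so the branching can be exercised offline; remove it in a real session):

```shell
aws() { echo "aws $*"; }               # stub: echo instead of calling the real CLI

AWS_REGION="eu-west-1"                 # dummy values for illustration
AWS_ACCOUNT_ID="123456789012"

if [ "$AWS_REGION" = "us-east-1" ]; then
  out=$(aws s3api create-bucket --bucket "blog-eks-msk-aks-$AWS_ACCOUNT_ID")
else
  out=$(aws s3api create-bucket --bucket "blog-eks-msk-aks-$AWS_ACCOUNT_ID" \
        --create-bucket-configuration "LocationConstraint=$AWS_REGION")
fi
echo "$out"
```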
 Switch to the `Templates` folder and deploy cfn-eks-msk-aks.yml; the template creates the AWS resources.
 Deploying the CFN template requires you to pass parameters for the SSH key (use a key pair that already exists in the AWS account, or create a new one)
 and the IP address range allowed to access the Kafka client instance.
 
 ```
-$cd Templates
-$aws cloudformation deploy --template-file cfn-eks-msk-aks.yml --stack-name eks-msk-aks-sink-stack --parameter-overrides KeyName= < SSH-KEY-Name > SSHLocation=< ip-address > --tags aws-blog=eks-msk-aks-sink --s3-bucket blog-eks-msk-aks-$AWS_ACCOUNT_ID --capabilities CAPABILITY_NAMED_IAM --on-failure ROLLBACK
+cd Templates
+aws cloudformation deploy --template-file cfn-eks-msk-aks.yml --stack-name eks-msk-aks-sink-stack --parameter-overrides KeyName=<SSH-KEY-Name> SSHLocation=<ip-address> --tags aws-blog=eks-msk-aks-sink --s3-bucket blog-eks-msk-aks-$AWS_ACCOUNT_ID --capabilities CAPABILITY_NAMED_IAM --on-failure ROLLBACK
 ```
 KeyName is the SSH key name to be used for the Kafka client instance. SSHLocation is the IP address range allowed SSH access (default is 0.0.0.0/0).

@@ -60,51 +60,51 @@ The CloudFormation stack will create the following resources:
 
 Run the following command to get the outputs of the created stack and save them in the stack_resources_output file:
 ```
-$aws cloudformation describe-stacks --stack-name eks-msk-aks-sink-stack --query "Stacks[0].Outputs[*].[OutputKey,OutputValue]" --output text > stack_resources_output
+aws cloudformation describe-stacks --stack-name eks-msk-aks-sink-stack --query "Stacks[0].Outputs[*].[OutputKey,OutputValue]" --output text > stack_resources_output
 ```
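The file written above is tab-separated OutputKey/OutputValue pairs (that is what `--output text` produces for this query), so individual values can be pulled out with awk. A sketch against a fabricated example file; the keys and values here are made up for illustration:

```shell
# Fabricate a file in the same format as the describe-stacks output.
printf 'EKSClusterName\teks-twitter-cluster\nMSKBootstrapServers\tb-1.example:9092\n' > stack_resources_output.example

# Look up a single output value by its key.
eks_name=$(awk -F'\t' '$1 == "EKSClusterName" {print $2}' stack_resources_output.example)
echo "$eks_name"
```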
 
 Based on stack_resources_output, the helper _update-param_ script updates the parameters in the template files used for pod and connector deployment:
 ```
-$sh update-param.sh
+sh update-param.sh
 ```
 So far you have deployed the stack using the CFN template and used the stack output to update parameters for the pod and connector deployment templates.
 ## Build and Deploy Event source application on EKS
 
 Run the following commands to log in to the ECR registry:
 ```
-$cd ..
-$aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com
+cd ..
+aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com
 ```
 
 Run the following commands to build the Docker container image, add a tag, and push the image to the ECR registry:
 ```
-$mvn package
-$docker-compose build
-$docker tag amazon-keyspaces-with-apache-kafka:latest $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/amazon-keyspaces-with-apache-kafka:latest
-$docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/amazon-keyspaces-with-apache-kafka:latest
+mvn package
+docker-compose build
+docker tag amazon-keyspaces-with-apache-kafka:latest $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/amazon-keyspaces-with-apache-kafka:latest
+docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/amazon-keyspaces-with-apache-kafka:latest
 ```
 
 The aws eks update-kubeconfig command updates the default kubeconfig file to use eks-twitter-cluster as the current context:
 ```
-$aws eks update-kubeconfig --name eks-twitter-cluster
+aws eks update-kubeconfig --name eks-twitter-cluster
 ```
 Run the following kubectl commands to create the namespace and verify that it was created:
 ```
-$kubectl create namespace eks-msk-twitter
-$kubectl get namespaces
+kubectl create namespace eks-msk-twitter
+kubectl get namespaces
 ```
 
 Switch to the `Templates` folder:
 ```
-$cd Templates
+cd Templates
 ```
 You need to replace the value of the bearer token placeholder _<< Bearer Token >>_ in the _twitter-cfn-app-deployment.yaml_ template for Twitter access before deploying the pods to the eks-msk-twitter namespace.
 
 Run the following commands to deploy the pods and verify the pod status:
 
 ```
-$kubectl apply -f twitter-cfn-app-deployment.yaml
-$kubectl get pods -n=eks-msk-twitter -o wide
+kubectl apply -f twitter-cfn-app-deployment.yaml
+kubectl get pods -n=eks-msk-twitter -o wide
 ```
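The token substitution can be scripted rather than hand-edited; a sketch using sed against a stand-in file (the filename, environment variable, and token value are all illustrative, assuming the template contains the literal placeholder `<< Bearer Token >>`):

```shell
# Stand-in for the real deployment template, just for this sketch.
printf 'env:\n  - name: BEARER_TOKEN\n    value: "<< Bearer Token >>"\n' > deployment-example.yaml

TWITTER_BEARER_TOKEN="dummy-token"     # illustrative value
# Substitute the placeholder in place (.bak keeps a backup copy).
sed -i.bak "s|<< Bearer Token >>|$TWITTER_BEARER_TOKEN|" deployment-example.yaml
grep 'value:' deployment-example.yaml
```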
 
 Now the application is deployed: it starts reading the sample stream data, parses the tweet data, and writes it to the Kafka topic. The next step is to create the MSK connector to sink data from the Kafka topic to Amazon Keyspaces.
@@ -116,7 +116,7 @@ Make changes to connector configuration in kafka-keyspaces-connector.json file
 Replace the username and password used to connect to Amazon Keyspaces in the _kafka-keyspaces-connector.json_ file: "auth.username": "< keyspaces-user-at >", "auth.password": "< password >",
 Run the following command to deploy the connector:
 ```
-$aws kafkaconnect create-connector --cli-input-json file://kafka-keyspaces-connector.json
+aws kafkaconnect create-connector --cli-input-json file://kafka-keyspaces-connector.json
 ```
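`create-connector` returns before the connector is ready, so its state can be checked with `aws kafkaconnect list-connectors`. A sketch (the `aws` function below is a stub returning a canned state so the call shape can be shown offline; the name prefix is an assumption, since the actual connector name lives in the JSON file):

```shell
aws() { echo "RUNNING"; }              # stub: a real session would call the CLI

state=$(aws kafkaconnect list-connectors \
  --connector-name-prefix kafka-keyspaces \
  --query "connectors[0].connectorState" --output text)
echo "connector state: $state"
```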
 After the deployment completes, the sink connector uses interface VPC endpoints to connect and write messages from the Kafka topic to Amazon Keyspaces tables.
 Now the data pipeline is complete. You can use the cqlsh-expansion library, which extends the existing cqlsh library with additional helpers, or the Amazon Keyspaces CQL console to read the data from the Amazon Keyspaces tables.
@@ -125,14 +125,14 @@ Use the following command to connect to Amazon Keyspaces using the cqlsh-expansion
 
 To install and configure the cqlsh-expansion utility:
 ```
-$pip install --user cqlsh-expansion
-$cqlsh-expansion.init
+pip install --user cqlsh-expansion
+cqlsh-expansion.init
 ```
 
 To connect to Amazon Keyspaces and check the data:
 
 ```
-$cqlsh-expansion cassandra.$AWS_REGION.amazonaws.com 9142 --ssl --auth-provider "SigV4AuthProvider"
+cqlsh-expansion cassandra.$AWS_REGION.amazonaws.com 9142 --ssl --auth-provider "SigV4AuthProvider"
 ```
 ### Sample CQL Queries to get data from Amazon Keyspaces
 CQL query to get sample data from the tweet_by_tweet_id and tweet_by_user tables
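The queries themselves sit outside this diff hunk; a representative query of the kind the section describes, using columns taken from the connector's INSERT mapping (the column selection and LIMIT are illustrative):

```
SELECT tweet_id, username, lang, tweet_time
FROM aws_blog.tweet_by_tweet_id
LIMIT 10;
```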
@@ -158,7 +158,7 @@ Run delete-stack bash script to delete all the resources, created as part of thi
 
 Run the following command to delete the stack:
 ```
-$./delete-stack.sh
+./delete-stack.sh
 ```
 ## Summary

Templates/kafka-keyspaces-connector.json

Lines changed: 5 additions & 4 deletions
@@ -45,9 +45,8 @@
 "datastax-java-driver.basic.load-balancing-policy.local-datacenter":"keyspaces_dc",
 "datastax-java-driver.advanced.retry-policy.class": "DefaultRetryPolicy",
 "datastax-java-driver.basic.default-idempotence": "true",
-"datastax-java-driver.advanced.connection.init-query-timeout": "10",
-"datastax-java-driver.advanced.control-connection.timeout": "10",
-"datastax-java-driver.basic.consistency": "LOCAL_QUORUM",
+"datastax-java-driver.advanced.connection.init-query-timeout": "500",
+"datastax-java-driver.advanced.control-connection.timeout": "500",
 "auth.provider":"PLAIN",
 "auth.username": "< keyspaces-user-at >",
 "auth.password": "< password >",
@@ -58,11 +57,13 @@
 "connectionPoolLocalSize":"3",
 "maxNumberOfRecordsInBatch": "1",
 "ssl.hostnameValidation": "false",
-"queryExecutionTimeout":"10",
+"queryExecutionTimeout":"100",
 "topic.twitter_input.aws_blog.tweet_by_tweet_id.deletesEnabled":"false",
+"topic.twitter_input.aws_blog.tweet_by_tweet_id.consistency":"false",
 "topic.twitter_input.aws_blog.tweet_by_tweet_id.query":"INSERT INTO aws_blog.tweet_by_tweet_id(hashtag,tweet_id,tweet,lang,username,tweet_time) VALUES (:tag,:id,:text,:lang,:username,:timestamp) USING TTL 259200;",
 "topic.twitter_input.aws_blog.tweet_by_tweet_id.mapping": "tag=value.hashtag, text=value.text, id=value.id, timestamp=value.createdAt, username=value.username, lang=value.lang",
 "topic.twitter_input.aws_blog.tweet_by_user.deletesEnabled":"false",
+"topic.twitter_input.aws_blog.tweet_by_user.consistency":"false",
 "topic.twitter_input.aws_blog.tweet_by_user.query":"INSERT INTO aws_blog.tweet_by_user(hashtag,tweet_id,tweet,lang,username,tweet_time) VALUES (:tag,:id,:text,:lang,:username,:timestamp) USING TTL 259200;",
 "topic.twitter_input.aws_blog.tweet_by_user.mapping": "tag=value.hashtag, text=value.text, id=value.id, timestamp=value.createdAt, username=value.username, lang=value.lang"
 }
