Learn about drivers, connectivity and requests by running a simple API with Apache Cassandra/Astra DB as its data backend. The steps are available in several languages.
Click here for the workshop slide deck.
This workshop, the third in a series, builds on the same example used in the two previous episodes (an IoT application to access temperature measurements collected from a network of sensors).
Besides some knowledge of the example domain used in this workshop, it is desirable to have familiarity with the concepts explored in the two previous installments of the series:
It is assumed in the following that you have already created your Astra DB instance as instructed in the first episode, and that you have a valid "DB Administrator" token. Note: the token auto-created along with the database does not have all the permissions we need, so you must manually create a token with the higher "DB Administrator" role and use it in what follows.
In case you don't have your Astra DB yet, go ahead and create it now for free by clicking here:
Tip: call the database `workshops` and the keyspace `sensor_data`.
If you already have a database `workshops` but no `sensor_data` keyspace, simply add the keyspace using the "Add Keyspace" button on the bottom right-hand corner of your DB dashboard; do this rather than creating another database with the same name. (Also, on the free tier you have to "Resume" the database if it has been "Hibernated" after prolonged inactivity.)
If you don't have a "DB Administrator" token yet, log in to your Astra DB and create a token with this role. To create the token, click on the "..." menu next to your database in the main Astra dashboard and choose "Generate token". Then make sure you select the "DB Administrator" role. Download or note down all components of the token before navigating away: these will not be shown again. See here for more on token creation.
⚠️ Important: The instructor will show the token creation on screen, but will then destroy it immediately for security reasons.
Keep in mind that, as mentioned above, the default token auto-created for you along with the database is not powerful enough for today's exercises.
First, open this repo in Gitpod by right-clicking the following button ("open in new tab"):
In a couple of minutes you will have your Gitpod IDE up and running, with this repo cloned, ready and waiting for you (you may have to authorize the Gitpod single-sign-on to continue).
You may see a dialog about "opening this workspace in VS Code Desktop": you can safely dismiss it.
Note: The next steps are to be executed within the Gitpod IDE.
Astra CLI is preinstalled: configure it by providing your `AstraCS:...` database token when prompted:
astra setup
(Optional) Now you can use the CLI to get some info on your database(s):
astra db list
astra db get workshops
Click here if you have multiple databases called "workshops"
DB names are not required to be unique: what is unique is the "Database ID".
If you find yourself with more than one "workshops" database, pass the ID instead of the name to the CLI commands: since the ID unambiguously identifies the target, they will work just as well.
The Astra CLI can also launch a `cqlsh` session for you, automatically connected to your database. Use this feature to execute a CQL script that resets the contents of the `sensor_data` keyspace, creating the right tables and writing representative data to them:
# Make sure the DB exists (resuming it if hibernated)
astra db create workshops -k sensor_data --if-not-exist --wait
# Launch the initialization script
astra db cqlsh workshops -f initialize.cql
You are encouraged to peek at the contents of the script to see what it does.
(Optional) Interactively run some test queries on the newly-populated keyspace
Click to show test queries
Open an interactive `cqlsh` shell with:
astra db cqlsh workshops -k sensor_data
Now you can copy-paste any of the queries below and execute them with the Enter key:
-- Q1 (note 'all' is the only partition key in this table)
SELECT name, description, region, num_sensors
FROM networks
WHERE bucket = 'all';
-- Q2
SELECT date_hour, avg_temperature, latitude, longitude, sensor
FROM temperatures_by_network
WHERE network = 'forest-net'
AND week = '2020-07-05'
AND date_hour >= '2020-07-05'
AND date_hour < '2020-07-07';
-- Q3
SELECT *
FROM sensors_by_network
WHERE network = 'forest-net';
-- Q4
SELECT timestamp, value
FROM temperatures_by_sensor
WHERE sensor = 's1003'
AND date = '2020-07-06';
To close `cqlsh` and get back to the shell prompt, execute the `EXIT` command.
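Once the keyspace is populated, the same queries can be issued from application code through a driver, which is the topic of this workshop. The following is a minimal sketch, not the workshop's reference implementation, assuming the Python `cassandra-driver` package and the connection variables written to `.env` by the Astra CLI (`ASTRA_DB_SECURE_BUNDLE_PATH`, `ASTRA_DB_APPLICATION_TOKEN`, `ASTRA_DB_KEYSPACE`; check your own `.env` for the exact names). The `week_bucket` helper encodes the assumption, suggested by query Q2, that the `week` partition column of `temperatures_by_network` holds the date of the Sunday starting each week:

```python
# Sketch only: running a single-day slice of query Q2 from Python with
# the DataStax cassandra-driver. Environment-variable names are those
# written by `astra db create-dotenv`; check your own .env file.
import os
from datetime import date, timedelta


def week_bucket(day: date) -> date:
    """Return the Sunday starting the week containing `day` (assumed,
    as query Q2 suggests, to be the value of the `week` partition
    column of temperatures_by_network)."""
    return day - timedelta(days=(day.weekday() + 1) % 7)


def fetch_temperatures(session, network: str, day: date):
    # Prepared statement for a one-day slice of query Q2:
    stmt = session.prepare(
        "SELECT date_hour, avg_temperature, latitude, longitude, sensor "
        "FROM temperatures_by_network "
        "WHERE network = ? AND week = ? AND date_hour >= ? AND date_hour < ?"
    )
    return session.execute(
        stmt, (network, week_bucket(day), day, day + timedelta(days=1))
    ).all()


if __name__ == "__main__":
    # The driver import and connection live here so the helpers above
    # remain usable (and testable) without a running database.
    from cassandra.cluster import Cluster
    from cassandra.auth import PlainTextAuthProvider

    cluster = Cluster(
        cloud={"secure_connect_bundle": os.environ["ASTRA_DB_SECURE_BUNDLE_PATH"]},
        auth_provider=PlainTextAuthProvider(
            "token", os.environ["ASTRA_DB_APPLICATION_TOKEN"]
        ),
    )
    session = cluster.connect(os.environ["ASTRA_DB_KEYSPACE"])
    for row in fetch_temperatures(session, "forest-net", date(2020, 7, 6)):
        print(row.sensor, row.date_hour, row.avg_temperature)
```

Preparing the statement and binding values, rather than interpolating strings into CQL, is the idiomatic driver usage you will see throughout the workshop code.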
You can use the Astra CLI to prepare a dotenv file which defines all connection parameters and secrets needed for your application to run:
astra db create-dotenv workshops -k sensor_data
A `.env` file will be created (you can peek at it with Gitpod's file editor, e.g. by running `gp open .env`).
You can now source it with:
source .env
Note: the `.env` file is handled differently in each implementation (Java, Python, Javascript), as will be shown later.
Note: while creating the `.env` file, the database's Secure Connect Bundle has also been downloaded for you: you may want to check that the file is about 12-13 KiB in size with `ls $ASTRA_DB_SECURE_BUNDLE_PATH -lh`.
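For reference, `source .env` simply exports the file's `KEY="value"` pairs into the shell environment; application code can do the equivalent itself. Below is a minimal, dependency-free sketch of such a loader (real projects typically use a library such as `python-dotenv` instead); the sample keys are illustrations only:

```python
# Minimal sketch of reading a dotenv-style file (KEY="value" lines)
# into a dict -- roughly what `source .env` does for the shell.
# Real applications usually rely on a library such as python-dotenv.
import os


def load_dotenv(path: str) -> dict:
    env = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines and comments
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            # The Astra CLI quotes its values; strip surrounding quotes
            env[key.strip()] = value.strip().strip('"')
    return env


if __name__ == "__main__":
    # Export the values without overwriting anything already set
    for key, value in load_dotenv(".env").items():
        os.environ.setdefault(key, value)
```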
Note: it is suggested that you review the sensor data model, to be better prepared for what follows. Keep it open in another tab.
Choose your path:
In order to get a badge of completion for this workshop, complete the following assignment:
Add a GET endpoint to your API corresponding to query Q1 ("Find information about all networks; order by name (asc)"). Tip: remember the data-modeling optimization of having inserted the `bucket` column.
Take a screenshot of the relevant code block and of a successful request to that endpoint and head over to this form. Answer a couple of "theory" questions, attach your screenshot, and hit "Submit".
That's it! Expect to be awarded your badge in the next week or so!
This is not the end of your journey, rather the start: come visit us for more cool content, and learn how to succeed using Cassandra and Astra DB in your applications!
Congratulations and see you at our next workshop!
Sincerely yours, the DataStax Developers