This project demonstrates how to control hardware components like LEDs on a Raspberry Pi through an Alexa skill, leveraging an IO Inventory module for scalable and maintainable device management.
It sets up an Alexa skill server using Bottle and ASK SDK, with support for Gevent to handle server operations. The configuration is managed using dotenv, and the project follows best practices for logging and configuration management.
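At a high level, `run.py` wires the request handlers described later in this README into a Bottle app and serves it with gevent over HTTPS. The sketch below is illustrative only; the module paths, handler registration, and the `/` endpoint are assumptions, not the project's exact code.

```python
# Illustrative sketch of the Bottle + ASK SDK + gevent wiring (not the project's exact run.py).
from bottle import Bottle, request, response
from gevent.pywsgi import WSGIServer
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_webservice_support.webservice_handler import WebserviceSkillHandler

sb = SkillBuilder()
# sb.add_request_handler(LaunchRequestHandler())   # handlers shown later in this README
# sb.add_request_handler(CustomIntentHandler())
# ... register the remaining handlers

app = Bottle()
skill_handler = WebserviceSkillHandler(skill=sb.create())

@app.post("/")  # assumed endpoint path
def alexa_endpoint():
    body = request.body.read().decode("utf-8")
    result = skill_handler.verify_request_and_dispatch(dict(request.headers), body)
    response.content_type = "application/json"
    return result

if __name__ == "__main__":
    # Values mirror the documented defaults (SERVER_HOST, SERVER_PORT, SSL paths).
    server = WSGIServer(
        ("0.0.0.0", 8080),
        app,
        certfile="/etc/ssl/private/insecure.pem",
        keyfile="/etc/ssl/insecure.key",
    )
    server.serve_forever()
```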
- Clone the repository:

  ```bash
  git clone https://github.com/kjpou1/alexa_LED_controller.git
  cd alexa_LED_controller
  ```

- Create a virtual environment and activate it:

  ```bash
  python3 -m venv venv
  source venv/bin/activate
  ```

- Install the required packages:

  ```bash
  pip install -r requirements.txt
  ```

- Copy `example_env` to `.env` and configure your environment variables:

  ```bash
  cp example_env .env
  ```

  Edit `.env` and set the appropriate values for your configuration.
Configuration settings are managed using environment variables loaded from a .env file. The Config class in app/config/config.py handles loading these settings.
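A minimal sketch of what that class might look like, assuming it simply maps each variable below to an attribute (the actual implementation may differ):

```python
# Illustrative sketch of app/config/config.py (not the exact implementation).
import os
from dotenv import load_dotenv

load_dotenv()  # load variables from the .env file at the project root

class Config:
    def __init__(self):
        self.server_host = os.getenv("SERVER_HOST", "0.0.0.0")
        self.server_port = int(os.getenv("SERVER_PORT", "8080"))
        self.intent = os.getenv("INTENT", "HelloWorldIntent")
        self.debug = os.getenv("DEBUG", "true").lower() == "true"
        self.ssl_certificate = os.getenv("SSL_CERTIFICATE", "/etc/ssl/private/insecure.pem")
        self.ssl_private_key = os.getenv("SSL_PRIVATE_KEY", "/etc/ssl/insecure.key")
```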
The following environment variables are used to configure the application. These should be defined in the .env file at the root of the project.
- `SERVER_HOST`
  - Description: The host address on which the server will run.
  - Default Value: `0.0.0.0`
  - Example: `SERVER_HOST=0.0.0.0`

- `SERVER_PORT`
  - Description: The port number on which the server will run.
  - Default Value: `8080`
  - Example: `SERVER_PORT=8080`

- `INTENT`
  - Description: The custom intent to be handled by the skill.
  - Default Value: `HelloWorldIntent`
  - Example: `INTENT=HelloWorldIntent`

- `DEBUG`
  - Description: Enables or disables debug mode.
  - Default Value: `true`
  - Example: `DEBUG=true`

- `SSL_CERTIFICATE`
  - Description: The path to the SSL certificate file.
  - Default Value: `/etc/ssl/private/insecure.pem`
  - Example: `SSL_CERTIFICATE="/etc/ssl/private/insecure.pem"`

- `SSL_PRIVATE_KEY`
  - Description: The path to the SSL private key file.
  - Default Value: `/etc/ssl/insecure.key`
  - Example: `SSL_PRIVATE_KEY="/etc/ssl/insecure.key"`
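Putting those together, a `.env` file might look like the following. The values are the documented defaults; note that the interaction model later in this README defines `OnOffIntent`, so set `INTENT` to match your skill:

```bash
# .env example based on the defaults documented above
SERVER_HOST=0.0.0.0
SERVER_PORT=8080
# Documented default is HelloWorldIntent; this README's interaction model uses OnOffIntent.
INTENT=OnOffIntent
DEBUG=true
SSL_CERTIFICATE="/etc/ssl/private/insecure.pem"
SSL_PRIVATE_KEY="/etc/ssl/insecure.key"
```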
- Create necessary directories:

  ```bash
  sudo mkdir -p /etc/ssl/private
  ```

- Create the self-signed certificate and key:

  ```bash
  ./self-signed-certificate.sh
  ```

- Set appropriate permissions if you see `PermissionError: [Errno 13] Permission denied`:

  ```bash
  sudo chmod 644 /etc/ssl/private/insecure.pem
  sudo chmod 644 /etc/ssl/insecure.key
  ```

- Ensure ownership if the `PermissionError: [Errno 13] Permission denied` persists:

  ```bash
  sudo chown $(whoami):$(whoami) /etc/ssl/private/insecure.pem
  sudo chown $(whoami):$(whoami) /etc/ssl/insecure.key
  ```
This will create the following files:

- `/etc/ssl/insecure.key`: The private key file.
- `/etc/ssl/private/insecure.pem`: The SSL certificate file.
- `/etc/ssl/dhparam.pem`: The DH parameter file.
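For reference, a script producing those files could look roughly like this, using standard `openssl` commands (a sketch only; the repository's `self-signed-certificate.sh` may differ):

```bash
#!/usr/bin/env bash
# Sketch only; the actual self-signed-certificate.sh may differ.
set -e

# Self-signed certificate and private key, valid for one year
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /etc/ssl/insecure.key \
  -out /etc/ssl/private/insecure.pem \
  -subj "/CN=localhost"

# Diffie-Hellman parameters
openssl dhparam -out /etc/ssl/dhparam.pem 2048
```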
Note
Ensure the /etc/ssl/private directory exists before running the script. This script creates self-signed certificates for testing purposes only and is not recommended for production environments.
Warning
Permissions are sometimes operating system dependent. Follow your own permissions strategy.
If you encounter the following error:

```
oscrypto.errors.LibraryNotFoundError: Error detecting the version of libcrypto
```

it indicates that the required cryptographic libraries are not found on your system. Follow these steps to resolve the issue:

- Ensure OpenSSL is installed.

  On Debian/Ubuntu:

  ```bash
  sudo apt-get update
  sudo apt-get install openssl libssl-dev
  ```

  On RHEL/CentOS:

  ```bash
  sudo yum install openssl openssl-devel
  ```

  If you are using Homebrew, you can install OpenSSL as follows:

  ```bash
  brew install openssl
  brew link openssl --force
  ```

- Reinstall the `cryptography` library:

  ```bash
  pip uninstall cryptography
  pip install cryptography
  ```

- Verify the OpenSSL version in Python:

  ```python
  import ssl
  print(ssl.OPENSSL_VERSION)
  ```

To resolve the `LibraryNotFoundError` related to libcrypto on a Raspberry Pi, follow these steps:
- Install the latest fixed revision of oscrypto:

  ```bash
  pip install --force-reinstall https://github.com/wbond/oscrypto/archive/d5f3437ed24257895ae1edd9e503cfb352e635a8.zip
  ```

- Add the GitHub URL to your requirements.txt:

  ```
  # requirements.txt
  https://github.com/wbond/oscrypto/archive/d5f3437ed24257895ae1edd9e503cfb352e635a8.zip
  ```

  Then run:

  ```bash
  pip install --force-reinstall -r requirements.txt
  ```

If upgrading oscrypto does not work, try using OpenSSL version 3.1.x or downgrading to an earlier version such as 3.0.9.
For more detailed steps and information, visit the Snowflake Community article.
If you continue to experience issues, you may need to recompile Python with the correct OpenSSL paths.
The server is run using Gevent:

```bash
python run.py
```

The Alexa skill uses several request handlers to manage different types of requests. Here are the handlers included in this project:
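The handler excerpts below assume imports along the following lines (the ASK SDK paths are standard; the locations of the project-specific helpers other than `Config` are assumptions):

```python
import logging

from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name, is_request_type
from ask_sdk_model.ui import SimpleCard

# Project-specific helpers; these module paths are illustrative assumptions.
from app.config.config import Config
from app.services.led_service import LEDService
from app.renderers.jinja_template_renderer import JinjaTemplateRenderer

logger = logging.getLogger(__name__)
```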
Purpose: Handles the launch request when the user starts the skill.
Response: Welcomes the user to the skill.
```python
class LaunchRequestHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        return is_request_type("LaunchRequest")(handler_input)

    def handle(self, handler_input):
        logger.info("Handling LaunchRequest")
        template_renderer = JinjaTemplateRenderer()
        speech_text = template_renderer.render_string_template("welcome_text")
        return (
            handler_input.response_builder.speak(speech_text)
            .set_should_end_session(False)
            .response
        )
```

Purpose: Handles a custom intent to control an LED based on user commands.
Response: Turns the LED on or off based on the user's command.
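The handler below delegates the GPIO work to `LEDService`. A minimal sketch of such a service using `gpiozero` is shown here for context (the project's actual implementation goes through its IO Inventory module, so the details will differ):

```python
# Illustrative LEDService sketch using gpiozero (not the project's actual code).
from gpiozero import LED

class LEDService:
    _led = LED(17)  # assumed BCM pin; adjust to your wiring

    def turn_led_on(self):
        self._led.on()

    def turn_led_off(self):
        self._led.off()
```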
```python
class CustomIntentHandler(AbstractRequestHandler):
    intent = Config().intent

    def can_handle(self, handler_input):
        return is_intent_name(self.intent)(handler_input)

    def handle(self, handler_input):
        logger.info("Handling %s", self.intent)

        # Extract the OnOff slot value from the request
        slots = handler_input.request_envelope.request.intent.slots
        command = slots.get("OnOff").value if slots.get("OnOff") else None

        renderer = JinjaTemplateRenderer()
        if command is None:
            # No command was given
            reprompt_text = renderer.render_string_template("command_reprompt")
            return (
                handler_input.response_builder.speak(reprompt_text)
                .ask(reprompt_text)
                .response
            )
        elif command in ['on', 'off']:
            led_service = LEDService()
            if command == "off":
                # Turn off
                led_service.turn_led_off()
            else:
                # Turn on
                led_service.turn_led_on()
            response_text = renderer.render_string_template('command', onOffCommand=command)
            return (
                handler_input.response_builder.speak(response_text)
                .set_card(SimpleCard("Command", response_text))
                .response
            )
        else:
            # A valid command was not given
            reprompt_text = renderer.render_string_template("command_reprompt")
            return (
                handler_input.response_builder.speak(reprompt_text)
                .ask(reprompt_text)
                .response
            )
```

Purpose: Handles the end of a session.
Response: Logs the reason for the session ending.
```python
class SessionEndedRequestHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        return is_request_type("SessionEndedRequest")(handler_input)

    def handle(self, handler_input):
        logger.info("Session ended with reason: %s", handler_input.request_envelope.request.reason)
        return handler_input.response_builder.response
```

Purpose: Handles unrecognized intents using the AMAZON.FallbackIntent.
Response: Asks the user to rephrase their request.
```python
class FallbackIntentHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        return is_intent_name("AMAZON.FallbackIntent")(handler_input)

    def handle(self, handler_input):
        logger.info("Handling AMAZON.FallbackIntent")
        template_renderer = JinjaTemplateRenderer()
        speech_text = template_renderer.render_string_template("command_reprompt")
        return (
            handler_input.response_builder.speak(speech_text)
            .ask(speech_text)  # Keeps the session open to receive further input
            .response
        )
```

Purpose: Handles the GoodbyeIntent.
Response: Says goodbye to the user and ends the session.
```python
class GoodbyeIntentHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        return is_intent_name("GoodbyeIntent")(handler_input)

    def handle(self, handler_input):
        logger.info("Handling GoodbyeIntent")
        template_renderer = JinjaTemplateRenderer()
        speech_text = template_renderer.render_string_template("goodbye")
        return (
            handler_input.response_builder.speak(speech_text)
            .set_should_end_session(True)
            .response
        )
```

Purpose: Handles the AMAZON.HelpIntent.
Response: Provides help information to the user.
```python
class HelpIntentHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        return is_intent_name("AMAZON.HelpIntent")(handler_input)

    def handle(self, handler_input):
        logger.info("Handling AMAZON.HelpIntent")
        template_renderer = JinjaTemplateRenderer()
        speech_text = template_renderer.render_string_template("help")
        return (
            handler_input.response_builder.speak(speech_text)
            .ask(speech_text)  # Keeps the session open to receive further input
            .response
        )
```

Purpose: Handles the AMAZON.StopIntent.
Response: Says goodbye to the user and ends the session.
```python
class StopIntentHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        return is_intent_name("AMAZON.StopIntent")(handler_input)

    def handle(self, handler_input):
        logger.info("Handling AMAZON.StopIntent")
        template_renderer = JinjaTemplateRenderer()
        speech_text = template_renderer.render_string_template("stop")
        return (
            handler_input.response_builder.speak(speech_text)
            .set_should_end_session(True)
            .response
        )
```

The Alexa skill is defined by an interaction model, which specifies the intents, slots, and sample utterances that the skill recognizes.
Here is the JSON definition of the interaction model for this skill:
```json
{
  "interactionModel": {
    "languageModel": {
      "invocationName": "light wizard",
      "intents": [
        {
          "name": "AMAZON.CancelIntent",
          "samples": []
        },
        {
          "name": "AMAZON.HelpIntent",
          "samples": []
        },
        {
          "name": "AMAZON.StopIntent",
          "samples": []
        },
        {
          "name": "AMAZON.NavigateHomeIntent",
          "samples": []
        },
        {
          "name": "AMAZON.FallbackIntent",
          "samples": []
        },
        {
          "name": "OnOffIntent",
          "slots": [
            {
              "name": "OnOff",
              "type": "OnOffValue",
              "samples": [
                "{OnOff}"
              ]
            }
          ],
          "samples": [
            "turn {OnOff}",
            "{OnOff}",
            "switch {OnOff}"
          ]
        }
      ],
      "types": [
        {
          "name": "OnOffValue",
          "values": [
            {
              "name": {
                "value": "off"
              }
            },
            {
              "name": {
                "value": "on"
              }
            }
          ]
        }
      ]
    },
    "dialog": {
      "intents": [
        {
          "name": "OnOffIntent",
          "confirmationRequired": false,
          "prompts": {},
          "slots": [
            {
              "name": "OnOff",
              "type": "OnOffValue",
              "confirmationRequired": false,
              "elicitationRequired": true,
              "prompts": {
                "elicitation": "Elicit.Slot.1602566410765.1538124825991"
              }
            }
          ]
        }
      ],
      "delegationStrategy": "ALWAYS"
    },
    "prompts": [
      {
        "id": "Elicit.Slot.1602566410765.1538124825991",
        "variations": [
          {
            "type": "PlainText",
            "value": "Do you want to turn your LED on or off"
          }
        ]
      }
    ]
  }
}
```

- Invocation Name: The name users say to start the skill (e.g., "Alexa, open light wizard").
- Intents: The actions that the skill can perform, each represented by an intent. This includes built-in intents like `AMAZON.HelpIntent` and custom intents like `OnOffIntent`.
- Slots: Parameters that the intents can accept. In this case, `OnOffIntent` has an `OnOff` slot of type `OnOffValue`.
- Samples: Example phrases users can say to invoke each intent. These help Alexa recognize different ways users might phrase their requests.
- Dialog: Defines the dialog management for the intents, including slot elicitation prompts to gather necessary information from the user.
- Prompts: Predefined responses Alexa can use to prompt the user for more information.
Logging is configured to provide detailed information about the server's operations. Logs include timestamps, log levels, and messages, which are crucial for debugging and monitoring.
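A minimal configuration consistent with that description might look like this (an illustrative assumption, not the project's exact setup):

```python
import logging

# Timestamps, log levels, and messages, as described above.
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s [%(levelname)s] %(name)s: %(message)s",
)
```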
This project is licensed under the MIT License. See the LICENSE file for details.
If you want to contribute to this project, please fork the repository and submit a pull request with your changes.
For any questions or issues, please open an issue on GitHub.