diff --git a/content/en/docs/Integrations/.Notifications/CLI_Usage.md b/content/en/docs/Integrations/.Notifications/CLI_Usage.md
new file mode 100644
index 00000000..2b797f97
--- /dev/null
+++ b/content/en/docs/Integrations/.Notifications/CLI_Usage.md
@@ -0,0 +1,290 @@
+## :mega: Apprise CLI
+This small tool wraps the apprise python library to allow individuals such as Developers, DevOps engineers, and Administrators to send notifications from the command line.
+
+### Getting Started
+Apprise in its most basic form requires that you provide it a message and an Apprise URL, which contains enough information to send the notification with. A list of supported services and how to build your own URL can be found [here](https://github.com/caronc/apprise/wiki#notification-services). Here is a simple [email](https://github.com/caronc/apprise/wiki/Notify_email) example:
+```bash
+# Send a notification to a Hotmail (email) account:
+apprise --body="My Message" mailto://user:password@hotmail.com
+```
+
+If you don't specify a **--body** (**-b**) then Apprise reads from **stdin** instead:
+```bash
+# Without a --body, you can use a pipe | to redirect output
+# into your notification:
+uptime | apprise mailto://user:password@hotmail.com
+```
+
+By default Apprise is very silent; if you want a better understanding of what is going on, just add a `-v` switch to increase the verbosity. The more `v`'s you add, the more detailed the output you get back.
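+The body-or-stdin behaviour described above can be sketched in a few lines of plain Python. This is illustrative only — it is not Apprise's actual CLI code — but it shows the fallback rule: use `--body` when given, otherwise read standard input:
+
+```python
+import argparse
+import sys
+
+# A tiny parser that mirrors the --body/-b switch
+parser = argparse.ArgumentParser()
+parser.add_argument('-b', '--body')
+
+# When --body is supplied, we use it directly ...
+args = parser.parse_args(['--body', 'My Message'])
+body = args.body if args.body is not None else sys.stdin.read()
+print(body)
+
+# ... when it is omitted, a real run would fall back to sys.stdin here
+args = parser.parse_args([])
+assert args.body is None
+```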
+
+There is no limit to the number of services you want to notify; just keep adding/chaining them one after another:
+```bash
+# Send a notification to a Yahoo email account, Slack, and a Kodi Server
+# with a bit of added verbosity (2 v's specified):
+apprise -vv --body="Notify more than one service" \
+   mailto://user:password@yahoo.com \
+   slack://token_a/token_b/token_c \
+   kodi://example.com
+```
+
+#### CLI Switches and Options
+All of the switches and options available to you can be presented by adding `--help` (`-h`) to the command line:
+```bash
+# Print all of the help information:
+apprise --help
+```
+
+The switches/options are as follows:
+```
+Usage: apprise [OPTIONS] SERVER_URL [SERVER_URL2 [SERVER_URL3]]
+
+  Send a notification to all of the specified servers identified by their
+  URLs the content provided within the title, body and notification-type.
+
+  For a list of all of the supported services and information on how to use
+  them, check out https://github.com/caronc/apprise
+
+Options:
+  -b, --body TEXT                Specify the message body. If no body is
+                                 specified then content is read from <stdin>.
+  -t, --title TEXT               Specify the message title. This field is
+                                 completely optional.
+  -c, --config CONFIG_URL       Specify one or more configuration locations.
+  -a, --attach ATTACHMENT_URL   Specify one or more attachments.
+  -n, --notification-type TYPE  Specify the message type (default=info).
+                                Possible values are "info", "success",
+                                "warning", and "failure".
+  -i, --input-format FORMAT     Specify the message input format
+                                (default=text). Possible values are "text",
+                                "html", and "markdown".
+  -T, --theme THEME             Specify the default theme.
+  -g, --tag TAG                 Specify one or more tags to filter which
+                                services to notify. Use multiple --tag (-g)
+                                entries to "OR" the tags together and comma
+                                separated to "AND" them. If no tags are
+                                specified then all services are notified.
+  -Da, --disable-async          Send all notifications sequentially.
+  -d, --dry-run                 Perform a trial run that only prints the
+                                notification services to-be triggered to
+                                stdout. Notifications are never sent using
+                                this mode.
+  -l, --details                 Prints details about the current services
+                                supported by Apprise.
+  -R, --recursion-depth INTEGER The number of recursive import entries that
+                                can be loaded from within Apprise
+                                configuration. By default this is set to 1.
+  -v, --verbose                 Makes the operation more talkative. Use
+                                multiple v to increase the verbosity. I.e.:
+                                -vvvv [x>=0]
+  -e, --interpret-escapes       Enable interpretation of backslash escapes.
+  -D, --debug                   Debug mode.
+  -V, --version                 Display the apprise version and exit.
+  -h, --help                    Show this message and exit.
+```
+
+#### File Based Configuration
+It's never safe to store your personal details on the command line; others might see them! So the best thing to do is stick your configuration into a simple [[configuration file|config]]. With respect to the above example, your file might look like this:
+```apache
+# Use hashtag/pound characters to add comments into your
+# configuration file.  Define all of your URLs one after
+# another:
+mailto://user:password@yahoo.com
+slack://token_a/token_b/token_c
+kodi://example.com
+```
+
+Then you can notify all of your services like so:
+```bash
+# Send a notification to a Yahoo email account, Slack, and a Kodi Server:
+apprise -v --body="Notify more than one service" \
+   --config=/path/to/your/apprise/config.txt
+```
+
+If you stick your configuration in the right locations, you don't even need to reference **--config**, as it will be included automatically; the default filename paths are as follows:
+* **Linux/Mac users**:
+    * `~/.apprise`
+    * `~/.config/apprise`
+* **Microsoft Windows users**:
+    * `%APPDATA%/Apprise/apprise`
+    * `%LOCALAPPDATA%/Apprise/apprise`
+
+With default configuration file(s) in place, using the Apprise CLI gets even easier:
+```bash
+# Send a notification to a Yahoo email account, Slack, and a Kodi Server:
+apprise -v --body="Notify all of my services"
+```
+
+#### Attachment Support
+Apprise even lets you send file attachments to the services you use (provided they support them). Attachments are passed along by just including the **--attach** (**-a**) switch along with your Apprise command:
+```bash
+# Send a simple attachment:
+apprise --title="A photo of my family" --body="see attached" \
+   --attach=/path/to/my/photo.jpeg
+
+# You can attach as many files as you like:
+apprise -v --title="Several great photos of the gang" --body="see attached" \
+   --attach=/path/team1.jpeg \
+   --attach=/path/teambuilding-event.jpg \
+   --attach=/path/paintball-with-office.jpg
+```
+
+**Note**: When using attachments, if one of them can't be found/retrieved for delivery then the message isn't sent.
+
+The great thing with attachments is that Apprise is able to make a remote web request for them (prior to attaching them). This is easily done by just using the `http://` or `https://` protocol.
This works great for things like security camera images, or simply content you want to pass along that is hosted online:
+```bash
+# A web-based attachment:
+apprise -v --title="A Great GitHub Cheatsheet" --body="see attached" \
+   --attach="https://github.github.com/training-kit/downloads/github-git-cheat-sheet.pdf"
+```
+
+### :label: Leverage Tagging
+Consider the case where you've defined all of your Apprise URLs in one file, but you don't want to notify all of them every time:
+* :inbox_tray: Maybe you have special notifications that only fire when a download completes.
+* :rotating_light: Maybe you have home monitoring that requires you to notify several different locations.
+* :construction_worker_man: Perhaps you work in an Administrative, Developer, and/or DevOps role and you only want to notify certain people at certain times (such as when a software build completes, or a unit test fails, etc.).
+
+Apprise makes this easy by simply allowing you to tag your URLs. There is no limit to the number of tags associated with a URL.
Let's make a simple apprise configuration file; this can be done with any text editor of your choice:
+```apache
+# Tags in a Text configuration sit in front of the URL.
+# - They are comma and/or space separated (if there is more than one).
+# - To mark that you are no longer specifying tags and want to identify
+#   the URL, you just place an equal (=) sign and write the URL:
+#
+# Syntax: <tag(s)>=<URL>
+
+# Here we set up a mailto:// URL and assign it the tags: me, and family.
+# Maybe we are doing this to just identify our personal email and
+# additionally tag ourselves with family (which we will tag elsewhere
+# too)
+me,family=mailto://user:password@yahoo.com
+
+# Here we set up a mailto:// URL and assign it the tag: family
+# In this example, we would email 2 people if triggered
+family=mailto://user:password@yahoo.com/myspouse@example.com/mychild@example.com
+
+# This might be our Slack Team Server targeting the #general channel
+# We assign it the tag: team
+team=slack://token_a/token_b/token_c/#general
+
+# Maybe our company has a special devops group too idling in another
+# channel; we can add that to our list too and assign it the tag: devops
+devops=slack://token_a/token_b/token_c/#devops
+
+# Here we assign all of our colleagues the tags: team, and email
+team,email=mailto://user:password@yahoo.com/john@mycompany.com/jack@mycompany.com/jason@mycompany.com
+
+# Maybe we have home automation at home, and we want to notify our
+# Kodi box when stuff becomes available to it
+mytv=kodi://example.com
+
+# There is no limit... fill this file to your heart's content following
+# the simple logic identified above
+```
+
+There is a lot to ingest from the configuration above, but it will make more sense when you see how the content is referenced.
Here are a few examples (based on the config above):
+```bash
+# This would notify the first 2 entries since they have the tag `family`.
+# It would NOT send to any other entry defined.
+apprise -v --body="Hi guys, I'm going to be late getting home tonight" \
+   --tag=family
+
+# This would only notify the first entry as it is the only one
+# that has the tag: me
+apprise -v --body="Don't forget to buy eggs!" \
+   --tag=me
+```
+
+If you're building software, you can set up your continuous integration to notify your `team` AND `devops` by simply identifying 2 tags:
+```bash
+# Notify the services that have either a `devops` or a `team` tag.
+# If you check our configuration, this matches 3 separate URLs:
+apprise -v --title="Apprise Build" \
+   --body="Build was a success!" \
+   --tag=devops --tag=team
+```
+When you specify more than one **--tag**, the contents are **OR**'ed together.
+
+If you identify more than one element on the same **--tag** using a space and/or comma, then these get treated as an **AND**. Here is an example:
+```bash
+# Notify only the services that have both a team and an email tag.
+# In this example, there is only one match:
+apprise -v --title="Meeting this Friday" \
+   --body="Guys, there is a meeting this Friday with our director." \
+   --tag=team,email
+```
+
+There is a special reserved tag called `all`. `all` will match ALL of your entries regardless of what tag names you gave them. Use this with caution.
+
+Here is another way of looking at it:
+```bash
+# Assuming you have your configuration in place, tagging works like so:
+apprise -b "has TagA" --tag=TagA
+apprise -b "has TagA OR TagB" --tag=TagA --tag=TagB
+
+# Each item you group with the same --tag value is AND'ed:
+apprise -b "has TagA AND TagB" --tag="TagA, TagB"
+apprise -b "has (TagA AND TagB) OR TagC" --tag="TagA, TagB" --tag=TagC
+```
+
+### Testing Configuration and Tags
+Once you've built your elaborate configuration file and assigned all your tags, you certainly won't want to notify everyone over and over again while you test it out. Don't worry, that's what **--dry-run** (**-d**) is for. You can use it to test your _tag logic_ without actually performing any notification.
+```bash
+# Test which services would have been notified if the tags team and email
+# were activated:
+apprise --title="Meeting this Friday" \
+   --body="Guys, there is a meeting this Friday with our director." \
+   --tag=team,email \
+   --dry-run
+```
+
+If you use the **--dry-run** (**-d**) switch, then some rules don't apply. For one, the **--body** (**-b**) is no longer a required option. The above could have been re-written like so:
+```bash
+# Test which services would have been notified if the tags team and email
+# were activated (without actually notifying them):
+apprise --tag=team,email --dry-run
+```
+
+## :heavy_check_mark: Compatibility and Notification Details
+Apprise offers a lot of services at your fingertips, but some of them may or may not be available to you depending on your Operating System and/or what packages you have installed. You can see a list of what is available by doing the following:
+```bash
+# List all of the supported services available to you;
+# you can also use -l:
+apprise --details
+```
+
+Here is an example of the output (as it is now) on the CLI:
+![image](https://user-images.githubusercontent.com/850374/142778418-11e87c7f-1b07-4314-ab86-cbf8d268dabf.png)
+
+## :baggage_claim: Message Body Source
+The Apprise CLI doesn't know what you are feeding it when sending a message to a Notification provider. It just assumes that whatever message you feed it should be passed along *as is* to the upstream provider *as text*. In most cases this is perfect, and it is the default behaviour. However, if you are passing along HTML or Markdown content, you should let Apprise know by specifying the `--input-format` (`-i`) switch.
For example:
+```bash
+# An HTML example:
+cat test.html | apprise --input-format=html
+
+# Or Markdown:
+cat << _EOF | apprise --input-format=markdown
+## Ways to Prepare Eggs
+* Scrambled
+* Sunny Side Up
+* Over Easy
+
+There is more, but I want to keep my message short. :)
+_EOF
+```
+
+## :star2: Tricks and Additional Notes
+### Tmux Alert Bell Integration
+Users of Tmux can link their `alert-bell` to use Apprise like so:
+```bash
+# Set your tmux bell-action to type 'other':
+set-option -g bell-action other
+
+# Now set tmux to trigger on `alert-bell` actions:
+set-hook -g alert-bell 'run-shell "\
+   apprise \
+      --title \"tmux finished on #{host}\" \
+      --body \"in session #{session_name} window #{window_index}:#{window_name}\" \
+      discord://webhook_id/webhook_token \
+      slack://TokenA/TokenB/TokenC/Channel \
+      twilio://AccountSid:AuthToken@FromPhoneNo"'
+```
diff --git a/content/en/docs/Integrations/.Notifications/DemoPlugin_Basic.md b/content/en/docs/Integrations/.Notifications/DemoPlugin_Basic.md
new file mode 100644
index 00000000..b36e0e30
--- /dev/null
+++ b/content/en/docs/Integrations/.Notifications/DemoPlugin_Basic.md
@@ -0,0 +1,139 @@
+# A Basic Apprise Notification Example
+This example shows a basic template of how one might build a Notification Service that does a task as simple as writing to `stdout` (Standard Out).
+
+It's very important that the filename `apprise/plugins/NotifyServiceName.py` exactly matches the name of the `NotifyServiceName` class you define within it. In this example, the class is `NotifyDemo`. This implies that the file to activate it (and make it usable in Apprise) must be called `apprise/plugins/NotifyDemo.py`.
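+Why must the filename match the class name? Plugin loaders of this style typically import the module and then look up an attribute with the same name as the module file. Here is a self-contained sketch of that pattern (a hypothetical loader for illustration, not Apprise's actual code), using a throw-away module written to a temporary directory:
+
+```python
+import importlib.util
+import pathlib
+import tempfile
+
+# Write a tiny plugin module whose filename matches its class name
+tmp = pathlib.Path(tempfile.mkdtemp())
+(tmp / 'NotifyDemo.py').write_text('class NotifyDemo:\n    protocol = "demo"\n')
+
+def load_plugin(path):
+    # The loader derives the class name from the file name (its stem),
+    # which is why the two must match exactly
+    name = path.stem
+    spec = importlib.util.spec_from_file_location(name, path)
+    module = importlib.util.module_from_spec(spec)
+    spec.loader.exec_module(module)
+    return getattr(module, name)
+
+plugin = load_plugin(tmp / 'NotifyDemo.py')
+print(plugin.protocol)  # -> demo
+```
+
+If the file were named anything other than `NotifyDemo.py`, the `getattr()` lookup above would fail, which mirrors why a mismatched filename leaves the plugin unusable.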
+
+## The Code
+```python
+# -*- coding: utf-8 -*-
+from .NotifyBase import NotifyBase
+from ..AppriseLocale import gettext_lazy as _
+from ..common import NotifyType
+
+
+class NotifyDemo(NotifyBase):
+    """
+    A Sample/Demo Notification
+    """
+
+    # The default descriptive name associated with the Notification
+    # _() allows us to support future (language) translations
+    service_name = _('Apprise Demo Notification')
+
+    # The default protocol/schema
+    # This will be what triggers your service to be activated when
+    # protocol:// is specified (for example, demo:// will activate
+    # this service).
+    protocol = 'demo'
+
+    # A URL that takes you to the setup/help of the specific protocol.
+    # This is for newcomers who will want to learn how they can
+    # use your service. Ideally you should point to somewhere on
+    # 'https://github.com/caronc/apprise/wiki/'
+    setup_url = 'https://github.com/caronc/apprise/wiki/Notify_Demo'
+
+    # This disables throttling by default for this simple plugin.
+    request_rate_per_sec = 0
+
+    #
+    # Templating Section
+    #
+    # 1. `templates`: Identify the way you can use your plugin.
+    #    This example is rather simple, so there isn't really much to do
+    #    here. Check out the other demos to see where this gets a bit more
+    #    advanced.
+    #
+    templates = (
+        '{schema}://',
+    )
+
+    def __init__(self, **kwargs):
+        """
+        Initialize Demo Object
+
+        """
+        # Always call super() so that parent classes can set up. Make
+        # sure to only pass in **kwargs
+        super(NotifyDemo, self).__init__(**kwargs)
+
+        #
+        # Now you can write any initialization you want; if you have nothing
+        # to initialize then you can skip defining the __init__()
+        # function altogether.
+        #
+        return
+
+    def url(self, *args, **kwargs):
+        """
+        Returns the URL built dynamically based on specified arguments.
+        """
+
+        # Always call self.url_parameters() at some point.
+        # This allows your Apprise URL to handle the common/global
+        # parameters that are used by Apprise. This is for consistency
+        # more than anything:
+        params = self.url_parameters(*args, **kwargs)
+
+        # Basically we need to write back a URL that looks exactly like
+        # the one we parsed in order to build it from scratch.
+
+        # If we can parse a URL and rebuild it the way it once was,
+        # Administrators who use Apprise don't need to pay attention to all
+        # of your custom and unique tokens (which vary from one service to
+        # another); they only need to store Apprise URLs in their database.
+
+        return '{schema}://?{params}'.format(
+            schema=self.protocol,
+            params=self.urlencode(params),
+        )
+
+    def send(self, body, title='', notify_type=NotifyType.INFO, **kwargs):
+        """
+        Perform Demo Notification
+        """
+
+        #
+        # Always call throttle before any remote server i/o is made
+        #
+        self.throttle()
+
+        # Perform your notification here; in this example, we just send the
+        # output to `stdout`:
+        print('{type} - {title} - {body}'.format(
+            type=notify_type, title=title, body=body))
+
+        return True
+
+    @staticmethod
+    def parse_url(url):
+        """
+        Parses the URL and returns enough arguments that can allow
+        us to re-instantiate this object.
+
+        """
+        # If your URL does not define what is considered a valid host after
+        # your schema/protocol, such as this plugin example (demo://) where
+        # the hostname isn't even required, then set verify_host to False
+        results = NotifyBase.parse_url(url, verify_host=False)
+        if not results:
+            # We're done early as we couldn't parse the URL
+            return results
+
+        # Handle any additional parsing here if you want;
+        # make sure to write all your changes/updates back into the results
+        # dictionary
+
+        # The contents of our results (a dictionary) will become
+        # the arguments passed into the __init__() function we defined above.
+        return results
+```
+
+## Testing
+You can test your **NotifyDemo** class using the `demo://` schema:
+```bash
+# Using the `apprise` found in the local bin directory allows you to test
+# the new plugin right away. Use the `demo://` schema we defined. You can
+# also set a couple of extra `-v` switches to add some verbosity:
+./bin/apprise -vvv -t test -b message demo://
+```
\ No newline at end of file
diff --git a/content/en/docs/Integrations/.Notifications/DemoPlugin_WebRequest.md b/content/en/docs/Integrations/.Notifications/DemoPlugin_WebRequest.md
new file mode 100644
index 00000000..55be10a0
--- /dev/null
+++ b/content/en/docs/Integrations/.Notifications/DemoPlugin_WebRequest.md
@@ -0,0 +1,384 @@
+# A Web Request Apprise Notification Example
+This example shows a basic template of how one might build a Notification Service that needs to connect to an upstream web service and send a payload.
+
+It's very important that the filename `apprise/plugins/NotifyServiceName.py` exactly matches the name of the `NotifyServiceName` class you define within it. In this example, the class is `NotifyDemo`. This implies that the file to activate it (and make it usable in Apprise) must be called `apprise/plugins/NotifyDemo.py`.
+
+## The Code
+```python
+import requests
+import json
+from .NotifyBase import NotifyBase
+from ..URLBase import PrivacyMode
+from ..common import NotifyType
+from ..AppriseLocale import gettext_lazy as _
+
+
+class NotifyDemo(NotifyBase):
+    """
+    A Sample/Demo Notification
+    """
+
+    # The default descriptive name associated with the Notification
+    # _() allows us to support future (language) translations
+    service_name = _('Apprise Demo Notification')
+
+    # The default protocol/schema
+    # This will be what triggers your service to be activated when
+    # protocol:// is specified (for example, demo:// will activate
+    # this service).
+ protocol = 'demo' + + # A URL that takes you to the setup/help of the specific protocol + # This is for new-comers who will want to learn how they can + # use your service. Ideally you should point to somewhere on + # the 'https://github.com/caronc/apprise/wiki/ + setup_url = 'https://github.com/caronc/apprise/wiki/Notify_Demo' + + # + # Templating Section + # + # 1. `templates`: Identify the way you can use your template. Use {tokens} + # that map back to what is defined in your `template_tokens` and + # `template_arg`. Today this is used for reference only, but in the + # future, this could be used to help validate and build easy to use + # wizards for people to build their Apprise URL's with. + # + # 2. `template_tokens`: You must identify all `tokens` (except + # *args and **kwargs) that are passed into: + # def `__init__(self, tokenA, tokenB, tokenN, *args, **kwargs) + # ^ ^ ^ + # | | | + # + # 3. `template_args`: This is more applicable to your Apprise URL. + # It's similar to the `template_tokens` except you can also identify + # alias entries (to ones already found in `template_tokens` here. You + # can also identify arguments that are optional (and otherwise take + # on a default setting if not otherwise specified. This section is + # entirely optional, but by adding it, you can greatly add some + # handy features to the yaml configuration. You also need to handle + # your own processing of what you define here in the `parse_url()` + # function. + # + # Here is an example Apprise demo:// URL with 2 optional arguments + # specified. + # demo://user:pass@hostname?option1=value&option2=value + # ^ ^ + # | | + # arg arg + # In the above case, if option1 an option2 are actual valid arguments + # that can (optionally) exist on the Apprise URL, then they would be + # identified here. + # + # 4. `template_kwargs`: This is only needed in some cases and not covered + # in this example. 
This allows you to let your user building your + # Apprise URL to define their own arguments (args) AND assign them + # values. + # + # An example of why you'd want to do this would be say an HTTP your + # service may call. You may want to let them define their own custom + # headers and assign the values. A great example of when/how this + # is used is in the XML and JSON Notification Services. + + templates = ( + '{schema}://{host}/{apikey}', + '{schema}://{host}:{port}/{apikey}', + '{schema}://{user}@{host}/{apikey}', + '{schema}://{user}@{host}:{port}/{apikey}', + '{schema}://{user}:{password}@{host}/{apikey}', + '{schema}://{user}:{password}@{host}:{port}/{apikey}', + ) + + # Define our tokens; these are the minimum tokens required required to + # be passed into this function (as arguments). The syntax appends any + # previously defined in the base package and builds onto them + template_tokens = dict(NotifyBase.template_tokens, **{ + # All tokens require: + # - name: The name of the variable. It must be wrapped with + # the gettext_lazy() function. Ideallly you should have + # the following defined at the head of your Service: + # + # - type: The type of data expected from this field. The options + # are (always lowercase): + # 1. 'string' + # 2. 'int' + # 3. 'bool' + # 4. 'float' + # + # You can also prepend 'list:' or 'choice:' to the types + # above (e.g. 'list:string'). When you use these options + # you must provide a `values` directive. + # + # - required: By default any token is not considered required. + # But you can set this value (and set it to True) as + # a way of telling the users of your service that they + # must provide this option. 
+ # + # - min: When using int/float, you can let your users know what + # the minimum expected value can be (otherwise there is no + # limit if this isn't specifed) + # + # - max: When using int/float, you can let your users know what + # the maximum expected value can be (otherwise there is no + # limit if this isn't specifed) + # + # - private: If this token represents a password, or apikey, or just + # in general something that no one looking over a shoulder + # should see, then set this to True. + 'host': { + 'name': _('Hostname'), + 'type': 'string', + 'required': True, + }, + 'port': { + 'name': _('Port'), + 'type': 'int', + 'min': 1, + 'max': 65535, + }, + 'user': { + 'name': _('Username'), + 'type': 'string', + }, + 'password': { + 'name': _('Password'), + 'type': 'string', + 'private': True, + }, + 'apikey': { + 'name': _('apikey'), + 'type': 'string', + 'private': True, + }, + + }) + + # Not to add any confusion, but the following arguments are always + # automatically set and available to you (always) and therefore + # do not need to be identified in the __init__() call; they are: + # - host : Always identifies the hostname (if parsed from URL) + # - password : Identfies the password (if parsed from URL) + # - user : Identifies the username (if parsed from the URL) + # - port : Identifies the port (if parsed from the URL) + # - fullpath : Identifies the full path specified (parsed from URL) + # + # For the reasons above, we only need to identify apikey here: + def __init__(self, apikey, **kwargs): + """ + Initialize Demo Object + + """ + # Always call super() so that parent clases can set up. 
Make + # sure to only pass in **kwargs + super(NotifyDemo, self).__init__(**kwargs) + + # At this point we already have access to (this all got parsed + # automatially from the super() call above: + # - self.user + # - self.password + # - self.host + # - self.port + + # + # Now you can write any initialization you want + # + + # You may want to save your apikey read from the URL + # so we can use it later in the `send()` and `url()` function. + + # You will want to raise a TypeError() in the event any of the + # provided data is invalid: + self.apikey = apikey + if not self.apikey: + msg = 'An invalid Demo API Key ' \ + '({}) was specified.'.format(apikey) + self.logger.warning(msg) + raise TypeError(msg) + + self.apikey = apikey + + return + + def url(self, privacy=False, *args, **kwargs): + """ + Returns the URL built dynamically based on specified arguments. + """ + + # Always call self.url_parameters() at some point. + # This allows your Apprise URL to handle the common/global + # parameters that are used by Apprise. This is for consistency + # more than anything: + params = self.url_parameters(privacy=privacy, *args, **kwargs) + + # Basically we need to write back a URL that looks exactly like + # the one we parsed to build from scratch. + + # If we can parse a URL and rebuild it the way it once was, + # Administrators who use Apprise don't need to pay attention to all + # of your custom and unique tokens (from on service to another). + # they only need to store Apprise URL's in their database. 
+ + # The below uses a combination of the following to rebuild the + # URL exactly as it was: + # - self.user + # - self.password + # - self.host + # - self.port + # - self.apikey <- the one we defined + + # Determine Authentication + auth = '' + if self.user and self.password: + auth = '{user}:{password}@'.format( + user=self.quote(self.user, safe=''), + password=self.pprint( + self.password, privacy, mode=PrivacyMode.Secret, safe=''), + ) + elif self.user: + auth = '{user}@'.format( + user=self.quote(self.user, safe=''), + ) + + return '{schema}://{auth}{hostname}{port}/{apikey}/?{params}'.format( + schema=self.protocol, + auth=auth, + # never encode hostname since we're expecting it to be a valid one + hostname=self.host, + port='' if self.port is None else ':{}'.format(self.port), + # Always quote/encode any variable you're passing back into the URL + apikey=self.quote(self.apikey, safe='/'), + params=self.urlencode(params), + ) + + def send(self, body, title='', notify_type=NotifyType.INFO, **kwargs): + """ + Perform Demo Notification + """ + + # Prepare our headers + # In this example, we're going to place the API Key + # into the payload through the headers: + headers = { + 'User-Agent': self.app_id, + 'Content-Type': 'application/xml', + # Here is were we leverage a token provided in the Apprise URL + # we parsed: + 'Authorization': 'Bearer {}'.format(self.apikey), + } + + # Now we just assemble some basic auth (if required) + auth = None + if self.user: + auth = (self.user, self.password) + + url = 'http://{}'.format(self.host) + if isinstance(self.port, int): + url += ':%d' % self.port + + # Define our payload we plan on sending + payload = { + 'type': notify_type, + 'title': title, + 'body': body, + } + + # It helps to add some logging if ou want + self.logger.debug('Demo POST URL: %s', url) + self.logger.debug('Demo Payload: %s', str(payload)) + + # + # Always call throttle before any remote server i/o is made + # + self.throttle() + + try: + # A simple 
request object + r = requests.post( + url, + data=json.dumps(payload), + headers=headers, + auth=auth, + + # These variables are defined by the parent + # classes. The timeout is very important! + verify=self.verify_certificate, + timeout=self.request_timeout, + ) + + if r.status_code != requests.codes.ok: + # We had a problem + status_str = \ + self.http_response_code_lookup(r.status_code) + + self.logger.warning( + 'Failed to send Demo notification: ' + '{}{}error={}.'.format( + status_str, + ', ' if status_str else '', + r.status_code)) + + self.logger.debug('Response Details:\r\n{}'.format(r.content)) + + # Return; we're done + return False + + else: + self.logger.info('Sent Demo notification.') + + except requests.RequestException as e: + self.logger.warning( + 'A Connection error occurred sending Demo ' + 'notification to %s.' % self.host) + self.logger.debug('Socket Exception: %s' % str(e)) + + # Return; we're done + return False + + return True + + @staticmethod + def parse_url(url): + """ + Parses the URL and returns enough arguments that can allow + us to re-instantiate this object. + + """ + results = NotifyBase.parse_url(url) + if not results: + # We're done early as we couldn't parse the URL + return results + + # Now fetch our api key from the path in the url. + # This is identified as a `fullpath` argument in our results + # we want to extract the first element + try: + # We need to store the 'apikey' id because that's what we + # identified in our __init__() function + results['apikey'] = \ + NotifyDemo.split_path(results['fullpath'])[0] + + except IndexError: + # Force some bad values that will get caught in the __init__ + results['apikey'] = None + + # The contents of our results (a dictionary) will become + # the arguments passed into the __init__() function we defined above. + return results +``` + +## Testing +If you pasted the above file correctly into your Apprise library, you can test it with a tool such as netcat (`nc`). 
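+Before firing it up, it can help to see what the `parse_url()` step above is doing with the URL path. In plain-Python terms (standard library only; this is an approximation of the behaviour, not Apprise's internals), the API key extraction looks roughly like:
+
+```python
+from urllib.parse import urlparse
+
+# The Apprise URL our plugin will receive
+url = 'demo://localhost:8080/myapikey'
+
+parsed = urlparse(url)
+print(parsed.hostname, parsed.port)  # the host and port tokens
+
+# split_path()-style extraction: keep the first non-empty path element
+path_parts = [p for p in parsed.path.split('/') if p]
+apikey = path_parts[0] if path_parts else None
+print(apikey)  # -> myapikey
+```
+
+If the path is empty, `apikey` ends up as `None` here, just as the `IndexError` handler above forces a bad value that `__init__()` will then reject.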
+
+In one terminal window, set yourself up to listen on port `8080`:
+```bash
+# Listen on port 8080 so we can watch apprise deliver our new payload:
+nc -l -p 8080
+```
+
+In another terminal window, you can test your **NotifyDemo** class using the `demo://` schema:
+```bash
+# Using the `apprise` found in the local bin directory allows you to test
+# the new plugin right away. Use the `demo://` schema we defined. You can
+# also set a couple of extra `-v` switches to add some verbosity:
+./bin/apprise -vvv -t test -b message demo://localhost:8080/myapikey
+```
+
diff --git a/content/en/docs/Integrations/.Notifications/Development_API.md b/content/en/docs/Integrations/.Notifications/Development_API.md
new file mode 100644
index 00000000..fc2392bf
--- /dev/null
+++ b/content/en/docs/Integrations/.Notifications/Development_API.md
@@ -0,0 +1,357 @@
+# Development API
+Apprise is very easy to use as a developer. The **Apprise()** object handles everything for you, while the **AppriseAsset()** object allows you to stray away from some of the default configuration to personalize the user's experience (and perhaps fit your application better):
+* **[[The Apprise Object|Development_API#the-apprise-object]]**
+* **[[The Apprise Asset Object|Development_API#the-apprise-asset-object]]**
+
+Some additional functionality is available via **[[The Apprise Notification Object|Development_API#the-apprise-notification-object]]** for those who want to manage the notifications themselves.
+
+Another useful class that can help you out with sending notifications is **[[The LogCapture Object|Development_LogCapture]]**. It can be used to capture the events that surrounded the success (and potential failure) of the notifications being delivered so that you can work with them.
+
+## The Apprise Object
+The Apprise() object is the heart and soul of this library.
To create an instance of the object, do the following:
+```python
+# Import this library
+import apprise
+
+# Create an Apprise instance and assign it to the variable `apobj`
+apobj = apprise.Apprise()
+```
+
+### add(): Add a New Notification Service By URL(s)
+Use the **add()** function to append the notification URLs you want to send notifications to.
+```python
+# Add all of the notification services by their server url.
+# A sample email notification
+isokay = apobj.add('mailto://myemail:mypass@gmail.com')
+
+# add() will return True if the URL was successfully parsed and added into
+# our notification pool. Otherwise it returns False.
+
+# A sample pushbullet notification
+isokay = apobj.add('pbul://o.gn5kj6nfhv736I7jC3cj3QLRiyhgl98b')
+
+# You can additionally add URLs via a list/set/tuple:
+isokay = apobj.add([
+    # A sample growl service
+    'growl://192.168.40.23',
+
+    # Our Microsoft Windows desktop
+    'windows://',
+])
+```
+
+#### Tagging
+Tagging is a great way to add a richer experience to the notification flow.
+You can associate one or more _tags_ with the notifications you choose to **add()**. Doing so grants you the flexibility to later call _on just some_ (_or all_) of the services you added. It effectively grants you the additional ability to filter notifications based on your workflow.
+
+Here is an example:
+```python
+# import our library
+import apprise
+
+# Create our object
+apobj = apprise.Apprise()
+
+# Add a tag by a simple string
+apobj.add('json://localhost/tagA/', tag="TagA")
+
+# Add 2 tags by string; the comma and/or space auto delimit
+# our entries (spaces and commas are ignored):
+apobj.add('json://localhost/tagAB/', tag="TagA, TagB")
+
+# Add 2 tags using a list; this works with tuples and sets too! 
+apobj.add('json://localhost/tagCD/', tag=["TagC", "TagD"])
+```
+
+### notify() : Send Notification(s)
+You can now send a notification to all of the loaded notification services by just providing a **title** and a **body**, like so:
+```python
+# Then notify these services any time you desire. The below would
+# notify all of the services loaded into our Apprise object.
+apobj.notify(
+    body='what a great notification service!',
+    title='my notification title',
+)
+```
+
+Developers should know that Apprise passes everything it gets _as is_, which will work for most circumstances. However, sometimes it's useful to let Apprise know what kind of data you're feeding it. This information is used to guarantee that the upstream provider can handle the content; if it can't, the content _will be modified_ so that it does.
+```python
+# Include the NotifyFormat object
+from apprise import NotifyFormat
+
+# our body might be read from a file, it might be just input from
+# our end users
+body = """
+...a lot of content
+that could span multiple lines ...
+"""
+
+# Now we can send our notification while controlling the input source
+# and knowing the upstream plugins will be able to handle it
+apobj.notify(
+    body=body,
+    # Possible formats are TEXT, MARKDOWN, and HTML
+    body_format=NotifyFormat.TEXT,
+)
+```
+#### Leverage Tagging
+If you associated tags with your notification services when you called **add()** earlier, you can leverage their full potential through the **notify()** function. Tags allow you to trigger notifications only when certain criteria are met. 
The tagging logic can be interpreted as follows: + +| apprise.notify(tag=_match_) | Notify Services Having Tag(s): | +| -------------------------------- | ------------------------------ | +| "TagA" | TagA +| "TagA, TagB" | TagA **OR** TagB +| ['TagA', 'TagB'] | TagA **OR** TagB +| [('TagA', 'TagC'), 'TagB'] | (TagA **AND** TagC) **OR** TagB +| [('TagB', 'TagC')] | TagB **AND** TagC + +Now that we've added our services and assigned them tags, this is how we can access them: +```python +# Has TagA +apobj.notify( + body="a body", title='a title', tag="tagA") + +# Has TagA OR TagB +apobj.notify( + body="a body", title='a title', tag=["tagA", "TagB"]) + +# Has TagA AND TagB +apobj.notify( + body="a body", title='a title', tag=[("tagA", "TagB")]) + +# Has TagA OR TagB OR (TagC AND TagD) +apobj.notify( + body="a body", title='a title', tag=["tagA", "TagB", ["TagC", "TagD"]]) + +# No reference to tags; alert all of the added services +apobj.notify(body="my body", title="my title") +``` + +#### Tagging and Categories +Another use case for tagging might be to instead interpret them as categories. A system owner could simply fill their code with clean logic like: +```python +#... stuff happening +apobj.notify(body="a body", title='a title', tag="service-message") + +# ... uh oh, something went wrong +apobj.notify(body="a body", title='a title', tag="debug-message") + +# ... more stuff happening +apobj.notify(body="a body", title='a title', tag="broadcast-notice") +# ... 
+``` + +The idea would be that somewhere when the Apprise Object (_apobj_) was first created, you (as a system owner) would have retrieved the user settings and only load the tags based on what they're interested in: +```python +# import our library +import apprise + +# Create our object +apobj = apprise.Apprise() + +# Poll our user for their setting and add them +apobj.add('mailto://myuser:theirpassword@hotmail.com', tag=[ + # Services we want our user subscribed to: + "service-message", + "broadcast-notice" +]) +``` +**Takeaway**: In this example (above), the user would never be notified for "_debug-message_" calls. Yet the developer of this system does not need to provide any additional logic around the apprise calls other than the _tag_ that should trigger the notification. Just let _Apprise_ handle the logic of what notifications to send for you. + +#### Message Types and Themes +By default, all notifications are sent as type **NotifyType.INFO** using the _default_ theme. The following other types are included with this theme: + +| Notification Type | Text Representation | Image | +| -------------------- | ------------------- | ----- | +| **NotifyType.INFO** | _info_ | [![Build Status](https://github.com/caronc/apprise/blob/master/apprise/assets/themes/default/apprise-info-72x72.png)](https://github.com/caronc/apprise/tree/master/apprise/assets/themes/default) | +| **NotifyType.SUCCESS** | _success_ | [![Build Status](https://github.com/caronc/apprise/blob/master/apprise/assets/themes/default/apprise-success-72x72.png)](https://github.com/caronc/apprise/tree/master/apprise/assets/themes/default) | +| **NotifyType.WARNING** | _warning_ | [![Build Status](https://github.com/caronc/apprise/blob/master/apprise/assets/themes/default/apprise-warning-72x72.png)](https://github.com/caronc/apprise/tree/master/apprise/assets/themes/default) | +| **NotifyType.FAILURE** | _failure_ | [![Build 
Status](https://github.com/caronc/apprise/blob/master/apprise/assets/themes/default/apprise-failure-72x72.png)](https://github.com/caronc/apprise/tree/master/apprise/assets/themes/default) |
+
+Should you want to send a notification using a different status, simply include it as part of your **notify()** call:
+```python
+# Import our NotifyType
+from apprise import NotifyType
+
+# Then notify these services any time you desire. The below would
+# notify all of the services loaded into our Apprise object as a WARNING.
+apobj.notify(
+    body='what a great notification service!',
+    title='my notification title',
+    notify_type=NotifyType.WARNING,
+)
+```
+
+You can alter the theme as well; this is discussed further down using [[the Apprise Asset Object|Development_API#the-apprise-asset-object]].
+
+### len(): Returns Number of Notification Services Loaded
+
+We can retrieve the number of active notification services loaded by using Python's built-in
+**len()** function.
+```python
+# len(apobj) returns the number of notifications loaded
+# the below calls this and prints it to the screen:
+print("There are %d notification services loaded" % len(apobj))
+```
+### clear(): Reset our Apprise Object
+If you ever want to reset your Apprise object and eliminate all of the services you had previously loaded into it, you can use the **clear()** function.
+```python
+# clears out all of the loaded notification services associated with our
+# Apprise Object.
+apobj.clear()
+```
+### details(): Dynamic View Into Available Notification Services Apprise Offers
+Developers who wish to generate information based on this library dynamically can use the **details()** function:
+```python
+# returns an object containing details about the plugins for dynamic integration. 
+apobj.details() +``` +The output will look like: +```json +{ + "version": "0.5.2", + "asset": { + "default_extension": ".png", + "app_desc": "Apprise Notifications", + "image_path_mask": "https://github.com/caronc/apprise/raw/master/apprise/assets/themes/{THEME}/apprise-{TYPE}-{XY}{EXTENSION}", + "app_id": "Apprise", + "theme": "default", + "image_url_logo": "https://github.com/caronc/apprise/raw/master/apprise/assets/themes/{THEME}/apprise-logo.png", + "image_url_mask": "https://github.com/caronc/apprise/raw/master/apprise/assets/themes/{THEME}/apprise-{TYPE}-{XY}{EXTENSION}" + }, + "schemas": [ + { + "service_name": "Boxcar", + "setup_url": "https://github.com/caronc/apprise/wiki/Notify_boxcar", + "service_url": "https://boxcar.io/", + "protocols": null, + "secure_protocols": [ + "boxcar" + ] + }, + { + "service_name": "Discord", + "setup_url": "https://github.com/caronc/apprise/wiki/Notify_discored", + "service_url": "https://discordapp.com/", + "protocols": null, + "secure_protocols": [ + "discord" + ] + }, + { + "service_name": "E-Mail", + "setup_url": "https://github.com/caronc/apprise/wiki/Notify_email", + "service_url": null, + "protocols": [ + "mailto" + ], + "secure_protocols": [ + "mailtos" + ] + }, + + "... etc, ..." + + ], + "details": { + "templates": { + ... + }, + "tokens": { + ... + }, + "args": { + ... + }, + "kwargs": { + ... + }, + }, +} +``` +The idea behind the **details()** function is that it allows developers to pass details back through their program letting their users know what notifications are supported. Thus as this library deprecates and introduces new notification services, calling front end applications (built around the **details()** function) can automatically serve this information back to their user base. + +More detailed information about this object can be found [[here|Development_Apprise_Details]]. 
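To illustrate how a front end might consume this structure, here is a small sketch (not part of the Apprise API itself) that flattens a couple of trimmed `schemas` entries, like those shown above, into a protocol-to-service lookup:

```python
# Trimmed sample of two "schemas" entries from a details() call
schemas = [
    {"service_name": "E-Mail", "protocols": ["mailto"],
     "secure_protocols": ["mailtos"]},
    {"service_name": "Boxcar", "protocols": None,
     "secure_protocols": ["boxcar"]},
]

# Map every protocol back to its human readable service name;
# "protocols" can be None, so fall back to an empty list
lookup = {}
for entry in schemas:
    for proto in (entry["protocols"] or []) + (entry["secure_protocols"] or []):
        lookup[proto] = entry["service_name"]

print(lookup)
# {'mailto': 'E-Mail', 'mailtos': 'E-Mail', 'boxcar': 'Boxcar'}
```

The same loop works unchanged on the full `details()` payload, since every entry carries the `protocols`/`secure_protocols` keys.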
+
+## The Apprise Asset Object
+The Apprise asset object allows you to customize your alarms by offering it different images, different sources, and different themes. Different notification services support different ways of passing images (and some don't support images at all). Apprise offers a way of supporting both local and hosted images and looks after passing the correct one to the correct service (when requested).
+
+Even when you're just using the **Apprise()** object, behind the scenes a generic **AppriseAsset()** object is created which retrieves all of its information from this path: https://github.com/caronc/apprise/tree/master/apprise/assets/themes/default (which is the _default_ theme directory).
+
+A default **AppriseAsset()** object might have the following defined in it:
+
+| Variable | Default | Type | Description |
+| -------- | ------- | ------ | ----------- |
+| **app_id** | _Apprise_ | String | A short identifier defining the name of the application. |
+| **app_desc** | _Apprise Notifications_ | String | A brief way of describing your notification system.
+| **app_url** | _https://github.com/caronc/apprise_ | String | The URL someone could visit to find more information about your application if they so desired.
+| **image_url_mask** | `https://github.com/caronc/apprise/raw/master/apprise/assets/themes/{THEME}/apprise-{TYPE}-{XY}{EXTENSION}` | String | A URL accessible from the internet that contains the images you want your notifications to reference. The URL should make use of the available template masks that are encapsulated in **{}** brackets.
+| **image_path_mask** | `abspath(join(dirname(__file__), 'assets', 'themes', '{THEME}', 'apprise-{TYPE}-{XY}{EXTENSION}'))` | String | A locally accessible path that contains the images you want your notifications to reference. The path should make use of the available template masks that are encapsulated in **{}** brackets. 
**Note**: Don't let the Python code above confuse you. It is used to dynamically figure out the path relative to where you installed Apprise, so that it can point to the image files the product ships with.
+
+The **AppriseAsset()** object also performs some dynamic _templating_ of the specified image and URL paths. First I'll explain the template values, and then I'll explain how it works:
+
+| Template Value | Variable | Type | Default | Description |
+| ---------------- | -------- | ---- | ------- | ----------- |
+| **{THEME}** | **theme** | String | _default_ | The theme to reference. |
+| **{EXTENSION}** | **default_extension** | String | _.png_ | The image file extension. |
+| **{TYPE}** | | | | The type of notification being performed. For example, if the user calling the notify() function specifies a _notify_type_ of _NotifyType.WARNING_, the string _warning_ would be placed as the _{TYPE}_ |
+| **{XY}** | | | | The image size to use, which is in the format of **AAxBB** (as an example, 72x72). Depending on the notification service being called, this value will vary. If you plan on defining your own images, you should provide the sizes: **32x32**, **72x72**, **128x128**, and **256x256**|
+
+Every time the **notify()** function is called from the Apprise object, it uses the URL and/or local path and applies all of the templated mask values so that it can figure out what image to display. Here is an example of how one might override Apprise to suit their own custom project needs:
+```python
+# Import this library
+import apprise
+
+# Create our AppriseAsset and populate it with some of our new values:
+asset = apprise.AppriseAsset(
+    # The following would allow you to support:
+    # C:\Path\To\My\Images\info-32x32.jpeg
+    # C:\Path\To\My\Images\warning-72x72.jpeg
+    # etc... 
+    # A raw string avoids any surprises with backslashes in Windows paths:
+    image_path_mask=r"C:\Path\To\My\Images\{TYPE}-{XY}{EXTENSION}",
+    default_extension=".jpeg"
+)
+
+# Change our name a bit:
+asset.app_id = "My App"
+asset.app_desc = "My App Announcement"
+asset.app_url = "http://nuxref.com/"
+
+# create an Apprise instance and assign it the asset we created:
+apobj = apprise.Apprise(asset=asset)
+
+# At this point you can use the Apprise() object knowing that all of the
+# default configuration has been overridden.
+```
+
+## The Apprise Notification Object
+The **[[The Apprise Object|Development_API#the-apprise-object]]** already does a really good job of managing these for you. But if you want to manage the notifications yourself, here is how you can do it:
+
+```python
+# Import this library
+import apprise
+
+# Instantiate an object. This is what the apprise object
+# would have otherwise done under the hood:
+obj = apprise.Apprise.instantiate('glib://')
+
+# Now you can use the notify() function to pass notifications.
+# notify() is similar to Apprise.notify() except the overhead
+# of tagging is not present. There is also no handling of the
+# text input type (HTML, MARKDOWN, etc.). This is on you
+# to manipulate before passing in the content.
+obj.notify(
+    body=u"A test message",
+    title=u"a title",
+)
+
+# send() is a very low level call which directly posts the
+# body and title you specify to the remote notification server.
+# There is NO data manipulation here, no overflow handling,
+# nothing. 
But this allows you to free-form your own
+# messages and pass them along using the apprise handling
+obj.send(
+    body=u"A test message",
+    title=u"a title",
+)
+```
\ No newline at end of file
diff --git a/content/en/docs/Integrations/.Notifications/Development_Apprise_Details.md b/content/en/docs/Integrations/.Notifications/Development_Apprise_Details.md
new file mode 100644
index 00000000..e2c6ef44
--- /dev/null
+++ b/content/en/docs/Integrations/.Notifications/Development_Apprise_Details.md
@@ -0,0 +1,374 @@
+# Apprise details()
+```python
+{
+    "version": "X.Y.Z",
+    "asset": { ... },
+    "schemas": [ ... ],
+}
+```
+
+A call to the **Apprise().details()** function returns a list of the supported notification services available. Its output can be broken down into 3 major categories:
+* **version**: A string representation of the current version of the Apprise library.
+* **asset**: Some developers will provide their own [[Apprise Asset Object|Development_API#the-apprise-asset-object]] tailored to their own system. This is merely a view into the currently loaded configuration.
+* **schemas**: This is a list identifying all of the supported notification services and a very high level point of reference to them, such as their official documentation, the apprise documentation, and the name of the service itself.
+
+A simple way to look at all of the data available to you can be done like so:
+```python
+import apprise
+from json import dumps
+
+# Our Apprise Object
+a = apprise.Apprise()
+
+# Our details have always been available in the past, but now provide much
+# more detail. The below shows how you can simply view them for yourself, as
+# this is all explained below
+print(dumps(a.details(), indent=2))
+```
+
+## Version
+This is just a simple string that you can use as a reference to help identify what version of Apprise is loaded. The version identified here can have a direct impact on what notification services have been made available to you and on additions to this very API. 
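Since the `version` key is a plain `X.Y.Z` string, a consumer can gate on the major number with a simple split. This is only a sketch; the version value and the guard below are illustrative, not part of the Apprise API:

```python
# Hypothetical consumer-side check: parse the X.Y.Z "version" string
# (e.g. taken from details()["version"]) and gate on the major number.
version = "0.5.2"

major, minor, patch = (int(part) for part in version.split("."))

# Illustrative guard: a front end written against the 0.x layout of
# details() may want to re-verify its integration on a major bump.
assert major == 0, "details() layout may have changed; re-verify"

print((major, minor, patch))  # (0, 5, 2)
```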
+
+While there is no intent to change the API at this time, the API could be restructured or include breaking changes _if the major version changes_. Hence, given a version of **X**.Y.Z, **X** is identified as _the major version_.
+
+In all other circumstances, content may be added to the API, but NEVER removed or changed in such a way that it would break systems referencing it.
+
+## Asset
+[[The Apprise Asset Object|Development_API#the-apprise-asset-object]] can be altered during the initialization of Apprise to conform to different products. The Asset object really just defines some static globals that are referenced throughout the entire life of the Apprise object itself.
+
+This section of the JSON response simply allows one to review what was set/specified. By default, the output might look like this:
+```python
+  "asset": {
+    "app_id": "Apprise",
+    "app_desc": "Apprise Notifications",
+
+    "theme": "default",
+    "default_extension": ".png",
+
+    "image_path_mask": "https://github.com/caronc/apprise/raw/master/apprise/assets/themes/{THEME}/apprise-{TYPE}-{XY}{EXTENSION}",
+    "image_url_logo": "https://github.com/caronc/apprise/raw/master/apprise/assets/themes/{THEME}/apprise-logo.png",
+    "image_url_mask": "https://github.com/caronc/apprise/raw/master/apprise/assets/themes/{THEME}/apprise-{TYPE}-{XY}{EXTENSION}"
+  },
+```
+
+## Schemas
+Here is where all of the supported notification services are identified, along with all of the details you need to know in order to use them. 
+
+Below is an example of what the output would look like:
+```python
+  "schemas": [
+    {
+      "service_name": "Boxcar",
+      "service_url": "https://boxcar.io/",
+
+      "setup_url": "https://github.com/caronc/apprise/wiki/Notify_boxcar",
+      "protocols": null,
+      "secure_protocols": [
+        "boxcar"
+      ],
+      # Only available if show_disabled=True (otherwise ONLY
+      # enabled plugins are returned in this response)
+      "enabled": true,
+      # Details are discussed a bit lower since there is a lot of information
+      # here.
+      "details": {...},
+      # Requirements are only shown if show_requirements=True
+      "requirements": {...}
+    },
+    {
+      "service_name": "Discord",
+      "service_url": "https://discordapp.com/",
+
+      "setup_url": "https://github.com/caronc/apprise/wiki/Notify_discored",
+      "protocols": null,
+      "secure_protocols": [
+        "discord"
+      ],
+      # Only available if show_disabled=True (otherwise ONLY
+      # enabled plugins are returned in this response)
+      "enabled": true,
+      # Details are discussed a bit lower since there is a lot of information
+      # here.
+      "details": {...},
+      # Requirements are only shown if show_requirements=True
+      "requirements": {...}
+    },
+```
+* `service_name` gives you a general description of the notification service itself.
+* `service_url` provides a link to the official source of the notification service.
+* `setup_url` provides the URL you can reference to see examples of how to construct your Apprise URL.
+* `protocols` identifies the accepted schema for non-encrypted references to the service. It is not uncommon to have this field set to `null`, simply stating that there isn't a non-encrypted form of using this service.
+* `secure_protocols` identifies the accepted schema for encrypted references to the service.
+* `enabled` is set to either True or False if the service/plugin is available (based on administrative/platform/environment dependencies). 
This field is ONLY present if you specified `show_disabled=True` on your call to **details()**.
+* `requirements` provides details on what is required for a plugin to be functional (with respect to the platform/environment). This field is ONLY present if you specified `show_requirements=True` on your call to **details()**.
+* `details` goes into a much more granular view of the protocol. This is covered in the next section.
+
+All services will have _AT LEAST_ one protocol/schema you can use to access them by.
+
+The **details()** function can also take a few keyword arguments that generate a little more overhead, but can additionally provide you information on services Apprise supports that you do not have access to (due to your platform/environment).
+```python
+import apprise
+from json import dumps
+
+# Our Apprise Object
+a = apprise.Apprise()
+
+# Get our details and include all other services available to us as well:
+details = a.details(show_disabled=True)
+```
+The payload from the above call will look almost identical to what it did before, except that it will additionally include a variable called **enabled** which is set to either `True` or `False`. One difference is that the call without this flag set ONLY returns enabled plugins.
+
+### Details
+This identifies a much more granular view of the schema object itself. There are enough details in here on every single supported notification service that an end user could ask for simple-to-read arguments like `token`, `password`, and `target_users`, and dynamically construct the URL on their own. You can also just feed these tokens into the [[Apprise Notification Object|Development_API#the-apprise-notification-object]] directly for those using this product at a very low level.
+
+This section was the newest addition to what is provided. There are 4 core sections within the `details` part of the JSON output:
+
+1. 
**templates**: Identifies the URL structure associated with the specific service, eg: +```json + { + "service_name": "Discord", + ... + "details": { + "templates": [ + "{schema}://{webhook_id}/{webhook_token}", + "{schema}://{botname}@{webhook_id}/{webhook_token}" + ] + ... + } +... +``` +2. **tokens**: This provides the full mappings of each entry identified in the **templates** (identified above). It gives some data to easily build a web page and/or application from by allowing developers to dynamically generate the Apprise URLs.
It also provides a **map_to** argument which maps the token directly to the Apprise Notification Class (should you want to manually initialize it this way instead of via a URL). Some tokens can be combined into one single token (as a list). The **map_to** argument additionally provides this connection as well. This is discussed more below. +```json + { + "service_name": "Discord", + ... + "details": { + ... + "tokens": { + "webhook_token": { + "map_to": "webhook_token", + "required": true, + "type": "string", + "name": "Webhook Token", + "private": true + }, + "schema": { + "name": "Schema", + "default": "discord", + "required": true, + "private": false, + "map_to": "schema", + "values": [ + "discord" + ], + "type": "choice:string" + }, + "botname": { + "type": "string", + "required": false, + "map_to": "user", + "name": "Bot Name", + "private": false + }, + "webhook_id": { + "map_to": "webhook_id", + "required": true, + "type": "string", + "name": "Webhook ID", + "private": true + } + } + ... + } +... +``` +3. **args**: This identifies any URL arguments you want to define. The arguments reside after the URL is defined, such as `http://path/?arg=val&arg2=val2`. URL arguments are never mandatory for a URL's construction with Apprise and merely provide extended options. A continued example (with respect to Discord) would look like this: +```json +... + { + "service_name": "Discord", + ... + "details": { + ... 
+      "args": {
+        "footer": {
+          "name": "Display Footer",
+          "default": false,
+          "required": false,
+          "private": false,
+          "map_to": "footer",
+          "type": "bool"
+        },
+        "tts": {
+          "name": "Text To Speech",
+          "default": false,
+          "required": false,
+          "private": false,
+          "map_to": "tts",
+          "type": "bool"
+        },
+        "format": {
+          "name": "Notify Format",
+          "default": "text",
+          "type": "choice:string",
+          "required": false,
+          "private": false,
+          "map_to": "format",
+          "values": [
+            "text",
+            "html",
+            "markdown"
+          ]
+        },
+        "footer_logo": {
+          "name": "Footer Logo",
+          "default": true,
+          "required": false,
+          "private": false,
+          "map_to": "footer_logo",
+          "type": "bool"
+        },
+        "avatar": {
+          "name": "Avatar Image",
+          "default": true,
+          "required": false,
+          "private": false,
+          "map_to": "avatar",
+          "type": "bool"
+        },
+        "overflow": {
+          "name": "Overflow Mode",
+          "default": "upstream",
+          "type": "choice:string",
+          "required": false,
+          "private": false,
+          "map_to": "overflow",
+          "values": [
+            "upstream",
+            "truncate",
+            "split"
+          ]
+        }
+      }
+      ...
+    }
+...
+```
+4. **kwargs**: Similar to args, these are never required. The subtle difference between **args** and **kwargs** is that with **args** the key names are already defined, while with **kwargs** the user defines both the key and its value when building the `?+key=value&-key2=value`. Custom **kwargs** in Apprise are _ALWAYS_ prefixed with a plus (**+**) or minus (**-**) symbol; for this reason there will ALWAYS be a **prefix** field that identifies which symbol is applicable. There are very few notification services at this time that use this, but to support them, you'll find them here. JSON and XML URLs for example allow one to set the _HTTP Headers_ passed to the server they _POST_ to.
+
+```json
+...
+  {
+    "service_name": "JSON",
+    ...
+    "details": {
+      ...
+      "kwargs": {
+        "headers": {
+          "name": "HTTP Header",
+          "required": false,
+          "private": false,
+          "prefix": "+",
+          "map_to": "headers",
+          "type": "string"
+        }
+      }
+      ...
+    }
+... 
+``` +## Argument Breakdown +Here I'll break down the arguments a bit more and what they mean: + +| Argument | Values | Description | +| --------- |:-------------:| ------------ | +| **type** | **int**, **float**, **bool**, **string**
**list:int**, **list:float**, **list:string**
**choice:int**, **choice:float**, **choice:string** | The **type** field will always be present except if an **alias_of** exists. It will allow you to determine what the expected object should be. Many of the additional arguments that can reside in this new section will be completely conditional on the type. |
+| **name** | **string** | This is a fully translatable string. That said, at this time this pull request only supports English, but it opens the door for others who want to translate this into other languages. The **name** field will always be present except if an **alias_of** exists.
+| **values** | **list()**| The **values** field **ONLY** exists if the **type** was a **choice:** or **bool** (choice:bool is redundant). This provides the actual choices that are explicitly allowed.
+| **required** | **bool**| This is only set if you can rest assured that the plugin will fail to initialize if this value isn't set.
+| **default** | _some value_ | To simplify a user's life, sometimes it's easier to pre-provide default values they can use.
+| **private** | **bool** | This is set to **True** if the argument contains something that would otherwise be private to the user making the notification. This could be something such as a password, a private token, an authentication key, etc. If you're building a website, it might be kind of you to place the **password** input type on these.
+| **min** | **int** / **float** | When the **type** is of **int** or **float**, this identifies the minimum value that would be accepted. The **min** will not always be present if there are no restrictions set. This is just a field that the developer can use to help with some early verification.
+| **max** | **int** / **float** | When the **type** is of **int** or **float**, this identifies the maximum value that would be accepted. The **max** will not always be present if there are no restrictions set. 
This is just a field that the developer can use to help with some early verification.
+| **delim** | **list** | If we are dealing with a **list:** type, then we are accepting more than one element. For that we need a way to separate one element from another. This identifies one or more entries that are acceptable as delimiters. The **delim** argument will only be present if the type is of **list:**.
+| **regex** | **(regex, options)** | If we are dealing with a **string** type (this includes a **list:string**), then we provide a regular expression the developer can optionally use to validate the strings specified. The result is always returned as a tuple where index zero (0) is the actual regex string and index one (1) is the regex options as a string ('i' = case insensitive, etc). It's important to note that not all **string** entries have this entry, so you shouldn't depend on its presence.
+| **prefix** | **string** | Some arguments are identified by apprise based on a prefix value placed in front of them. For example, with Slack, the at (@) symbol identifies a user whereas the pound/hashtag (#) identifies a channel. If a prefix is identified, it _usually_ means that the attribute has a **map_to** argument causing it to map to a shared list. It's important to make sure the prefix exists when constructing the URL and/or passing the argument directly into the Apprise Plugin for it to be effective.
+| **alias_of** | **string** | If you see this, then you won't even see a **type** block. In fact **alias_of** is a lone wolf, and when it exists it merely points to a **token** entry (_never another **arg**_) where you can get the details of this item from. Think of it as a _symbolic link_; to make apprise easy to work with, some Notification services have more than one way to provide the same information. **alias_of** prevents the ambiguity of defining the same thing twice.
+| **map_to** | **string** | This has one core meaning and one helping one. 
First off, its primary reason for existence is for those who don't want to build URLs from the **templates** and instead want to directly _instantiate_ their own instance of the Notification service manually (using the class object). **map_to** always points to the function argument name. 
The other use of this is in cases where the **prefix** is used. You should always check the **token** table to see if the **map_to** can be mapped back to an element already identified there.
+
+## Using The Tokens
+So, let's presume you built your website and/or application and provided all of these options to the user. They provided you with all of these options/tokens populated with their data, and now you need to send a notification for them.
+
+No problem; the **Apprise.add()** function now supports dictionaries of URL arguments (not just the URL strings themselves).
+```python
+import apprise
+
+# First you'll get your details which will provide your app with all the information you need
+# to get the data you need from the user with any supported notification service.
+a = apprise.Apprise()
+apprise_details = a.details()
+
+# work your magic and get the user to populate the tokens associated with the
+# notification services
+results = your_code_that_gets_input_from_user(apprise_details)
+
+# Presuming you have all of your tokens now from one of the notification services:
+# for example, for email you might have:
+# results = {
+#     'schema': 'mailto',
+#     'host': 'google.com',
+#     'user': 'myuser',
+#     'password': 'mypassword',
+# }
+#
+# Simply add your results:
+a.add(results)
+
+# Done!
+```
+**Note:** The dictionary keys that you pass into Apprise.add() must be based on the **map_to** directive.
+
+## Internationalization
+Full I18n support is built into this pull request, allowing the **name** directive to be translated into other languages (making **details()** support people who've built their website around multi-language support).
+
+The language is automatically translated on the fly with each call to **details()**. At this time only English is supported, but I welcome anyone wishing to help with translations into other languages. 
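Conceptually, the translation of a **name** field behaves like a catalogue lookup with an English fallback. The sketch below is illustrative only; the French string is a hypothetical translation and this is not Apprise's actual implementation:

```python
# Hypothetical message catalogues keyed by language code
catalogues = {
    "en": {"Webhook Token": "Webhook Token"},
    "fr": {"Webhook Token": "Jeton de Webhook"},
}

def translate(name, lang="en"):
    # Unknown languages fall back to the English catalogue,
    # and unknown strings fall back to themselves
    return catalogues.get(lang, catalogues["en"]).get(name, name)

print(translate("Webhook Token", "fr"))  # Jeton de Webhook
print(translate("Webhook Token", "de"))  # Webhook Token
```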
+ +Here is how it works: +```python +import apprise + +# Our Apprise Object +a = apprise.Apprise() + +# Get our details in the local language detected from the OS +# apprise is installed on +details = a.details() + +# Get our details in English +details = a.details('en') + +# Get our details in French +# Note: While this is possible, I still need the translations +details = a.details('fr') +``` + +Here is how you can help if you want to. Let's say you want to help translate apprise into another language: +```bash +# First, check out your own copy of apprise and change into the directory +# it downloaded to + +# Next prepare your language (the below prepares French - fr) +python setup.py init_catalog -l fr + +# Now have a look at a new file that appeared; with respect to the +# above command it will be: apprise/i18n/fr/LC_MESSAGES/apprise.po +# Use your favorite editor to edit this file: +code apprise/i18n/fr/LC_MESSAGES/apprise.po +# or # +gvim apprise/i18n/fr/LC_MESSAGES/apprise.po +# or # +notepad apprise/i18n/fr/LC_MESSAGES/apprise.po +# or # +emacs apprise/i18n/fr/LC_MESSAGES/apprise.po +# you get the idea... +# add your translations and pass them back; it's that easy +``` \ No newline at end of file diff --git a/content/en/docs/Integrations/.Notifications/Development_Contribution.md b/content/en/docs/Integrations/.Notifications/Development_Contribution.md new file mode 100644 index 00000000..5207f626 --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Development_Contribution.md @@ -0,0 +1,188 @@ +# Introduction +Thanks to all who have landed on this page with the intent of contributing to the apprise library. Any changes you make will easily make it upstream as long as there are: +* **Unit tests**: apprise is currently sitting at 100% test coverage. The goal is to keep it this way! :slightly_smiling_face: +* **PEP8 Compliance**: Following the [PEP 8 Style Guide for Python](https://www.python.org/dev/peps/pep-0008/) is a must.
Most editors have PEP8 plugins that let you keep everything compliant as you go. +* **Python 2.7 backwards support**. I'd like to support Python 2.7 for as long as I can, simply because a huge number of servers are still running it today. When you push your code upstream, a code-runner will test all of this for you if you're uncertain. + +The following should get you all set up: +```bash +# Install our apprise development requirements +pip install --requirement requirements.txt --requirement dev-requirements.txt +``` + +# Building Your Own Notification Plugin + +It basically boils down to this: +```python +# Whatever you call your class that inherits NotifyBase, make sure +# the filename matches it as well. Below is an example of what +# plugins/NotifyFooBar.py might look like: + +from .NotifyBase import NotifyBase +from ..AppriseLocale import gettext_lazy as _ + +class NotifyFooBar(NotifyBase): + # Define a human readable description of our object + # _() is wrapped for future language translations + service_name = _('FooBar Notification') + + # Our protocol:// Apprise will detect and hand off further + # parsing of the URL to your parse_url() function you write: + protocol = 'foobar' + + # Where can people get information on how to use your plugin? + setup_url = 'https://github.com/caronc/apprise/wiki/Notify_FooBar' + + def __init__(self, **kwargs): + """ + Your class initialization + """ + super(NotifyFooBar, self).__init__(**kwargs) + + def url(self, privacy=False, *args, **kwargs): + """ + Always be able to build your Apprise URL exactly the way you parsed + it in the parse_url() function + """ + return self.protocol + "://" + + def send(self, body, title='', **kwargs): + """ + Perform Notification here using the provided body and title + """ + + print("Foobar Notification Sent to STDOUT") + + # always return True if your notification was sent successfully + # otherwise return False if it failed.
+ return True + + @staticmethod + def parse_url(url): + """ + Parse the URL that starts with foobar:// + """ + # NotifyBase.parse_url() makes the initial parsing of your string + # very easy. It will tokenize the entire URL for you. The tokens + # are then passed into the __init__() function you defined to generate + # your object + tokens = NotifyBase.parse_url(url, verify_host=False) + + # massage your tokens here + + return tokens +``` + +With respect to the above example: +- You just need to create a single notification python file as `/plugins/NotifyServiceName.py` +- Make sure the class inside is called `NotifyServiceName` and inherits from `NotifyBase` +- Make sure your class object name is the same as the filename you create. This is very important! +- From there you need to define, at a bare minimum: + - **the class objects**: + - `service_name`: A string that acts as a default descriptive name associated with the Notification + - `service_url`: A string that identifies the platform/service's URL. This is used purely as metadata for those who seek it, but this field is required. + - `protocol` and/or `secure_protocol`: A string (or a list of strings) identifying the scheme:// keyword that apprise uses to map to the Plugin Class it's associated with. For example, `slack` is mapped to the `NotifySlack` class found in the [`/plugins/NotifySlack.py` file](https://github.com/caronc/apprise/blob/master/apprise/plugins/NotifySlack.py). This must be defined so that people can leverage your plugin. You must choose a protocol name that isn't already taken. + - `setup_url`: A string that identifies the URL a user can use to get information on how to use this Apprise Notification. At this time I'm just creating URLs that point back to my GitHub Wiki page. + + - **the functions**: + 1. `__init__(self, *args, **kwargs)`: Define what is required to initialize your object/notification.
Just make sure to cross-reference it in the `template*` entries (explained above). + 1. `send(self, body, title='', *args, **kwargs)` at a bare minimum. See other Notify scripts for how you can expand on this; but just take the `body` and `title`, construct your message, and send it. + 1. `url()`: This function must be able to construct a URL that would re-generate a copy of the exact same object if passed into `parse_url()` + 1. `parse_url(url)`: this is a **staticmethod** that parses the Apprise URL and breaks it into a dictionary of components. The dictionary it creates must map up to what the `__init__()` takes as its arguments + + - **Putting it together**: + ```python + from Apprise.plugins import NotifyMyService + import Apprise + + # Apprise is nothing but a manager of individual plugins + a = Apprise() + + # Under the hood, add() just calls NotifyMyService.parse_url() and passes + # the result set into your new service's __init__() function. + a.add('myscheme://details/?more=details&are=optional') + + # There would be one new service added to our manager now: + assert len(a) == 1 + + # You can directly access the notification services this way: + # index element 0 exists because we added it successfully above (assuming you + # properly followed all the rules above): + assert isinstance(a[0], NotifyMyService) + + # So we know we can access the notification; this would create a second notification service: + # The only thing add() does is match the schema up with the class it should use and then call its + # NotifyServiceName.parse_url() + + # So parse_url() is in charge of preparing all of the arguments we can use to instantiate our object + # With that, it can then do Object(**parse_url_response) + a.add(a[0].url()) + + # Hopefully this is making sense so far.... But now we've called add() twice...
so we'll have 2 entries + # and if we built our 3 core functions (__init__(), `url()`, and `parse_url()`) correctly, they should be almost + # copies of one another (yet 2 instances) + assert len(a) == 2 + + # URLs are the same + assert a[0].url() == a[1].url() + + # That's really all there is to it... when you call `a.notify()`, there are some functions and tools + # that handle some common things, but at the end of the day, it will call the `send()` function + # you defined. + ``` + + - **Putting it together without the overhead of the Apprise manager**: + ```python + from Apprise.plugins import NotifyMyService + + # You can do this manually too if you want to test without the overhead + # of the Apprise manager itself: + results = NotifyMyService.parse_url('myscheme://details/?more=details&are=optional') + + # A simple dictionary of all of our arguments ready to go: + assert isinstance(results, dict) + + # Now instantiate your object: + obj = NotifyMyService(**results) + + # If you built your NotifyMyService correctly, then you should be able + # to build a perfect copy of the object using its url() call: + clone_results = NotifyMyService.parse_url(obj.url()) + + # A simple dictionary of all of our arguments ready to go: + assert isinstance(clone_results, dict) + + # If you did this properly, you'll have a second working instance + # you can work with (this is a great test to make sure you coded + # your new notification service perfectly) + clone = NotifyMyService(**clone_results) + + # The best test of all to ensure you did everything well; both the + # clone and original object you created should produce the same + # url() + assert clone.url() == obj.url() + ``` + + Any other functions you want to define can be added to your heart's content (if it helps with organization, structure, etc.) + Just avoid conflicting with any function written in `NotifyBase()` and `URLBase()` + + If your service is really complex (and requires a lot of
code), maybe it's easier to split your code into multiple files. This is how I handled the [NotifyFCM plugin I wrote](https://github.com/caronc/apprise/tree/master/apprise/plugins/NotifyFCM) which was based on Google's version. +- Don't be afraid to just copy and paste another already-created service and update it for your own usage. + - [plugins/NotifyJSON.py](https://github.com/caronc/apprise/blob/master/apprise/plugins/NotifyJSON.py) is a simple reference you can use (not too complicated). + - [plugins/NotifyGitter.py](https://github.com/caronc/apprise/blob/master/apprise/plugins/NotifyGitter.py) is a bit more complicated, but introduces an upstream API interface with attachments. + - [plugin/NotifyFCM](https://github.com/caronc/apprise/tree/master/apprise/plugins/NotifyFCM) is a much more complex design, but illustrates how you can break your notification into smaller components. + - All in all... just have a look at the [plugins directory](https://github.com/caronc/apprise/tree/master/apprise/plugins) and feel free to use it as a reference to help structure your own notification service. + +You can have a look at the NotifyBase object to see all of the other entries you can define that Apprise will look after for you (such as restricting the message length, title length, handling TEXT -> Markdown, etc.). You can also look at how other classes were built. + +## Demo Plugins +Some people learn by just working with already pre-written code. So here are some sample plugins I put together that you can copy and paste to start your own notification service off of. Each plugin tries to explain, with a lot of in-line code comments, what is going on and why things are the way they are: + +- [A Very Basic Plugin](DemoPlugin_Basic) that simply posts the message to stdout +- [An HTTP Web Request Based Plugin](DemoPlugin_WebRequest) + +# Testing +There are a few tools that work right out of the box in the root of any branch you're working in.
These tools allow you to clone the Apprise branch and immediately test your changes without having to install anything into your environment. + +More details about them can be found [here](https://github.com/caronc/apprise/tree/master/bin). diff --git a/content/en/docs/Integrations/.Notifications/Development_LogCapture.md b/content/en/docs/Integrations/.Notifications/Development_LogCapture.md new file mode 100644 index 00000000..132fdb2b --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Development_LogCapture.md @@ -0,0 +1,116 @@ +# LogCapture +`apprise.LogCapture()` allows you to capture all of the logging information within your program. You may wish to relay the information to the screen, or maybe you just want to have a look at its contents when one or more notifications fail to be delivered. + +The class can capture information into a temporary (or permanent) log file, or you can capture it straight into memory. It's incredibly easy to use too. + +## Learn by Example +### Capture to Memory +**Your code changes from this:** +```python +import apprise + +# Instantiate our object +apobj = apprise.Apprise() + +# add your configuration +apobj.add('mailto://user:pass@example.com') +apobj.add('kodi://kodi.example.com') + +# Send our notification +apobj.notify(title="hello", body="world") +``` + +**To this:** +```python +import apprise + +# Instantiate our object +apobj = apprise.Apprise() + +# add your configuration +apobj.add('mailto://user:pass@example.com') +apobj.add('kodi://kodi.example.com') + +# Prepare a LogCapture() that sets logging to INFO.
This means your +# logs will include all INFO, WARNING, ERROR, and CRITICAL entries +with apprise.LogCapture(level=apprise.logging.INFO) as output: + # Send our notification + apobj.notify(title="hello", body="world") + + # At this point of our code, we can have a look at our output + # to see all of the logging that surrounded our notification(s) + # Note that `output` is a StringIO object: + print(output.getvalue()) +``` + +### Capture to File +The class can write directly to memory (as you saw above), where content was written into an `io.StringIO` object. You can also write content into a temporary file: +```python +# In this example: +# - we write to a log file /tmp/apprise.tmp +# - we only capture WARNING, ERROR, and CRITICAL entries +# - we assume that `apobj` is an Apprise object already loaded with a few notification +# services. +with apprise.LogCapture(path='/tmp/apprise.tmp', level=apprise.logging.WARNING) as output: + # Send our notification + apobj.notify(title="hello", body="world") + + # At this point of our code, we can have a look at our output + # to see all of the logging that surrounded our notification(s) + # Note that `output` is a File object because we specified the `path` above + print(output.read()) + +# What is VERY important to note is that whatever was specified with the `path=` +# entry above (in this case /tmp/apprise.tmp) will no longer exist at this point (outside of the +# `with` block). +``` + +If you want to write to a file and have it persist after your `with` block has completed, the syntax is very similar; you just need to add `delete=False` to the **LogCapture()** initialization like so: +```python +# In this example: +# - we write to a log file /tmp/apprise.persistent +# - we only capture WARNING, ERROR, and CRITICAL entries +# - we assume that `apobj` is an Apprise object already loaded with a few notification +# services.
+with apprise.LogCapture( + path='/tmp/apprise.persistent', level=apprise.logging.WARNING, + delete=False) as output: + + # Send our notification + apobj.notify(title="hello", body="world") + + # At this point of our code, we can have a look at our output + # to see all of the logging that surrounded our notification(s) + # Note that `output` is a File object because we specified the `path` above + print(output.read()) + +# /tmp/apprise.persistent will still exist on disk at this point +``` + +## Class Details + +- By default, if no `level=` is specified, the log level you set globally in your program is used. + + +### Tricks +Format your logs for HTML: +```python +# The default fmt for LogCapture() is: '%(asctime)s - %(levelname)s - %(message)s' + +# But you can leverage `fmt` to control how the content is formatted; so you can +# build your HTML content in advance here if you like: +fmt = '
<li>%(asctime)s' \ + '<b>%(levelname)s</b>:' \ + '<pre>%(message)s</pre></li>' + +# Now specify our format (and over-ride the default): +with apprise.LogCapture(level=apprise.logging.WARNING, fmt=fmt) as logs: + apobj.notify("hello world") + + # Wrap logs in a `<ul>` tag: + html = '<ul>{}</ul>'.format(logs.getvalue()) + + # Now `html` consists of formatted content; keep in mind that + # this solution isn't bulletproof as `%(message)s` isn't + # pre-escaped/encoded. +``` diff --git a/content/en/docs/Integrations/.Notifications/Home.md b/content/en/docs/Integrations/.Notifications/Home.md new file mode 100644 index 00000000..ff83a01f --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Home.md @@ -0,0 +1,122 @@ +## Introduction +Apprise lets you send notifications to a large number of supported notification services. The lightweight framework can be easily integrated into any of your Python applications. Or you can simply send notifications right from the command line. + +Its primary design goal was to eliminate the inconsistencies in usage from one notification service to another. By harnessing a simple URL string, you can drive any of the 70+ supported services. + +## Notification Services: +All of the notification services supported by Apprise can be found in this section. + +**Legend** +* :books: : *Supports File Attachments* +* :calling: : *SMS Based Services* + +Detailed instructions on how to connect your notification service(s) up with Apprise can be found by clicking on the appropriate link(s) below: +1. [[Apprise API|Notify_apprise_api]] +1. [[AWS SES :books:|Notify_ses]] +1. [[AWS SNS :calling:|Notify_sns]] +1. [[Boxcar|Notify_boxcar]] +1. [[ClickSend :calling:|Notify_clicksend]] +1. [[DAPNET|Notify_dapnet]] +1. [[DingTalk :calling:|Notify_dingtalk]] +1. [[Discord :books:|Notify_discord]] +1. [[D7 Networks :calling:|Notify_d7networks]] +1. [[E-Mail :books:|Notify_email]] +1. [[Emby|Notify_emby]] +1. [[Enigma2 Devices|Notify_enigma2]] +1. [[Faast|Notify_faast]] +1. [[FCM - (Google) Firebase Cloud Messaging|Notify_fcm]] +1. [[Flock|Notify_flock]] +1. [[Gitter|Notify_gitter]] +1. [[Google Chat|Notify_googlechat]] +1. [[Gotify|Notify_gotify]] +1. [[Growl|Notify_growl]] +1. [[Home Assistant|Notify_homeassistant]] +1. 
[[IFTTT|Notify_ifttt]] +1. [[Join|Notify_join]] +1. [[Kavenegar :calling:|Notify_kavenegar]] +1. [[KODI|Notify_kodi]] +1. [[Kumulos|Notify_kumulos]] +1. [[LaMetric Time/Clock|Notify_lametric]] +1. [[Mailgun :books:|Notify_mailgun]] +1. [[Matrix|Notify_matrix]] +1. [[Mattermost|Notify_mattermost]] +1. [[MessageBird :calling:|Notify_messagebird]] +1. [[Microsoft Teams|Notify_msteams]] +1. [[MQTT|Notify_mqtt]] +1. [[MSG91 :calling:|Notify_msg91]] +1. [[Nexmo :calling:|Notify_nexmo]] +1. [[Nextcloud Messaging|Notify_nextcloud]] +1. [[Nextcloud Talk|Notify_nextcloudtalk]] +1. [[Notica|Notify_notica]] +1. [[Notifico|Notify_notifico]] +1. [[Office 365|Notify_office365]] +1. [[OneSignal|Notify_onesignal]] +1. [[Opsgenie|Notify_opsgenie]] +1. [[Parse Platform|Notify_parseplatform]] +1. [[Popcorn Notify|Notify_popcornnotify]] +1. [[Prowl|Notify_prowl]] +1. [[PushBullet :books:|Notify_pushbullet]] +1. [[Pushed|Notify_pushed]] +1. [[Pushjet|Notify_pushjet]] +1. [[Pushover :books:|Notify_pushover]] +1. [[PushSafer :books:|Notify_pushsafer]] +1. [[Reddit|Notify_reddit]] +1. [[Rocket.Chat|Notify_rocketchat]] +1. [[Ryver|Notify_ryver]] +1. [[SendGrid|Notify_sendgrid]] +1. [[ServerChan|Notify_serverchan]] +1. [[SimplePush|Notify_simplepush]] +1. [[Sinch|Notify_sinch]] +1. [[Slack :books:|Notify_slack]] +1. [[SMTP2Go :books:|Notify_smtp2go]] +1. [[SparkPost :books:|Notify_sparkpost]] +1. [[Spontit|Notify_spontit]] +1. [[Streamlabs|Notify_streamlabs]] +1. [[Syslog|Notify_syslog]] +1. [[Techulus Push|Notify_techulus]] +1. [[Telegram :books:|Notify_telegram]] +1. [[Twilio :calling:|Notify_twilio]] +1. [[Twist|Notify_twist]] +1. [[Twitter|Notify_twitter]] +1. [[XBMC|Notify_xbmc]] +1. [[XMPP|Notify_xmpp]] +1. [[Webex Teams|Notify_wxteams]] +1. [[Zulip|Notify_zulip]] + +### Custom Notification Services +The following are general notification services you can configure to post to any website of your choice. From there you can decide what actions you want to take. +1. 
[[FORM :books:|Notify_Custom_Form]] +1. [[JSON :books:|Notify_Custom_JSON]] +1. [[XML :books:|Notify_Custom_XML]] + +### Desktop Notification Services +The following services work locally on the same PC they're run on. +1. Linux Notifications: + 1. [[Gnome|Notify_gnome]] + 1. [[Qt|Notify_dbus]] + 1. [[DBus|Notify_dbus]] +1. [[MacOS X Notifications|Notify_macosx]] +1. [[Windows Notifications|Notify_windows]] + +## Configuration +Configuration can be retrieved via a flat file on your local system or from a remote server via the http(s) protocol. You can learn more about this here: +* [[General Configuration|config]] + +The following configuration formats are supported: +* [[TEXT|config_text]] +* [[YAML|config_yaml]] + +## Installation +Apprise can be installed as easily as: +```bash +pip install apprise +``` + +## Other +* :mega: [[Using the CLI|CLI_Usage]] +* :gear: [[Configuration Help|config]] +* :bulb: [[Contributing|Development_Contribution]] +* :wrench: [[Troubleshooting|Troubleshooting]] +* :earth_americas: [Apprise API/Web Interface](https://github.com/caronc/apprise-api) +* :heart: [[Apprise Sponsors|Sponsors]] +* :skull: [[Notification Graveyard|Notification_Graveyard]] \ No newline at end of file diff --git a/content/en/docs/Integrations/.Notifications/Notify_Custom_Form.md b/content/en/docs/Integrations/.Notifications/Notify_Custom_Form.md new file mode 100644 index 00000000..e080fea9 --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_Custom_Form.md @@ -0,0 +1,82 @@ +## FORM HTTP POST Notifications +* **Source**: n/a +* **Icon Support**: No +* **Attachment Support**: yes +* **Message Format**: application/x-www-form-urlencoded +* **Message Limit**: 32768 Characters per message + +This is just a custom notification that allows you to have this tool post to a web server as a simple FORM (`application/x-www-form-urlencoded`). This is useful for those who want to be notified via their own custom methods.
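As a rough sketch of what a receiving endpoint might do with such a post, Python's standard library can decode the urlencoded body. The sample payload below is illustrative only; the actual fields Apprise sends are described next:

```python
from urllib.parse import parse_qs

# An illustrative application/x-www-form-urlencoded payload, using
# made-up values for the fields a form:// post carries:
raw = 'version=1.0&title=Backup+Complete&body=All+done&type=success'

# parse_qs() returns a list per key; flatten to single values:
fields = {key: values[0] for key, values in parse_qs(raw).items()}

print(fields['type'])   # success
print(fields['title'])  # Backup Complete
```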
+ +The payload will include a `body`, `title`, `version`, and `type` in its response. You can add more (see below for details). + +The *type* will be one of the following: +* **info**: An informative type message +* **success**: A successful report +* **failure**: A failure report +* **warning**: A warning report + +### Syntax +Valid syntax is as follows: +* `form://{hostname}` +* `form://{hostname}:{port}` +* `form://{user}:{password}@{hostname}` +* `form://{user}:{password}@{hostname}:{port}` + +The secure versions: +* `forms://{hostname}` +* `forms://{hostname}:{port}` +* `forms://{user}:{password}@{hostname}` +* `forms://{user}:{password}@{hostname}:{port}` + +### Parameter Breakdown +| Variable | Required | Description +| ----------- | -------- | ----------- +| hostname | Yes | The Web Server's hostname +| port | No | The port our Web server is listening on. By default the port is **80** for **form://** and **443** for all **forms://** references. +| user | No | If your system is set up to use HTTP-AUTH, you can provide a _username_ for authentication to it. +| password | No | If your system is set up to use HTTP-AUTH, you can provide a _password_ for authentication to it. +| method | No | Optionally specify the server http method; possible options are `post`, `put`, `get`, `delete`, and `head`. By default if no method is specified then `post` is used. + +**Note:** If you include file attachments, each one is concatenated into the same single post to the upstream server. The `Content-Type` header request also changes from `application/x-www-form-urlencoded` to `multipart/form-data` in this case. + +#### Example +Send a FORM Based web request to our web server listening on port 80: +```bash +# Assuming our {hostname} is my.server.local +apprise form://my.server.local +``` + +### Header Manipulation +Some users may require special HTTP headers to be present when they post their data to their server.
This can be accomplished by just sticking a plus symbol (**+**) in front of any parameter you specify on your URL string. +```bash +# Below would set the header: +# X-Token: abcdefg +# +# Assuming our {hostname} is localhost +# Assuming our {port} is 8080 +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + "form://localhost:8080/path/?+X-Token=abcdefg" + +# Multiple headers just require more entries defined: +# Below would set the headers: +# X-Token: abcdefg +# X-Apprise: is great +# +# Assuming our {hostname} is localhost +# Assuming our {port} is 8080 +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + "form://localhost:8080/path/?+X-Token=abcdefg&+X-Apprise=is%20great" +``` + +### Payload Manipulation +The payload can have entries added to it in addition to the default `body`, `title`, and `type` values. This can be accomplished by just sticking a colon symbol (**:**) in front of any parameter you specify on your URL string. + +```bash +# Below would add the entry app=mysystem to the payload: +# +# Assuming our {hostname} is localhost +# Assuming we want to include app=mysystem as part of the payload: +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + "form://localhost/?:app=mysystem" +``` \ No newline at end of file diff --git a/content/en/docs/Integrations/.Notifications/Notify_Custom_JSON.md b/content/en/docs/Integrations/.Notifications/Notify_Custom_JSON.md new file mode 100644 index 00000000..0c4046e7 --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_Custom_JSON.md @@ -0,0 +1,75 @@ +## JSON HTTP POST Notifications +* **Source**: n/a +* **Icon Support**: No +* **Attachment Support**: yes +* **Message Format**: JSON +* **Message Limit**: 32768 Characters per message + +This is just a custom notification that allows you to have this tool post to a web server as a simple JSON string. This is useful for those who want to be notified via their own custom methods.
+ +The format might look something like this: +```json +{ + "version": "1.0", + "title": "Some Great Software Downloaded Successfully", + "message": "Plenty of details here", + "type": "info" +} +``` + +The *type* will be one of the following: +* **info**: An informative type message +* **success**: A successful report +* **failure**: A failure report +* **warning**: A warning report + +### Syntax +Valid syntax is as follows: +* `json://{hostname}` +* `json://{hostname}:{port}` +* `json://{user}:{password}@{hostname}` +* `json://{user}:{password}@{hostname}:{port}` + +The secure versions: +* `jsons://{hostname}` +* `jsons://{hostname}:{port}` +* `jsons://{user}:{password}@{hostname}` +* `jsons://{user}:{password}@{hostname}:{port}` + +### Parameter Breakdown +| Variable | Required | Description +| ----------- | -------- | ----------- +| hostname | Yes | The Web Server's hostname +| port | No | The port our Web server is listening on. By default the port is **80** for **json://** and **443** for all **jsons://** references. +| user | No | If your system is set up to use HTTP-AUTH, you can provide a _username_ for authentication to it. +| password | No | If your system is set up to use HTTP-AUTH, you can provide a _password_ for authentication to it. +| method | No | Optionally specify the server http method; possible options are `post`, `put`, `get`, `delete`, and `head`. By default if no method is specified then `post` is used. + +#### Example +Send a JSON notification to our web server listening on port 80: +```bash +# Assuming our {hostname} is json.server.local +apprise json://json.server.local +``` + +### Header Manipulation +Some users may require special HTTP headers to be present when they post their data to their server. This can be accomplished by just sticking a plus symbol (**+**) in front of any parameter you specify on your URL string.
+```bash +# Below would set the header: +# X-Token: abcdefg +# +# Assuming our {hostname} is localhost +# Assuming our {port} is 8080 +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + "json://localhost:8080/path/?+X-Token=abcdefg" + +# Multiple headers just require more entries defined: +# Below would set the headers: +# X-Token: abcdefg +# X-Apprise: is great +# +# Assuming our {hostname} is localhost +# Assuming our {port} is 8080 +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + "json://localhost:8080/path/?+X-Token=abcdefg&+X-Apprise=is%20great" +``` \ No newline at end of file diff --git a/content/en/docs/Integrations/.Notifications/Notify_Custom_XML.md b/content/en/docs/Integrations/.Notifications/Notify_Custom_XML.md new file mode 100644 index 00000000..270bc4ce --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_Custom_XML.md @@ -0,0 +1,82 @@ +## XML HTTP POST Notifications +* **Source**: n/a +* **Icon Support**: No +* **Message Format**: XML +* **Attachment Support**: yes +* **Message Limit**: 32768 Characters per message + +This is just a custom notification that allows you to have this tool post to a web server as a simple XML string. This is useful for those who want to be notified via their own custom methods. + +The format might look something like this: +```xml +<?xml version='1.0' encoding='utf-8'?> +<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"> + <soapenv:Body> + <Notification> + <Version>1.0</Version> + <Subject>What A Great Movie Downloaded Successfully</Subject> + <MessageType>info</MessageType> + <Message>Plenty of details here...</Message> + </Notification> + </soapenv:Body> +</soapenv:Envelope> +``` +The *MessageType* will be one of the following: +* **info**: An informative type message +* **success**: A successful report +* **failure**: A failure report +* **warning**: A warning report + +### Syntax +Valid syntax is as follows: +* `xml://{hostname}` +* `xml://{hostname}:{port}` +* `xml://{user}:{password}@{hostname}` +* `xml://{user}:{password}@{hostname}:{port}` + +The secure versions: +* `xmls://{hostname}` +* `xmls://{hostname}:{port}` +* `xmls://{user}:{password}@{hostname}` +* `xmls://{user}:{password}@{hostname}:{port}` + +### Parameter Breakdown +| Variable | Required | Description +| ----------- | -------- | ----------- +| hostname | Yes | The Web Server's hostname +| port | No | The port our Web server is listening on. By default the port is **80** for **xml://** and **443** for all **xmls://** references. +| user | No | If your system is set up to use HTTP-AUTH, you can provide a _username_ for authentication to it. +| password | No | If your system is set up to use HTTP-AUTH, you can provide a _password_ for authentication to it. +| method | No | Optionally specify the server http method; possible options are `post`, `put`, `get`, `delete`, and `head`. By default if no method is specified then `post` is used. + +#### Example +Send an XML notification to our web server listening on port 80: +```bash +# Assuming our {hostname} is xml.server.local +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + xml://xml.server.local +``` +### Header Manipulation +Some users may require special HTTP headers to be present when they post their data to their server. This can be accomplished by just sticking a plus symbol (**+**) in front of any parameter you specify on your URL string.
+```bash +# Below would set the header: +# X-Token: abcdefg +# +# Assuming our {hostname} is localhost +# Assuming our {port} is 8080 +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + "xml://localhost:8080/path/?+X-Token=abcdefg" + +# Multiple headers just require more entries defined: +# Below would set the headers: +# X-Token: abcdefg +# X-Apprise: is great +# +# Assuming our {hostname} is localhost +# Assuming our {port} is 8080 +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + "xml://localhost:8080/path/?+X-Token=abcdefg&+X-Apprise=is%20great" +``` \ No newline at end of file diff --git a/content/en/docs/Integrations/.Notifications/Notify_apprise_api.md b/content/en/docs/Integrations/.Notifications/Notify_apprise_api.md new file mode 100644 index 00000000..a84a1dea --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_apprise_api.md @@ -0,0 +1,74 @@ +## Apprise API Notifications +* **Source**: https://github.com/caronc/apprise-api +* **Icon Support**: No +* **Message Format**: Text +* **Message Limit**: 32768 Characters per message + +The idea is to allow users to use Apprise and hit an existing Apprise API server. + +### Syntax +The syntax is as follows: +- `apprise://{host}/{token}` +- `apprise://{host}:{port}/{token}` +- `apprise://{user}@{host}:{port}/{token}` +- `apprise://{user}:{password}@{host}:{port}/{token}` + +For a secure connection, just use `apprises` instead. +- `apprises://{host}/{token}` +- `apprises://{host}:{port}/{token}` +- `apprises://{user}@{host}:{port}/{token}` +- `apprises://{user}:{password}@{host}:{port}/{token}` + + +### Parameter Breakdown +| Variable | Required | Description +| ----------- | -------- | ----------- +| hostname | Yes | The Web Server's hostname +| port | No | The port our Web server is listening on. By default the port is **80** for **apprise://** and **443** for all **apprises://** references.
+| user | No | If your system is set up to use HTTP-AUTH, you can provide a _username_ for authentication to it. +| password | No | If your system is set up to use HTTP-AUTH, you can provide a _password_ for authentication to it. +| tags | No | You can optionally set the tags you want to supply with your call to the Apprise API server. + +#### Example +Send a notification along to an Apprise API server listening on port 80: +```bash +# Assuming our {hostname} is apprise.server.local +# Assuming our {token} is token +apprise apprise://apprise.server.local/token +``` + +### Header Manipulation +Some users may require special HTTP headers to be present when they post their data to their server. This can be accomplished by just sticking a plus symbol (**+**) in front of any parameter you specify on your URL string. +```bash +# Below would set the header: +# X-Token: abcdefg +# +# Assuming our {hostname} is localhost +# Assuming our {port} is 8080 +# Assuming our {token} is apprise +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + "apprise://localhost:8080/apprise/?+X-Token=abcdefg" + +# Multiple headers just require more entries defined: +# Below would set the headers: +# X-Token: abcdefg +# X-Apprise: is great +# +# Assuming our {hostname} is localhost +# Assuming our {port} is 8080 +# Assuming our {token} is apprise +# In this example we allow for a custom URL path to be defined +# in the event we're hosting our Apprise API here instead +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + "apprise://localhost:8080/path/apprise/?+X-Token=abcdefg&+X-Apprise=is%20great" +``` + +**Note:** this service is a little redundant because you can already use the CLI and point its configuration to an existing Apprise API server (using `--config` on the CLI or the `AppriseConfig()` class via its own internal API).
+```bash +# A simple example of the Apprise CLI using a Config file instead: +# pulling down previously stored configuration +# Assuming our {hostname} is localhost +# Assuming our {port} is 8080 +# Assuming our {token} is apprise +apprise --body="test message" --config=http://localhost:8080/get/apprise +``` \ No newline at end of file diff --git a/content/en/docs/Integrations/.Notifications/Notify_boxcar.md b/content/en/docs/Integrations/.Notifications/Notify_boxcar.md new file mode 100644 index 00000000..24d19c92 --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_boxcar.md @@ -0,0 +1,50 @@ +--- +title: "Boxcar Notifications" +linkTitle: "Boxcar Notifications" +weight: -8 +description: >- + The Apprise Guide provides an overview of setting up notification channels and the specifics of each channel type. +--- + +## Boxcar Notifications +* **Source**: https://boxcar.io/ +* **Icon Support**: No +* **Message Format**: Text +* **Message Limit**: 10000 Characters per Message + +These days Boxcar only offers a development platform; you can no longer receive notifications through your Apple or Android devices. You can, however, still sign up for an account [on their website](https://boxcar.io/) and create projects through them. + +Each _project_ you create with them will grant you access to your own unique **Access Key** and **Secret Key**. Knowing these 2 values, you can post notifications.
+ +### Syntax +Valid authentication syntaxes are as follows: +* `boxcar://{access_key}/{secret_key}` + +Tags support: +* `boxcar://{access_key}/{secret_key}/@{tag_id}` +* `boxcar://{access_key}/{secret_key}/@{tag_id01}/@{tag_id02}/@{tag_idNN}` + +Device Tokens: +* `boxcar://{access_key}/{secret_key}/{device_id}` +* `boxcar://{access_key}/{secret_key}/{device_id01}/{device_id02}/{device_idNN}` + +You can also form any combination of the above and perform updates from one URL: +* `boxcar://{access_key}/{secret_key}/@{tag_id}/{device_id}` + +### Parameter Breakdown +| Variable | Required | Description +| ----------- | -------- | ----------- +| access_key | Yes | This is required for your account to work. You will be provided one from Boxcar's website upon creating an account with them. +| secret_key | Yes | This is required for your account to work. You will be provided one from Boxcar's website upon creating an account with them. +| device_id | No | Associated devices with your Boxcar setup. All _device_ids_ are 64 characters in length. +| tag_id | No | Tags must be prefixed with an @ symbol or they will be interpreted as a _device_id_ and/or _alias_.
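For instance, a combined tag and device URL could be pieced together like so (the keys, the `@devops` tag, and the `{device_id}` placeholder below are purely hypothetical):

```shell
# Hypothetical Boxcar project credentials
ACCESS_KEY="pJz1KEP5zGo9KwDnIb-7_Kab"
SECRET_KEY="j300012fl9y0b5AW9g9Nsejb8P"

# Combine a tag (note the @ prefix) and a device ID placeholder in one URL
URL="boxcar://${ACCESS_KEY}/${SECRET_KEY}/@devops/{device_id}"
echo "${URL}"
```

The resulting URL can then be handed to `apprise` just like the single-target examples.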
+ +#### Example +Send a Boxcar notification to all devices associated with a project: +```bash +# Assume: +# - our {access_key} is pJz1KEP5zGo9KwDnIb-7_Kab +# - our {secret_key} is j300012fl9y0b5AW9g9Nsejb8P +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + boxcar://pJz1KEP5zGo9KwDnIb-7_Kab/j300012fl9y0b5AW9g9Nsejb8P +``` diff --git a/content/en/docs/Integrations/.Notifications/Notify_clicksend.md b/content/en/docs/Integrations/.Notifications/Notify_clicksend.md new file mode 100644 index 00000000..f403e5a4 --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_clicksend.md @@ -0,0 +1,34 @@ +## ClickSend +* **Source**: https://clicksend.com +* **Icon Support**: No +* **Message Format**: Text +* **Message Limit**: 160 Characters per message + +### Syntax +Valid syntaxes are as follows: +* `clicksend://{user}:{password}@{PhoneNo}` +* `clicksend://{user}:{password}@{PhoneNo1}/{PhoneNo2}/{PhoneNoN}` + +### Parameter Breakdown +| Variable | Required | Description +| --------------- | -------- | ----------- +| user | Yes | The _username_ associated with your ClickSend account. +| password | Yes | The _password_ associated with your ClickSend account. +| PhoneNo | Yes | At least one phone number MUST be identified to use this plugin. This field is also very friendly and supports brackets, spaces and hyphens in the event you want to format the number in an easy to read fashion. +| batch | No | ClickSend allows a batch mode. If you identify more than one phone number, you can send all of the phone numbers you identify on the URL in a single shot instead of the normal _Apprise_ approach (which sends them one by one). Enabling batch mode has both its pros and cons. By default batch mode is disabled.
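If you want to experiment with batch mode, it is toggled through a query parameter on the URL. A rough sketch of what that might look like (the `batch=yes` flag, credentials, and numbers here are illustrative assumptions):

```shell
# Hypothetical ClickSend credentials and two US phone numbers
USER="l2g"
PASS="appriseIsAwesome"

# batch=yes asks the plugin to deliver to all targets in a single API call
URL="clicksend://${USER}:${PASS}@18005551223/18005551224?batch=yes"
echo "${URL}"
```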
+ +#### Example +Send a ClickSend Notification as an SMS: +```bash +# Assuming our {user} is l2g +# Assuming our {password} is appriseIsAwesome +# Assuming our {PhoneNo} - is in the US somewhere making our country code +1 +# - identifies as 800-555-1223 +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + clicksend://l2g:appriseIsAwesome@18005551223 + +# the following would also have worked (spaces, brackets, +# dashes are accepted in a phone no field): +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + "clicksend://l2g:appriseIsAwesome@1-(800) 555-1223" +``` \ No newline at end of file diff --git a/content/en/docs/Integrations/.Notifications/Notify_d7networks.md b/content/en/docs/Integrations/.Notifications/Notify_d7networks.md new file mode 100644 index 00000000..8878e031 --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_d7networks.md @@ -0,0 +1,41 @@ +## Direct 7 (D7) Networks +* **Source**: https://d7networks.com +* **Icon Support**: No +* **Message Format**: Text +* **Message Limit**: 160 Characters per message + +### Account Setup +To use this service you will need a D7 Networks account from their [website](https://d7networks.com/). + +After you've established your account you can get your API login credentials (both _user_ and _password_) from the API Details section within your [account profile area](https://d7networks.com/accounts/profile/). + +### Syntax +Valid syntaxes are as follows: +* `d7sms://{user}:{password}@{PhoneNo}` +* `d7sms://{user}:{password}@{PhoneNo1}/{PhoneNo2}/{PhoneNoN}` + +### Parameter Breakdown +| Variable | Required | Description +| --------------- | -------- | ----------- +| user | Yes | The _username_ associated with your D7 Networks account. This is available to you via the **API Details** section within your [account profile area](https://d7networks.com/accounts/profile/). +| password | Yes | The _API Secret_ associated with your D7 Networks account.
This is available to you via the **API Details** section within your [account profile area](https://d7networks.com/accounts/profile/). +| PhoneNo | Yes | At least one phone number MUST be identified to use this plugin. This field is also very friendly and supports brackets, spaces and hyphens in the event you want to format the number in an easy to read fashion. +| from | No | The originating address. In cases where rewriting of the sender's address is supported or permitted by the SMS-C, this number is transmitted as the originating address; it is completely optional. +| priority | No | By default a priority of zero (0) is set (low). You can set 0, 1, 2, or 3 if you wish to adjust this value, where 3 represents a high priority. +| batch | No | D7 Networks allows a batch mode. If you identify more than one phone number, you can send all of the phone numbers you identify on the URL in a single shot instead of the normal _Apprise_ approach (which sends them one by one). Enabling batch mode has both its pros and cons. By default batch mode is disabled.
+ +#### Example +Send a D7 Network Notification as an SMS: +```bash +# Assuming our {user} is l2g +# Assuming our {password} is appriseIsAwesome +# Assuming our {PhoneNo} - is in the US somewhere making our country code +1 +# - identifies as 800-555-1223 +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + d7sms://l2g:appriseIsAwesome@18005551223 + +# the following would also have worked (spaces, brackets, +# dashes are accepted in a phone no field): +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + "d7sms://l2g:appriseIsAwesome@1-(800) 555-1223" +``` \ No newline at end of file diff --git a/content/en/docs/Integrations/.Notifications/Notify_dapnet.md b/content/en/docs/Integrations/.Notifications/Notify_dapnet.md new file mode 100644 index 00000000..4f743136 --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_dapnet.md @@ -0,0 +1,63 @@ +## DAPNET/Hampager Notifications +* **Source**: https://hampager.de/ +* **Icon Support**: No +* **Message Format**: Text +* **Message Limit**: 80 Characters per message + +![apprise](https://user-images.githubusercontent.com/76180229/147219640-6ce23b59-bc12-4a30-b5f2-f4d4d2d3fd3c.jpg) + +### Account Setup +Make sure you register your Amateur Radio Call Sign and create an account with [Hampager](https://hampager.de). + +### Syntax +Valid syntaxes are as follows: +* `dapnet://{userid}:{password}@{callsign}` +* `dapnet://{userid}:{password}@{callsign1}/{callsign2}/{callsignN}/` + +### Parameter Breakdown +| Variable | Required | Description +| ----------- | -------- | ----------- +| callsign | Yes | One or more Amateur Radio call signs are required to send a notification. +| userid | Yes | Your [Hampager](https://hampager.de) account login +| password | Yes | Your [Hampager](https://hampager.de) account password +| priority | No | The message priority; if this isn't specified then `normal` is used by default. The possible options are `emergency` and `normal`.
+| txgroups | No | The transmitter group(s) to associate with your message. Use a comma (`,`) to identify more than one. By default if this value isn't specified then the group `dl-all` is used. +| batch | No | [Hampager](https://hampager.de) allows for a batch mode. If you identify more than one call sign, you can send all of them in a single shot instead of the normal Apprise approach (which sends them one by one). Enabling batch mode has both its pros and cons. By default batch mode is disabled. + +### Constraints + +* The DAPNET API permits you to specify more than one target call sign. Any unknown or invalid call sign in that list will [terminate the whole message broadcast for all call signs](https://hampager.de/dokuwiki/doku.php?id=dapnetapisendcall) +* If the message exceeds 80 characters, the plugin will automatically truncate the content to DAPNET's max message length +* If you specify an Apprise 'title' parameter, Apprise will automatically add that title to the message body along with a trailing ``\r\n`` control sequence, which may result in an undesired experience. It is recommended to refrain from using Apprise's 'title' parameter. +* For messages, it is recommended to stick to the English alphabet as DAPNET cannot process extended character sets like the Cyrillic alphabet. The DAPNET API will still process messages with this content but the user's pager may not display them in a proper format. +* In order to gain access to the DAPNET API, you need to be a licensed ham radio operator.
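The truncation constraint above is easy to reason about: only the first 80 characters of the body survive. A quick shell illustration (this mimics the behaviour; it is not the plugin's actual code):

```shell
# Build a 120-character message body
BODY="$(printf 'x%.0s' $(seq 1 120))"

# DAPNET caps messages at 80 characters; this is what would be broadcast
TRUNCATED="$(printf '%s' "${BODY}" | cut -c1-80)"
echo "${#TRUNCATED}"   # prints 80
```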
+ +### Example + +Send a DAPNET Notification: + +```bash +# Assuming our {user} is df1abc +# Assuming our {password} is appriseIsAwesome +# Assuming our {callsign} - df1def +# +apprise -vv -b "Test Message Body" \ + "dapnet://df1abc:appriseIsAwesome@df1def" + +# Assuming our {user} is df1abc +# Assuming our {password} is appriseIsAwesome +# Assuming our {callsign}s are - df1def, df1ghi and df1def-12 +# This will result in two target call signs as the plugin is going +# to strip the '-12' ssid and detect the duplicate call sign +# +apprise -vv -b "Test Message Body" \ + dapnet://df1abc:appriseIsAwesome@df1def/df1ghi/df1def-12 + +# Assuming our {user} is df1abc +# Assuming our {password} is test +# Assuming our {callsign} - df1def +# Assuming our {priority} - emergency +# Assuming our {txgroups} - 'dl-all', 'all' +apprise -vv -b "Test Message Body" \ + "dapnet://df1abc:test@df1def?txgroups=dl-all,all&priority=emergency" +``` \ No newline at end of file diff --git a/content/en/docs/Integrations/.Notifications/Notify_dbus.md b/content/en/docs/Integrations/.Notifications/Notify_dbus.md new file mode 100644 index 00000000..502843d1 --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_dbus.md @@ -0,0 +1,32 @@ +## DBus Desktop Notifications +* **Source**: n/a +* **Icon Support**: Yes +* **Message Format**: Text +* **Message Limit**: 250 Characters per message + +Display notifications right on your Gnome or KDE desktop. This only works if you're sending the notification to the same system you're currently accessing. Hence this notification cannot be sent from one PC to another. + +This plugin is based on lower-level calls similar to those used by the _notify-send_ tool that ships with some Linux distributions. It taps into the _Desktop Bus_ (DBus) and directly writes the message for QT and GLib Desktop notifications.
+ +### Syntax +There are currently no options you can specify for this kind of notification, so it's really easy to reference: +* **dbus**:// + * This is probably the best use of this plugin as it will attempt to connect to a QT DBus (usually KDE based) if it can; otherwise it will fall back to a GLib DBus (usually Gnome/Unity based). + +* **qt**:// + * This will explicitly only attempt to access the QT DBus (even if the GLib one is present). +* **kde**:// + * This is just an alias to qt:// for simplicity purposes. Like qt://, this explicitly only attempts to access the QT DBus (even if the GLib one is present). +* **glib**:// + * This will explicitly only attempt to access the GLib DBus (even if the QT one is present). A gnome:// alias was not created as Gnome support is already handled using a more mature/newer approach defined [[here|Notify_gnome]]. + +### Parameter Breakdown +There are no parameters at this time. + +#### Example +Assuming we're on an OS that allows us to host the Gnome Desktop, we can send a notification to ourselves like so: +```bash +# Send ourselves a DBus related desktop notification +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + dbus:// +``` \ No newline at end of file diff --git a/content/en/docs/Integrations/.Notifications/Notify_dingtalk.md b/content/en/docs/Integrations/.Notifications/Notify_dingtalk.md new file mode 100644 index 00000000..1b3e985e --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_dingtalk.md @@ -0,0 +1,38 @@ +## DingTalk +* **Source**: https://www.dingtalk.com/ +* **Icon Support**: No +* **Message Format**: Text +* **Message Limit**: 160 Characters per message + +### Account Setup +To use DingTalk, you will need to acquire your _API Key_.
+ +### Syntax +Valid syntax is as follows: + +* `dingtalk://{ApiKey}/{ToPhoneNo}` +* `dingtalk://{ApiKey}/{ToPhoneNo1}/{ToPhoneNo2}/{ToPhoneNoN}` +* `dingtalk://{Secret}@{ApiKey}/{ToPhoneNo}` +* `dingtalk://{Secret}@{ApiKey}/{ToPhoneNo1}/{ToPhoneNo2}/{ToPhoneNoN}` + +### Parameter Breakdown +| Variable | Required | Description +| --------------- | -------- | ----------- +| ApiKey | Yes | The _API Key_ associated with your DingTalk account. This is available to you via the DingTalk Dashboard. +| ToPhoneNo | No | A phone number to send your notification to. +| Secret | No | The optional secret key to associate with the message signing. + +#### Example +Send a DingTalk Notification as an SMS: +```bash +# Assuming our {APIKey} is gank339l7jk3cjaE +# Assuming our {ToPhoneNo} - is in the US somewhere making our country code +1 +# - identifies as 1-123-555-1223 +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + dingtalk://gank339l7jk3cjaE/11235551223 + +# the following would also have worked (spaces, brackets, +# dashes are accepted in a phone no field): +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + "dingtalk://gank339l7jk3cjaE/1-(123) 555-1223" +``` \ No newline at end of file diff --git a/content/en/docs/Integrations/.Notifications/Notify_discord.md b/content/en/docs/Integrations/.Notifications/Notify_discord.md new file mode 100644 index 00000000..bd436bb6 --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_discord.md @@ -0,0 +1,61 @@ +## Discord Notifications +* **Source**: https://discordapp.com/ +* **Icon Support**: Yes +* **Attachment Support**: Yes +* **Message Format**: Text +* **Message Limit**: 2000 Characters per message + +### Account Setup +Creating a Discord account is easy. The only part that requires a little bit of extra work comes once you've got a channel set up (by default Discord puts you in a #General channel). Click on the Gear icon (Settings), and from here you need to enable webhooks.
+ +The webhook will end up looking something like this: +```https://discordapp.com/api/webhooks/4174216298/JHMHI8qBe7bk2ZwO5U711o3dV_js``` + +This effectively equates to: +```https://discordapp.com/api/webhooks/{WebhookID}/{WebhookToken}``` + +**Note:** Apprise supports this URL _as-is_ (_as of v0.7.7_); you no longer need to parse the URL any further. However there is slightly less overhead (internally) if you do. + +The last part of the URL you're given makes up the 2 tokens you need to send notifications with. With respect to the above example the tokens are as follows: +1. **WebhookID** is ```4174216298``` +2. **WebhookToken** is ```JHMHI8qBe7bk2ZwO5U711o3dV_js``` + +### Syntax +Valid syntaxes are as follows: +* `https://discordapp.com/api/webhooks/{WebhookID}/{WebhookToken}` +* `discord://{WebhookID}/{WebhookToken}/` +* `discord://{user}@{WebhookID}/{WebhookToken}/` + +Discord also supports a variety of arguments; the below identifies the defaults, which therefore do not need to be specified unless you want to override them: +* `discord://{WebhookID}/{WebhookToken}/?tts=No&avatar=Yes&footer=No&image=Yes` + +### Parameter Breakdown +| Variable | Required | Description +| ----------- | -------- | ----------- +| WebhookID | Yes | The first of 2 tokens provided to you after creating an *incoming-webhook* +| WebhookToken| Yes | The second of 2 tokens provided to you after creating an *incoming-webhook* +| user | No | Identify the name of the bot that should issue the message. If one isn't specified then the default is to just use your account (associated with the *incoming-webhook*).
+| tts | No | Enable Text-To-Speech (by default it is set to **No**) +| footer | No | Include a message footer (by default it is set to **No**) +| image | No | Include an image in-line with the message describing the notification type (by default it is set to **Yes**) +| avatar | No | Override the default Discord avatar icon and replace it with one identifying the notification type (by default it is set to **Yes**) +| avatar_url | No | Override the default Discord avatar icon URL. By default this is not set and Apprise chooses the URL dynamically based on the type of message (info, success, warning, or error). +| format | No | The default value of this is _text_. But if you plan on managing the format yourself, you can optionally set this to _markdown_. If the mode is set to markdown, Apprise will scan for header entries (usually on lines by themselves surrounded by hashtags (#)) and will place these inside embedded objects. This is done to give a nicer presentation. + +#### Example +Send a Discord notification: +```bash +# Assuming our {WebhookID} is 4174216298 +# Assuming our {WebhookToken} is JHMHI8qBe7bk2ZwO5U711o3dV_js +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + discord://4174216298/JHMHI8qBe7bk2ZwO5U711o3dV_js +``` + +If you want to have your own custom avatar URL you're already hosting from another website, you could set the following: +```bash +# Assuming our {WebhookID} is 4174216298 +# Assuming our {WebhookToken} is JHMHI8qBe7bk2ZwO5U711o3dV_js +# Assuming our {AvatarURL} is https://i.imgur.com/FsEpmwg.jpeg +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + "discord://4174216298/JHMHI8qBe7bk2ZwO5U711o3dV_js?avatar_url=https://i.imgur.com/FsEpmwg.jpeg" +``` diff --git a/content/en/docs/Integrations/.Notifications/Notify_email-Fastmail.md b/content/en/docs/Integrations/.Notifications/Notify_email-Fastmail.md new file mode 100644 index 00000000..79de3b83 --- /dev/null +++ 
b/content/en/docs/Integrations/.Notifications/Notify_email-Fastmail.md @@ -0,0 +1,120 @@ +The following Fastmail Email servers are supported (based on what was available on Dec 13th, 2018): + +| Email Domain | Apprise Syntax | +| -------- | -------- | +|**fastmail.com** | mailto://{user}:{app-password}@fastmail.com| +|**fastmail.cn** | mailto://{user}:{app-password}@fastmail.cn| +|**fastmail.co.uk** | mailto://{user}:{app-password}@fastmail.co.uk| +|**fastmail.com.au** | mailto://{user}:{app-password}@fastmail.com.au| +|**fastmail.de** | mailto://{user}:{app-password}@fastmail.de| +|**fastmail.es** | mailto://{user}:{app-password}@fastmail.es| +|**fastmail.fm** | mailto://{user}:{app-password}@fastmail.fm| +|**fastmail.fr** | mailto://{user}:{app-password}@fastmail.fr| +|**fastmail.im** | mailto://{user}:{app-password}@fastmail.im| +|**fastmail.in** | mailto://{user}:{app-password}@fastmail.in| +|**fastmail.jp** | mailto://{user}:{app-password}@fastmail.jp| +|**fastmail.mx** | mailto://{user}:{app-password}@fastmail.mx| +|**fastmail.net** | mailto://{user}:{app-password}@fastmail.net| +|**fastmail.nl** | mailto://{user}:{app-password}@fastmail.nl| +|**fastmail.org** | mailto://{user}:{app-password}@fastmail.org| +|**fastmail.se** | mailto://{user}:{app-password}@fastmail.se| +|**fastmail.to** | mailto://{user}:{app-password}@fastmail.to| +|**fastmail.tw** | mailto://{user}:{app-password}@fastmail.tw| +|**fastmail.uk** | mailto://{user}:{app-password}@fastmail.uk| +|**fastmail.us** | mailto://{user}:{app-password}@fastmail.us| +|**123mail.org** | mailto://{user}:{app-password}@123mail.org| +|**airpost.net** | mailto://{user}:{app-password}@airpost.net| +|**eml.cc** | mailto://{user}:{app-password}@eml.cc| +|**fmail.co.uk** | mailto://{user}:{app-password}@fmail.co.uk| +|**fmgirl.com** | mailto://{user}:{app-password}@fmgirl.com| +|**fmguy.com** | mailto://{user}:{app-password}@fmguy.com| +|**mailbolt.com** | mailto://{user}:{app-password}@mailbolt.com| 
+|**mailcan.com** | mailto://{user}:{app-password}@mailcan.com| +|**mailhaven.com** | mailto://{user}:{app-password}@mailhaven.com| +|**mailmight.com** | mailto://{user}:{app-password}@mailmight.com| +|**ml1.net** | mailto://{user}:{app-password}@ml1.net| +|**mm.st** | mailto://{user}:{app-password}@mm.st| +|**myfastmail.com** | mailto://{user}:{app-password}@myfastmail.com| +|**proinbox.com** | mailto://{user}:{app-password}@proinbox.com| +|**promessage.com** | mailto://{user}:{app-password}@promessage.com| +|**rushpost.com** | mailto://{user}:{app-password}@rushpost.com| +|**sent.as** | mailto://{user}:{app-password}@sent.as| +|**sent.at** | mailto://{user}:{app-password}@sent.at| +|**sent.com** | mailto://{user}:{app-password}@sent.com| +|**speedymail.org** | mailto://{user}:{app-password}@speedymail.org| +|**warpmail.net** | mailto://{user}:{app-password}@warpmail.net| +|**xsmail.com** | mailto://{user}:{app-password}@xsmail.com| +|**150mail.com** | mailto://{user}:{app-password}@150mail.com| +|**150ml.com** | mailto://{user}:{app-password}@150ml.com| +|**16mail.com** | mailto://{user}:{app-password}@16mail.com| +|**2-mail.com** | mailto://{user}:{app-password}@2-mail.com| +|**4email.net** | mailto://{user}:{app-password}@4email.net| +|**50mail.com** | mailto://{user}:{app-password}@50mail.com| +|**allmail.net** | mailto://{user}:{app-password}@allmail.net| +|**bestmail.us** | mailto://{user}:{app-password}@bestmail.us| +|**cluemail.com** | mailto://{user}:{app-password}@cluemail.com| +|**elitemail.org** | mailto://{user}:{app-password}@elitemail.org| +|**emailcorner.net** | mailto://{user}:{app-password}@emailcorner.net| +|**emailengine.net** | mailto://{user}:{app-password}@emailengine.net| +|**emailengine.org** | mailto://{user}:{app-password}@emailengine.org| +|**emailgroups.net** | mailto://{user}:{app-password}@emailgroups.net| +|**emailplus.org** | mailto://{user}:{app-password}@emailplus.org| +|**emailuser.net** | 
mailto://{user}:{app-password}@emailuser.net| +|**f-m.fm** | mailto://{user}:{app-password}@f-m.fm| +|**fast-email.com** | mailto://{user}:{app-password}@fast-email.com| +|**fast-mail.org** | mailto://{user}:{app-password}@fast-mail.org| +|**fastem.com** | mailto://{user}:{app-password}@fastem.com| +|**fastemail.us** | mailto://{user}:{app-password}@fastemail.us| +|**fastemailer.com** | mailto://{user}:{app-password}@fastemailer.com| +|**fastest.cc** | mailto://{user}:{app-password}@fastest.cc| +|**fastimap.com** | mailto://{user}:{app-password}@fastimap.com| +|**fastmailbox.net** | mailto://{user}:{app-password}@fastmailbox.net| +|**fastmessaging.com** | mailto://{user}:{app-password}@fastmessaging.com| +|**fea.st** | mailto://{user}:{app-password}@fea.st| +|**fmailbox.com** | mailto://{user}:{app-password}@fmailbox.com| +|**ftml.net** | mailto://{user}:{app-password}@ftml.net| +|**h-mail.us** | mailto://{user}:{app-password}@h-mail.us| +|**hailmail.net** | mailto://{user}:{app-password}@hailmail.net| +|**imap-mail.com** | mailto://{user}:{app-password}@imap-mail.com| +|**imap.cc** | mailto://{user}:{app-password}@imap.cc| +|**imapmail.org** | mailto://{user}:{app-password}@imapmail.org| +|**inoutbox.com** | mailto://{user}:{app-password}@inoutbox.com| +|**internet-e-mail.com** | mailto://{user}:{app-password}@internet-e-mail.com| +|**internet-mail.org** | mailto://{user}:{app-password}@internet-mail.org| +|**internetemails.net** | mailto://{user}:{app-password}@internetemails.net| +|**internetmailing.net** | mailto://{user}:{app-password}@internetmailing.net| +|**jetemail.net** | mailto://{user}:{app-password}@jetemail.net| +|**justemail.net** | mailto://{user}:{app-password}@justemail.net| +|**letterboxes.org** | mailto://{user}:{app-password}@letterboxes.org| +|**mail-central.com** | mailto://{user}:{app-password}@mail-central.com| +|**mail-page.com** | mailto://{user}:{app-password}@mail-page.com| +|**mailandftp.com** | 
mailto://{user}:{app-password}@mailandftp.com| +|**mailas.com** | mailto://{user}:{app-password}@mailas.com| +|**mailc.net** | mailto://{user}:{app-password}@mailc.net| +|**mailforce.net** | mailto://{user}:{app-password}@mailforce.net| +|**mailftp.com** | mailto://{user}:{app-password}@mailftp.com| +|**mailingaddress.org** | mailto://{user}:{app-password}@mailingaddress.org| +|**mailite.com** | mailto://{user}:{app-password}@mailite.com| +|**mailnew.com** | mailto://{user}:{app-password}@mailnew.com| +|**mailsent.net** | mailto://{user}:{app-password}@mailsent.net| +|**mailservice.ms** | mailto://{user}:{app-password}@mailservice.ms| +|**mailup.net** | mailto://{user}:{app-password}@mailup.net| +|**mailworks.org** | mailto://{user}:{app-password}@mailworks.org| +|**mymacmail.com** | mailto://{user}:{app-password}@mymacmail.com| +|**nospammail.net** | mailto://{user}:{app-password}@nospammail.net| +|**ownmail.net** | mailto://{user}:{app-password}@ownmail.net| +|**petml.com** | mailto://{user}:{app-password}@petml.com| +|**postinbox.com** | mailto://{user}:{app-password}@postinbox.com| +|**postpro.net** | mailto://{user}:{app-password}@postpro.net| +|**realemail.net** | mailto://{user}:{app-password}@realemail.net| +|**reallyfast.biz** | mailto://{user}:{app-password}@reallyfast.biz| +|**reallyfast.info** | mailto://{user}:{app-password}@reallyfast.info| +|**speedpost.net** | mailto://{user}:{app-password}@speedpost.net| +|**ssl-mail.com** | mailto://{user}:{app-password}@ssl-mail.com| +|**swift-mail.com** | mailto://{user}:{app-password}@swift-mail.com| +|**the-fastest.net** | mailto://{user}:{app-password}@the-fastest.net| +|**the-quickest.com** | mailto://{user}:{app-password}@the-quickest.com| +|**theinternetemail.com** | mailto://{user}:{app-password}@theinternetemail.com| +|**veryfast.biz** | mailto://{user}:{app-password}@veryfast.biz| +|**veryspeedy.net** | mailto://{user}:{app-password}@veryspeedy.net| +|**yepmail.net** | 
mailto://{user}:{app-password}@yepmail.net| diff --git a/content/en/docs/Integrations/.Notifications/Notify_email.md b/content/en/docs/Integrations/.Notifications/Notify_email.md new file mode 100644 index 00000000..42bad283 --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_email.md @@ -0,0 +1,122 @@ +## E-Mail Notifications +* **Source**: n/a +* **Icon Support**: No +* **Attachment Support**: Yes +* **Message Format**: Text +* **Message Limit**: 32768 Characters per message + +## Using Built-In Email Services +If you are using one of the following Built-In E-Mail services, then setting up this notification service has never been easier. If your provider isn't on the list and you'd like to request it, just [open up a ticket](https://github.com/caronc/apprise/issues) and let me know. The alternative to the list below is to use a custom email server configuration; these are a little bit more complicated to set up, but still work great. Custom email configuration is discussed in the [next section](https://github.com/caronc/apprise/wiki/Notify_email#using-custom-servers-syntax).
+ +The following syntax works right out of the box: +* mailto://{user}:{password}@**yahoo.com** +* mailto://{user}:{password}@**hotmail.com** +* mailto://{user}:{password}@**live.com** +* mailto://{user}:{password}@**prontomail.com** +* mailto://{user}:{password}@**gmail.com** +* mailto://{user}:{app-password}@**fastmail.com** +* mailto://{user}:{password}@**zoho.com** +* mailto://{user}:{password}@**yandex.com** +* mailto://{user}:{password}@**sendgrid.com**?from=noreply@{validated_domain} +* mailto://{user}:{password}@**qq.com** +* mailto://{user}:{password}@**163.com** + +Secure connections are always implied whether you choose to use **mailto://** or **mailtos://** + +**Note** Google Users using the 2 Step Verification Process will be required to generate an **app-password** from [here](https://security.google.com/settings/security/apppasswords) that you can use in the {password} field. + +**Note** Fastmail Users are required to generate a custom App password before a 3rd party tool (like this one) can connect and send email. You must assign the _SMTP_ option to the new App you generate. The Fastmail portion of this plugin currently supports [[the following 116 domains|Notify_email/Fastmail]]. Just make sure you identify the email address you're using when you build the mailto:// URL and everything will work as intended. + +**Note** SendGrid users just need to be sure to use a Validated Domain (through their service) as part of the required **from=** email address (on the URL) or it will not work. It's additionally worth pointing out that [[sendgrid://|Notify_sendgrid]] has its own separate integration as well if you do not need to use the SMTP service. + +## Using Custom Servers Syntax +If you're using your own SMTP Server, or one that simply isn't in the *Built-In* list defined in the previous section, then things get a wee-bit more complicated.
+ +First off, secure vs insecure emails are defined by **mailto://** (port 25) and **mailtos://** (port 587) where **mailtos://** will enable TLS prior to sending the user and password. + +Here are some more syntax examples you can use when taking the custom approach: +* **mailto**://**{user}**:**{password}**@**{domain}** +* **mailto**://**{user}**:**{password}**@**{domain}**:**{port}**?smtp=**{smtp_server}** +* **mailto**://**{user}**:**{password}**@**{domain}**:**{port}**?from=**{from_email}**&name=**{from_name}** + +Using a local relay server that does not require authentication? No problem, use this: +* **mailto**://**{domain}**:**{port}**?from=**{from_email}**&to=**{to_email}** + +Some mail servers will require your {user} to be your full email address. In these cases, you'll need to specify your username in the url as an attribute like so: +* **mailto**://**{password}**@**{domain}**:**{port}**?user=**{user}** + +#### Custom Syntax Examples +If your SMTP server is identified by a different hostname than what is identified by the suffix of your email, then you'll need to specify it as an argument; for example: +* **mailtos**://**user**:**password**@**server.com**?smtp=**smtp.server.com** + +If you want to adjust the email's *ReplyTo* address, then you can do the following: +* **mailtos**://**user**:**password**@**server.com**?smtp=**smtp.server.com**&from=**noreply@server.com** + +You can also adjust the ReplyTo's Name too: +* **mailtos**://**user**:**password**@**server.com**?smtp=**smtp.server.com**&from=**noreply@server.com**&name=**Optional%20Name** + +To send an email notification via an SMTP server that does not require authentication, simply leave out the user and pass parameters in the URL: +* **mailto**://**server.com**?smtp=**smtp.server.com**&from=**noreply@server.com**&to=**myemail@server.com** + +Since URLs can't have spaces in them, you'll need to use '**%20**' as a place-holder for one (if needed).
In the example above, the email would actually be received as *Optional Name*. + +### Multiple To Addresses +By default your `mailto://` URL effectively works out to be `mailto://user:pass@domain` and therefore attempts to send your email to `user@domain` unless you've otherwise specified a `to=`. But you can actually send an email to more than one address using the same URL. Here are some examples (written slightly differently but accomplishing the same thing) that send an email to more than one address: +* `mailto://user:pass@domain/?to=target@example.com,target2@example.com` +* `mailto://user:pass@domain/target@example.com/target2@example.com` + +There is no limit to the number of addresses you either separate by comma (**,**) and/or add to your `mailto://` path separated by a slash (**/**). + +The Carbon Copy (**cc=**) and Blind Carbon Copy (**bcc=**) entries however are applied to each email sent. Hence if you send an email to 3 target users, the entire *cc* and *bcc* lists will be part of all 3 emails. + +### Parameter Breakdown +| Variable | Required | Description
| ----------- | -------- | ----------- +| user | Yes | The account login to your SMTP server; if this is an email address, you must specify it near the end of the URL as an argument. You can over-ride this by specifying `?user=` on the URL string.
      **Note:** Both the `user` and `pass` are not required if you're using an anonymous login. +| pass | Yes | The password required to send an email via your SMTP Server. You can over-ride this by specifying `?pass=` on the URL string.
      **Note:** Both the `user` and `pass` are not required if you're using an anonymous login. +| domain | Yes | If your email address was **test@example.com** then *example.com* is your domain. You must provide this as part of the URL string! +| port | No | The port your SMTP server is listening on. By default the port is **25** for **mailto://** and **587** for all **mailtos://** references. +| smtp | No | If the SMTP server differs from your specified domain, then you'll want to specify it as an argument in your URL. +| from | No | If you want the email's *ReplyTo* address to be something other than your own email address, then you can specify it here. +| to | No | This will enforce (or set) the address the email is sent to. This is only required in special circumstances; the notification script is usually clever enough to figure this out for you. +| name | No | With respect to {from_email}, this allows you to provide a name with your *ReplyTo* address. +| cc | No | Carbon Copy email address(es). More than one can be separated with a space and/or comma. +| bcc | No | Blind Carbon Copy email address(es). More than one can be separated with a space and/or comma. +| mode | No | This is only referenced if using **mailtos://** (a secure url). The Mode allows you to change the connection method. Some sites only support SSL (mode=**ssl**) while others only support STARTTLS (mode=**starttls**). The default value is **starttls**. + +To eliminate any confusion, any url parameter (key=value) specified will over-ride what was detected in the url; hence: +* `mailto://usera:pass123@domain.com?user=foobar`: the user of `foobar` would over-ride the user `usera` specified. However since the password was not over-ridden, the password of `pass123` would still be used.
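Putting the multi-recipient and carbon-copy parameters above together, here is a sketch using entirely hypothetical credentials and addresses (note the quotes, since the URL contains an ampersand):

```shell
# All credentials and addresses below are placeholders
# Send one email to two recipients and carbon-copy a third
apprise -vv -t "Test Message Title" -b "Test Message Body" \
   "mailto://user:pass@example.com/?to=target@example.com,target2@example.com&cc=copied@example.com"
```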
+ +#### Example +Send an email notification to our hotmail account: +```bash +# It's really easy if you're using a built-in provider +# Built-In providers look after handling the little details such as +# the SMTP server, port, enforcing a secure connection, etc. +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + mailto://example:mypassword@hotmail.com + +# Send an email to a custom provider: +# Assuming the {domain} is example.com +# Assuming the {user} is george +# Assuming the {password} is pass123 +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + mailto://george:pass123@example.com + +# The above URL could also have been written like: +# mailto://example.com?user=george&pass=pass123 + +# In some cases, the {user} is an email address. In this case +# you can place this information in the URL parameters instead: +# Assuming the {domain} is example.com +# Assuming the {user} is george@example.com +# Assuming the {password} is pass123 +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + "mailto://example.com?user=george@example.com&pass=pass123" + +# Note that the ampersand (&) that is used in the URL to separate +# one argument from another is also interpreted by the shell as an +# instruction to run in the background. So to make sure the URL sticks +# together and your terminal doesn't break your URL up, make sure to +# wrap it in quotes!
+``` \ No newline at end of file diff --git a/content/en/docs/Integrations/.Notifications/Notify_emby.md b/content/en/docs/Integrations/.Notifications/Notify_emby.md new file mode 100644 index 00000000..db4c1d00 --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_emby.md @@ -0,0 +1,33 @@ +## Emby Notifications +* **Source**: https://emby.media +* **Icon Support**: No +* **Message Format**: Text +* **Message Limit**: 32768 Characters per message + +### Syntax +Valid syntaxes are as follows: +* `emby://{hostname}` +* `emby://{hostname}:{port}` +* `emby://{userid}:{password}@{hostname}:{port}` +* `embys://{hostname}` +* `embys://{hostname}:{port}` +* `embys://{userid}:{password}@{hostname}:{port}` + +Secure connections (via https) should be referenced using **embys://** whereas insecure connections (via http) should be referenced via **emby://**. + +### Parameter Breakdown +| Variable | Required | Description +| ----------- | -------- | ----------- +| hostname | Yes | The server Emby is listening on. +| port | No | The port Emby is listening on. By default the port is **8096** for both **emby://** and **embys://** references. +| userid | Yes | The account login to your Emby server. +| password | No | The password associated with your Emby Server. +| modal | No | Defines if the notification should appear as a modal type box. By default this is set to No.
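For instance, a secure connection that also authenticates against the server could look like the following sketch (the hostname and credentials are hypothetical):

```shell
# Assuming our {hostname} is emby.server.local
# Assuming our {userid} is l2g and our {password} is pass123
apprise -vv -t "Test Message Title" -b "Test Message Body" \
   embys://l2g:pass123@emby.server.local:8096
```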
+ +#### Example +Send an Emby notification to our server listening on port 8096: +```bash +# Assuming our {hostname} is emby.server.local +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + emby://emby.server.local +``` diff --git a/content/en/docs/Integrations/.Notifications/Notify_enigma2.md b/content/en/docs/Integrations/.Notifications/Notify_enigma2.md new file mode 100644 index 00000000..d8c6a709 --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_enigma2.md @@ -0,0 +1,62 @@ +## Enigma2 Device Notifications +* **Source**: n/a +* **Icon Support**: No +* **Message Format**: XML +* **Message Limit**: 1000 Characters per message + +An [_E2OpenPlugin_](https://github.com/E2OpenPlugins) called [OpenWebif](https://github.com/E2OpenPlugins/e2openplugin-OpenWebif) can allow you to communicate with your Enigma2 devices (such as [Dreambox](http://www.dream-multimedia-tv.de/), [Vu+](http://www.vuplus.com), etc.) using an API. + +Once [OpenWebif](https://github.com/E2OpenPlugins/e2openplugin-OpenWebif) is installed, Apprise can utilize its API to send notifications to your Enigma2 device. + +Installation instructions on how to install OpenWebif onto your Enigma2 device can be found on its [GitHub Page](https://github.com/E2OpenPlugins/e2openplugin-OpenWebif). + +### Syntax +Valid syntaxes are as follows: +* `enigma2://{host}` +* `enigma2://{host}:{port}` +* `enigma2://{user}@{host}` +* `enigma2://{user}@{host}:{port}` +* `enigma2://{user}:{password}@{host}` +* `enigma2://{user}:{password}@{host}:{port}` +* `enigma2s://{host}` +* `enigma2s://{host}:{port}` +* `enigma2s://{user}@{host}` +* `enigma2s://{user}@{host}:{port}` +* `enigma2s://{user}:{password}@{host}` +* `enigma2s://{user}:{password}@{host}:{port}` + +### Parameter Breakdown +| Variable | Required | Description +| ----------- | -------- | ----------- +| hostname | Yes | The Enigma2 device's IP/hostname +| port | No | The port our Web server is listening on.
By default the port is **80** for **enigma2://** and **443** for all **enigma2s://** references. +| user | No | If your system is set up to use HTTP-AUTH, you can provide a _username_ for authentication to it. +| password | No | If your system is set up to use HTTP-AUTH, you can provide a _password_ for authentication to it. +| timeout | No | The number of seconds the delivered notification stays on the screen for. The default value is 13. + +#### Example +Send a notification to our Enigma2 Device: +```bash +# Assuming our {hostname} is dreambox +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + enigma2://dreambox +``` +### Header Manipulation +Some users may require special HTTP headers to be present when they post their data to their server. This can be accomplished by just sticking a hyphen (**-**) in front of any parameter you specify on your URL string. +```bash +# Below would set the header: +# X-Token: abcdefg +# +# Assuming our {hostname} is localhost +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + "enigma2://localhost/?-X-Token=abcdefg" + +# Multiple headers just require more entries defined with a hyphen in front: +# Below would set the headers: +# X-Token: abcdefg +# X-Apprise: is great +# +# Assuming our {hostname} is localhost +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + "enigma2://localhost/path/?-X-Token=abcdefg&-X-Apprise=is%20great" +``` \ No newline at end of file diff --git a/content/en/docs/Integrations/.Notifications/Notify_faast.md b/content/en/docs/Integrations/.Notifications/Notify_faast.md new file mode 100644 index 00000000..5747a270 --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_faast.md @@ -0,0 +1,25 @@ +## Faast Notifications +* **Source**: http://www.faast.io/ +* **Icon Support**: Yes +* **Message Format**: Text +* **Message Limit**: 32768 Characters per message + +There isn't too much configuration for Faast notifications.
The message is basically just passed to your online Faast account and then gets relayed to the device(s) you've set up from there. + +### Syntax +Valid syntax is as follows: +* **faast**://**{authorizationtoken}** + +### Parameter Breakdown +| Variable | Required | Description +| ----------- | -------- | ----------- +| authorizationtoken | Yes | The authorization token associated with your Faast account. +| image | No | Associate an image with the message. By default this is enabled. + +#### Example +Send a Faast notification: +```bash +# Assuming our {authorizationtoken} is abcdefghijklmnop-abcdefg +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + faast://abcdefghijklmnop-abcdefg +``` \ No newline at end of file diff --git a/content/en/docs/Integrations/.Notifications/Notify_fcm.md b/content/en/docs/Integrations/.Notifications/Notify_fcm.md new file mode 100644 index 00000000..308cf473 --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_fcm.md @@ -0,0 +1,77 @@ +## Firebase Cloud Messaging (FCM) +* **Source**: https://firebase.google.com/docs/cloud-messaging +* **Icon Support**: No +* **Message Format**: Text +* **Message Limit**: 5000 Characters per message + +### Account Setup +You'll need to create an account with Google's Firebase Cloud Messaging Service (FCM) first to use this. + +From there you will access the FCM Management Console and choose which mode you wish to leverage when sending your notifications. The modes are **legacy** and **oauth2**. Both have their pros and cons. Depending on which mode you choose, you will be required to construct your Apprise URL slightly differently:
      +![Firebase](https://user-images.githubusercontent.com/850374/106963460-9dd33600-670e-11eb-8aaa-8499121e3147.png) + +### Syntax +Valid syntax is as follows: + +#### Legacy Mode +The legacy mode doesn't seem to suggest it will be decommissioned anytime soon; however, this is how FCM refers to it. This only requires the APIKey generated through the FCM Management Console. + +* `fcm://{APIKey}/{Device}` +* `fcm://{APIKey}/{Device1}/{Device2}/{DeviceN}` +* `fcm://{APIKey}/#{Topic}` +* `fcm://{APIKey}/#{Topic1}/#{Topic2}/#{TopicN}` + +You can mix and match these entries as well: + +* `fcm://{APIKey}/{Device1}/#{Topic1}/` + +#### OAuth2 Mode +The OAuth2 mode is what FCM seems to hint that you use. But it has much more overhead than the legacy way of doing things. It also requires you to point to a specially generated `JSON` file you can generate from your FCM Management Console. + +You can point to the `JSON` file generated locally (if you saved it onto your PC) or refer to it by its web URL (if you're sharing it somewhere on your network) like so: + +* `fcm://{Project}/{Device}/?keyfile=/path/to/keyfile` +* `fcm://{Project}/{Device1}/{Device2}/{DeviceN}/?keyfile=https://user:pass@localhost/web/location` +* `fcm://{Project}/#{Topic}/?keyfile=/path/to/keyfile` +* `fcm://{Project}/#{Topic1}/#{Topic2}/#{TopicN}/?keyfile=https://user:pass@localhost/web/location` + +You can mix and match these entries as well: + +* `fcm://{Project}/{Device1}/#{Topic1}/?keyfile={JSON_KeyFile}` + +### Parameter Breakdown +| Variable | Required | Description +| --------------- | -------- | ----------- +| APIKey | Yes | The generated _API Key_ from the FCM Management Console. This is only required if you intend to use the **Legacy** method. +| Project | Yes | The generated _Project ID_ from the FCM Management Console. This is only required if you intend to use the **OAuth2** method. +| KeyFile | Yes | The location of the _JSON Keyfile_ generated from the FCM Management Console.
This is only required if you intend to use the **OAuth2** method. +| Device | No | The device you wish to send your message to. +| Topic | No | The topic you want to publish your message to. +| mode | No | The mode can be set to either **legacy** or **oauth2**. This is automatically detected depending on what you provide in the Apprise URL. But you can explicitly set this here if you require. +| priority | No | The FCM Priority. By default the priority isn't passed into the payload so it takes on all upstream defaults. Valid options here are `min`, `low`, `normal`, `high`, and `max`. +| image | No | Set this to `yes` if you want to include an image as part of the payload. Depending on your firebase subscription, this may or may not incur charges. By default this is set to `no`. +| image_url | No | Specify your own custom image_url to include as part of the payload. If this is provided, it is assumed `image` is `yes`. You can additionally set `image=no` to enforce that this assumption does not happen. +| color | No | Identify the colour of your notification by specifying your own custom RGB value (in the format `#RRGGBB`, where the hashtag (`#`) is optional). The other options are `yes` and `no`. When set to `no`, the `color` argument simply is not part of the payload at all. When set to `yes` (default), Apprise chooses the color based on the message type (info, warning, etc). + +**Note:** This notification service does not use the title field; only the _body_ is passed along.
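The `priority` and `color` parameters described in the table above can be combined on a single URL; here is a sketch using hypothetical tokens (quoted so the shell leaves the URL intact):

```shell
# Assuming our {APIKey} is bu1dHSdO22pfaaVy (hypothetical)
# Assuming our {Device} is ABCD:12345 (hypothetical)
# Send at high priority with an explicit red notification color
apprise -vv -b "Test Message Body" \
   "fcm://bu1dHSdO22pfaaVy/ABCD:12345/?priority=high&color=ff0000"
```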
+ +#### Example +Send a Legacy FCM notification: +```bash +# Assuming our {APIKey} is bu1dHSdO22pfaaVy +# Assuming our {Device} is ABCD:12345 + +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + "fcm://bu1dHSdO22pfaaVy/ABCD:12345" + +``` + +Send an OAuth2 FCM notification: +```bash +# Assuming our {Project} is Apprise +# Assuming the path to our JSON {Keyfile} is /etc/apprise/fcm/keyfile.json +# Assuming our {Device} is ABCD:12345 + +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + "fcm://Apprise/ABCD:12345/?keyfile=/etc/apprise/fcm/keyfile.json" +``` diff --git a/content/en/docs/Integrations/.Notifications/Notify_flock.md b/content/en/docs/Integrations/.Notifications/Notify_flock.md new file mode 100644 index 00000000..912be33a --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_flock.md @@ -0,0 +1,58 @@ +## Flock Notifications +* **Source**: https://flock.com/ +* **Icon Support**: Yes +* **Message Format**: Text +* **Message Limit**: 32768 Characters per message + +### Account Setup +Flock has a lot of similarities to Slack. Flock notifications require an *incoming-webhook* or an *app/bot* it can connect to. + +## Incoming Webhook + +You can generate an Incoming webhook from [here](https://dev.flock.com/webhooks). Just follow the wizard to pre-determine the channel(s) you want your message to broadcast to. When you've completed this process you will receive a URL that looks something like this: +```https://api.flock.com/hooks/sendMessage/134b8gh0-eba0-4fa9-ab9c-257ced0e8221``` + +This effectively equates to: +```https://api.flock.com/hooks/sendMessage/{token}``` + +**Note:** Apprise supports this URL _as-is_ (_as of v0.7.7_); you no longer need to parse the URL any further. However there is slightly less overhead (internally) if you do.
+ +In this example the token is `134b8gh0-eba0-4fa9-ab9c-257ced0e8221` + +## Bot +Bots are a bit more difficult and presume that you followed their instructions on setting up [your own app](https://docs.flock.com/display/flockos/Creating+an+App#CreatinganApp-HowdoIcreateaFlockOSapp?). Just like a webhook, you'll get your own **{token}** provided to you that allows you to message people and channels directly. + +### Syntax +Valid syntaxes with an *incoming webhook* are: +* `https://api.flock.com/hooks/sendMessage/{token}` +* flock://**{token}**/ +* flock://**{botname}**@**{token}**/ + +Valid syntaxes with an *application / bot* are: +**Note:** the **userid** and **channelid** belong to the actual encoded id and not the public displayed value. For instance; if you have a channel called #general, it will have an encoded id associated with it that might look something like **g:abcd1234defg**. Users are identified in a similar fashion but are prefixed with **u:** instead of **g:**. These are the values you must specify here: +* flock://**{token}**/**u:userid** +* flock://**{botname}**@**{token}**/**u:{user}** +* flock://**{botname}**@**{token}**/**u:{user1}**/**u:{user2}**/**u:{userN}**/ +* flock://**{botname}**@**{token}**/**g:{channel}** +* flock://**{token}**/**g:{channel}** +* flock://**{botname}**@**{token}**/**g:{channel1}**/**g:{channel2}**/**g:{channelN}**/ +* flock://**{botname}**@**{token}**/**g:{channel}**/**u:{user}**/ + + +### Parameter Breakdown +| Variable | Required | Description +| ----------- | -------- | ----------- +| token | Yes | The first part of 3 tokens provided to you after creating an *incoming-webhook* and/or an *application/bot* +| channel | No | Channels must be prefixed with a hash tag **#** or **g:**. They must represent the encoded id of the channel name (not the human readable reference). You can specify as many channels as you want by delimiting each of them by a forward slash (/) in the url.
+| user | No | Users must be prefixed with an at symbol **@** or **u:**! They must represent the encoded id of the user name (not the human readable reference). You can specify as many users as you want by delimiting each of them by a forward slash (/) in the url. +| botname | No | Identify the name of the bot that should issue the message. If one isn't specified then the default is to just use your account (associated with the *incoming-webhook*). +| image | No | Associate an image with the message. By default this is enabled. + +#### Example +Send a flock notification to our channel #nuxref (which is identified as `g:abcd1234efgh`): +```bash +# Assuming our {token} is 134b8gh0-eba0-4fa9-ab9c-257ced0e8221 +# our channel nuxref is represented as g:abcd1234efgh +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + flock://134b8gh0-eba0-4fa9-ab9c-257ced0e8221/g:abcd1234efgh +``` diff --git a/content/en/docs/Integrations/.Notifications/Notify_gitter.md b/content/en/docs/Integrations/.Notifications/Notify_gitter.md new file mode 100644 index 00000000..208f057f --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_gitter.md @@ -0,0 +1,38 @@ +## Gitter Notifications +* **Source**: https://gitter.im/ +* **Icon Support**: Yes +* **Message Format**: Markdown +* **Message Limit**: 32768 Characters per message + +### Account Setup +It isn't too difficult to get yourself a Gitter account [on their website](https://gitter.im/). + +From here, you just need to get your Gitter **Personal Access Token** which is as simple as visiting their [development website](https://developer.gitter.im/apps) and signing in (if you're not already). Almost immediately you should see a pop-up box providing you your token. + +**Note:** You can ignore the App generation feature here as it's not relevant to sending an Apprise notification. + +The last thing you need to know about this is you need to have already joined the channel you wish to send notifications to.
The **Personal Access Token** represents you, so even if you join a channel and close out of your web browser, you're still actually a part of that channel (until you log back in and leave the channel). + +Channels identify themselves as **name**/community; you only need to focus on the name. So if the channel was [**apprise**/community](https://gitter.im/apprise-notifications/community), the channel name can be assumed to be **apprise** when using this script. +### Syntax +Valid syntaxes are as follows: +* **gitter**://**{token}**/**{room}**/ +* **gitter**://**{token}**/**{room1}**/**{room2}**/**{roomN}**/ +* **gitter**://**{token}**/**{room}**/?**image=Yes** + +### Parameter Breakdown +| Variable | Required | Description +| ----------- | -------- | ----------- +| token | Yes | The Personal Access Token associated with your account. This is available to you after signing into their [development website](https://developer.gitter.im/apps). +| room | No | The room you want to notify. You can specify as many as you want of these on the URL. +| image | No | Send an image representing the message type prior to sending the message body. This is disabled by default. +| to | No | This is an alias to the room variable. 
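Since you can chain as many rooms as you like on the URL, notifying several at once is just a matter of listing them; a sketch with hypothetical values:

```shell
# Assuming our {token} is abcdefghij1234567890
# Notify the apprise and nuxref rooms in one call
apprise -vv -t "Test Message Title" -b "Test Message Body" \
   gitter://abcdefghij1234567890/apprise/nuxref
```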
+ +#### Example +Send a Gitter notification to our channel _apprise/community_: +```bash +# Assuming our {token} is abcdefghij1234567890 +# Assuming our {room} is apprise/community +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + gitter://abcdefghij1234567890/apprise +``` \ No newline at end of file diff --git a/content/en/docs/Integrations/.Notifications/Notify_gnome.md b/content/en/docs/Integrations/.Notifications/Notify_gnome.md new file mode 100644 index 00000000..c9102e42 --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_gnome.md @@ -0,0 +1,23 @@ +## Gnome Desktop Notifications +* **Source**: n/a +* **Icon Support**: Yes +* **Message Format**: Text +* **Message Limit**: 250 Characters per message + +Display notifications right on your Gnome desktop. This only works if you're sending the notification to the same system you're currently accessing. Hence this notification cannot be sent from one PC to another. + +### Syntax +There are currently no options you can specify for this kind of notification, so it's really easy to reference: +* **gnome**:// + + +### Parameter Breakdown +There are no parameters at this time.
+ +#### Example +Assuming we're on an OS that allows us to host the Gnome Desktop, we can send a notification to ourselves like so: +```bash +# Send ourselves a Gnome desktop notification +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + gnome:// +``` \ No newline at end of file diff --git a/content/en/docs/Integrations/.Notifications/Notify_googlechat.md b/content/en/docs/Integrations/.Notifications/Notify_googlechat.md new file mode 100644 index 00000000..ca4891a4 --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_googlechat.md @@ -0,0 +1,60 @@ +## Google Chat Notifications +* **Source**: https://chat.google.com/ +* **Icon Support**: No +* **Message Format**: Markdown +* **Message Limit**: 4000 Characters per message + +For this to work correctly you need a GSuite account (there are free trials if you don't have one). You then need to create a Webhook; this can be done as follows: + +1. [Open Google Chat in your browser](https://chat.google.com/) +1. Go to the room to which you want to add a bot. +1. From the room menu at the top of the page, select **Manage webhooks**. +1. Provide it a name and optional avatar and click **SAVE** +1. Copy the URL associated with your new webhook. +1. Click outside the dialog box to close. + +When you've completed these steps, you'll get a URL that looks a little like this: +``` +https://chat.googleapis.com/v1/spaces/AAAAkM/messages?key=AIzaSSjMm-WEfqKqqsHI&token=O7bnyri_WEXKcyFk%3D + ^ ^ ^ ^ ^ ^ + | | | | | | + workspace ... webhook_key... ..webhook_token.. +``` +Simplified, it looks like this: +- `https://chat.googleapis.com/v1/spaces/WORKSPACE/messages?key=WEBHOOK_KEY&token=WEBHOOK_TOKEN` + +Now it's important to note that while this Apprise plugin uses `gchat://`, you can also just use this URL exactly the way it was provided to you from Google when you copied and pasted it. This is a perfectly valid Google Chat Apprise URL as well.
+ +### Syntax +Valid syntax is as follows: +- `https://chat.googleapis.com/v1/spaces/{workspace}/messages?key={webhook_key}&token={webhook_token}` +- `gchat://{workspace}/{webhook_key}/{webhook_token}` + +### Parameter Breakdown +| Variable | Required | Description +| ----------- | -------- | ----------- +| workspace | Yes | The workspace associated with your Google Chat account. +| webhook_key | Yes | The webhook key associated with your Google Chat account. +| webhook_token | Yes | The webhook token associated with your Google Chat account. + +#### Example +Send a Google Chat notification: +```bash +# Assuming our {workspace} is AAAAkM +# Assuming our {webhook_key} is AIzaSSjMm-WEfqKqqsHI +# Assuming our {webhook_token} is O7bnyri_WEXKcyFk%3D + +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + gchat://AAAAkM/AIzaSSjMm-WEfqKqqsHI/O7bnyri_WEXKcyFk%3D +``` + +Remember, you can also just use the URL as it was provided to you when configuring your Webhook (quoted so the shell does not split it on the ampersand): +```bash +# Assuming our {workspace} is AAAAkM +# Assuming our {webhook_key} is AIzaSSjMm-WEfqKqqsHI +# Assuming our {webhook_token} is O7bnyri_WEXKcyFk%3D + +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + "https://chat.googleapis.com/v1/spaces/AAAAkM/messages?key=AIzaSSjMm-WEfqKqqsHI&token=O7bnyri_WEXKcyFk%3D" +``` \ No newline at end of file diff --git a/content/en/docs/Integrations/.Notifications/Notify_gotify.md b/content/en/docs/Integrations/.Notifications/Notify_gotify.md new file mode 100644 index 00000000..8c6ea674 --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_gotify.md @@ -0,0 +1,65 @@ +## Gotify Notifications +* **Source**: https://github.com/gotify/server +* **Icon Support**: No +* **Message Format**: Text +* **Message Limit**: 32768 Characters per Message + +### Syntax +Valid syntaxes are as follows: +* `gotify://{hostname}/{token}` +* `gotifys://{hostname}/{token}` +* `gotifys://{hostname}:{port}/{token}` +* 
`gotifys://{hostname}/{path}/{token}` +* `gotifys://{hostname}:{port}/{path}/{token}` +* `gotifys://{hostname}/{token}/?priority=high` + +Secure connections (via https) should be referenced using **gotifys://** whereas insecure connections (via http) should be referenced via **gotify://**. + +### Parameter Breakdown +| Variable | Required | Description +| ----------- | -------- | ----------- +| hostname | Yes | The Gotify server you're sending your notification to. +| token | Yes | The Application Token you generated on your Gotify Server +| port | No | The port the Gotify server is listening on. By default the port is **80** for **gotify://** and **443** for all **gotifys://** references. +| path | No | If you host your Gotify server at a location that requires an additional path prefix, you can include it as part of your URL string (the default is '**/**'). What is important here is that the final entry of your URL must still be the _token_. +| priority | No | The priority level to pass the message along as. Possible values are **low**, **moderate**, **normal**, and **high**. If no priority is specified then **normal** is used. +| format | No | The message format to announce to Gotify. By default all information is identified as `text`. But you can alternatively set this value to `markdown` as well.
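The `priority` parameter described above can be appended to any of these URLs; a sketch with hypothetical values (quoted so the shell leaves the URL intact):

```shell
# Assuming our {hostname} is gotify.server.local
# Assuming our {token} is abcdefghijklmn
apprise -vv -t "Test Message Title" -b "Test Message Body" \
   "gotify://gotify.server.local/abcdefghijklmn?priority=high"
```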
+ +#### Example +Send a Gotify message: +```bash +# Assuming our {hostname} is gotify.server.local +# Assuming our {token} is abcdefghijklmn +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + "gotify://gotify.server.local/abcdefghijklmn" + +# If your server is being hosted elsewhere and requires you to specify an +# additional path to get to it, you can notify it as follows: +# Assuming our {hostname} is gotify.server.local +# Assuming our {token} is abcdefghijklmn +# Assuming our {path} is /my/gotify/path/ +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + "gotify://gotify.server.local/my/gotify/path/abcdefghijklmn" +``` + +There is also **markdown** support if you want to leverage it; simply add `format=markdown` into your URL: +```bash +# Assuming our {hostname} is gotify.server.local +# Assuming our {token} is abcdefghijklmn +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + "gotify://gotify.server.local/abcdefghijklmn?format=markdown" +``` + +## Setup +Here is how I set up a quick Gotify server to test against. This may or may not be useful to other people. +### Docker +Based on [this source](https://hub.docker.com/r/gotify/server/): +```bash +# Docker (assuming a connection to docker.io) +sudo docker pull gotify/server + +# Bind-mount the host directory /var/gotify/data into the container +sudo docker run -p 80:80 -v /var/gotify/data:/app/data gotify/server +# Then visit http://localhost +``` \ No newline at end of file
Make sure you are configured to allow application registration! + +### Syntax +Valid syntaxes are as follows: +* **growl**://**{hostname}** +* **growl**://**{hostname}**:**{port}** +* **growl**://**{password}**@**{hostname}** +* **growl**://**{password}**@**{hostname}**:**{port}** +* **growl**://**{hostname}**/?**priority={priority}** + +Depending on the version of your Apple OS, you may wish to enable the legacy protocol version (v1.4) as follows if you have problems receiving the icon in version 2 (the default): +* **growl**://**{password}**@**{hostname}**?version=**1** + +### Parameter Breakdown +| Variable | Required | Description +| ----------- | -------- | ----------- +| hostname | Yes | The host your Growl server is listening on. +| port | No | The port the Growl server is listening on. By default the port is **23053**. You will probably never have to change this. +| password | No | The password associated with the Growl server if you set one up. +| version | No | The default version is 2, but you can specify the attribute ?version=1 if you require the 1.4 version of the protocol. +| priority | No | Can be **low**, **moderate**, **normal**, **high**, or **emergency**; the default is **normal** if a priority isn't specified. +| image | No | Whether or not to include an icon/image along with your message. By default this is set to **yes**. +| sticky | No | The Growl sticky flag; by default this is set to **no**.
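The optional password, port, and version pieces above slot into the URL the same way on every Growl server. A small sketch (not Apprise internals; the values are hypothetical) of composing one:

```python
# Sketch: compose a Growl Apprise URL with the optional password and the
# v1.4 protocol fallback described above (hypothetical values).
def growl_url(host, password=None, port=None, version=None):
    auth = f"{password}@" if password else ""
    netloc = f"{host}:{port}" if port else host
    query = f"?version={version}" if version is not None else ""
    return f"growl://{auth}{netloc}{query}"

print(growl_url("growl.server.local", password="s3cret", version=1))
# growl://s3cret@growl.server.local?version=1
```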
+ +#### Example +Send a Growl notification to our server: +```bash +# Assuming our {hostname} is growl.server.local +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + growl://growl.server.local +``` + +Some versions of Growl don't display the image/icon correctly; you can try the following to see if this solves it for you: +```bash +# Send a Growl notification using a raw binary image (instead of a URL - internally) +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + growl://growl.server.local?version=1 +``` \ No newline at end of file diff --git a/content/en/docs/Integrations/.Notifications/Notify_homeassistant.md b/content/en/docs/Integrations/.Notifications/Notify_homeassistant.md new file mode 100644 index 00000000..cceaae2b --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_homeassistant.md @@ -0,0 +1,67 @@ +## Home Assistant Notifications +* **Source**: https://www.home-assistant.io/ +* **Icon Support**: No +* **Message Format**: Text +* **Message Limit**: 32768 Characters per message + +### Setup +1. Access your profile after logging into your Home Assistant website. +2. You need to generate a **Long-Lived Access Token** via the **Create Token** button (very bottom of the profile page). + +### Syntax +Valid syntax is as follows: +* `hassio://{host}/{long-lived-access-token}` +* `hassio://{user}:{pass}@{host}/{access_token}` +* `hassio://{user}:{pass}@{host}:{port}/{access_token}` +* `hassio://{host}/optional/path/{access_token}` +* `hassio://{user}:{pass}@{host}/optional/path/{access_token}` +* `hassio://{user}:{pass}@{host}:{port}/optional/path/{access_token}` + +By default `hassio://` will use port `8123` (unless you otherwise specify). If you use `hassios://` (note the added `s`), then the `https` protocol is used on port `443` (unless otherwise specified).
+ +So the same URLs above could be written using a secure connection/port as: +* `hassios://{host}/{access_token}` +* `hassios://{user}:{pass}@{host}/{access_token}` +* `hassios://{user}:{pass}@{host}:{port}/{access_token}` +* `hassios://{host}/optional/path/{access_token}` +* `hassios://{user}:{pass}@{host}/optional/path/{access_token}` +* `hassios://{user}:{pass}@{host}:{port}/optional/path/{access_token}` + +The other thing to note is that Home Assistant requires a `notification_id` associated with each message sent. If the ID is the same as the previous one, then the previous message is overwritten with the new one. This may or may not be what your goal is. + +So by default Apprise will generate a unique ID (thus a separate message) on every call. If this isn't the effect you're going for, then define your own Notification ID like so: +* `hassio://{host}/{long-lived-access-token}?nid=myid` + +### Parameter Breakdown +| Variable | Required | Description +| ----------- | -------- | ----------- +| access_token | Yes | The generated **Long-Lived Access Token** from your profile page. +| hostname | Yes | The Web Server's hostname +| port | No | The port our Web server is listening on. By default the port is **8123** for **hassio://** and **443** for all **hassios://** references. +| user | No | If your system is set up to use HTTP-AUTH, you can provide a _username_ for authentication to it. +| password | No | If your system is set up to use HTTP-AUTH, you can provide a _password_ for authentication to it. +| nid | No | Allows you to specify the **Notification ID** used when sending the notifications to Home Assistant. By doing this, each message sent to Home Assistant will replace the last.
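The `notification_id` behaviour described above can be pictured with a plain dictionary: a repeated id overwrites the earlier entry, while unique ids accumulate. This is an illustration only, not Home Assistant or Apprise code:

```python
# Simulating the notification_id behaviour with a plain dict (illustration
# only): messages sharing an id replace one another; unique ids accumulate.
import uuid

store = {}

def notify(body, nid=None):
    # A repeated nid overwrites the earlier entry; no nid means a fresh one.
    store[nid or uuid.uuid4().hex] = body

notify("first", nid="apprise")
notify("second", nid="apprise")  # replaces "first"
notify("third")                  # unique id, so a new entry
print(len(store))  # 2
```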
+ +#### Example +Send a Home Assistant notification: +```bash +# Assuming the {hostname} we're hosting Home Assistant on is just myserver.local (port 8123) +# Assuming our {access_token} is 4b4f2918fd-dk5f-8f91f +apprise -vvv hassio://myserver.local/4b4f2918fd-dk5f-8f91f +``` + +Send a Home Assistant notification that always replaces the last one sent: +```bash +# Assuming the {hostname} we're hosting Home Assistant on is just myserver.local (port 8123) +# Assuming our {access_token} is 4b4f2918fd-dk5f-8f91f +# Fix our Notification ID to anything we want: +apprise -vvv "hassio://myserver.local/4b4f2918fd-dk5f-8f91f?nid=apprise" +``` + +Secure access to Home Assistant just requires you to add an `s` to the schema. Hence `hassio://` becomes `hassios://` like so: +```bash +# Assuming the {hostname} we're hosting a secure version of Home Assistant +# is accessible via my.secure.server.local (port 443) +# Assuming our {access_token} is 4b4f2918fd-dk5f-8f91f +apprise -vvv hassios://my.secure.server.local/4b4f2918fd-dk5f-8f91f +``` diff --git a/content/en/docs/Integrations/.Notifications/Notify_ifttt.md b/content/en/docs/Integrations/.Notifications/Notify_ifttt.md new file mode 100644 index 00000000..473b08b6 --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_ifttt.md @@ -0,0 +1,70 @@ +## IFTTT (If This Then That) Notifications +* **Source**: https://ifttt.com/ +* **Icon Support**: No +* **Message Format**: Text +* **Message Limit**: 32768 Characters per message + +### Account Setup +Creating an IFTTT account is easy. Visit their website and create your free account. + +Once you're hooked up, you'll want to visit [this URL](https://ifttt.com/services/maker_webhooks/settings) for the Webhooks service. This will be the gateway Apprise will use to signal any Applets you create. When you visit this page it will give you your API key in the form of a URL.
+ +The URL might look something like this: +```https://maker.ifttt.com/use/b1lUk7b9LpGakJARKBwRIZ``` + +This effectively equates to: +```https://maker.ifttt.com/use/{WebhookID}``` + +In the above example the **WebhookID** is ```b1lUk7b9LpGakJARKBwRIZ```. You will need this value! + +**Note:** Apprise supports this URL _as-is_ (_as of v0.7.7_); you no longer need to parse the URL any further. However there is slightly less overhead (internally) if you do. + +### Syntax +Valid syntaxes are as follows: +* `https://maker.ifttt.com/use/{WebhookID}` +* **ifttt**://**{WebhookID}**@**{Event}**/ +* **ifttt**://**{WebhookID}**@**{Event1}**/**{Event2}**/**{EventN}**/ +* **ifttt**://**{WebhookID}**@**{Event}**/**?+NewArg=ArgValue** +* **ifttt**://**{WebhookID}**@**{Event}**/**?-value3** + +By default these are the assigned default template entries: +* **{value1}** : The **title** will go here +* **{value2}** : The **body** will go here +* **{value3}** : The **message type** will go here (it will read either _info_, _warning_, _critical_, or _success_) + +### Parameter Breakdown +| Variable | Required | Description +| ----------- | -------- | ----------- +| WebhookID | Yes | Your webhooks API Key you got from [the settings area of the webhooks service itself](https://ifttt.com/services/maker_webhooks) +| Event | Yes | This is the **Event Name** you assigned to the Applet you created. You must pass in at least one of these. This is the event you plan on triggering through the webhook. +| +Arg=Val | No | Add an additional **{Arg}** into the payload and assign it the value of **{Val}**. It's very important that your argument starts with a plus (**+**) symbol in order to use this option. +| -Arg | No | This is useful if you want to eliminate one of the pre-defined arguments discussed above. You might want to include **?-value1&-value2** to just pass **value3** in the payload. It's very important that your argument starts with a hyphen/minus (**-**) symbol in order to use this option.
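The effect of the **+Arg** and **-Arg** query options on the payload can be simulated locally: start from the default value1/value2/value3 entries, drop any "-" keys, then add any "+" keys. This mimics the documented behaviour and is not Apprise's internal code:

```python
# Simulating the +/- query arguments (illustration only, not Apprise code):
# build the default value1/value2/value3 payload, remove "-" keys, add "+" keys.
def build_payload(title, body, ntype, args):
    payload = {"value1": title, "value2": body, "value3": ntype}
    for key, val in args.items():
        if key.startswith("-"):
            payload.pop(key[1:], None)
        elif key.startswith("+"):
            payload[key[1:]] = val
    return payload

print(build_payload("", "", "info",
                    {"-value1": "", "-value2": "", "-value3": "", "+switch": "on"}))
# {'switch': 'on'}
```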
As mentioned above, your payload will ALWAYS include **value1**, **value2**, and **value3** in it unless you specify otherwise. + +#### Examples +Send an IFTTT notification: +```bash +# Assuming our {WebhookID} is b1lUk7b9LpGakJARKBwRIZ +# Assuming our {Event} is sms_message +# Assuming you want {value1} to read "My Title" +# Assuming you want {value2} to read "My Body" +# Assuming you want {value3} to read "info" +apprise -vv -t "My Title" -b "My Body" \ + ifttt://b1lUk7b9LpGakJARKBwRIZ@sms_message +``` + +Now I realize not everyone will want to use the default **{valueX}** entries defined. In fact, you may want to just use apprise to turn on a light switch and set some completely different value like **{switch}** to '_on_'. Here is how you could accomplish this: +```bash +# Send {switch} a value of 'on' +# Assuming our {WebhookID} is b1lUk7b9LpGakJARKBwRIZ +# Assuming our {Event} is my_light +# Any argument prefixed with a minus/hyphen (-) eliminates an +# argument from our payload. Since we know value1, value2, and +# value3 are present in every payload, we eliminate them. +# +# Now we use a plus (+) symbol in front of an argument to tell +# the remote server we want to include a new option called +# switch and set its value to 'on' +apprise -vv -b "" "ifttt://b1lUk7b9LpGakJARKBwRIZ@my_light/?-value1&-value2&-value3&+switch=on" +``` + +**Thoughts**: The +/- options are relatively new, but it still feels like this plugin could be made even easier to use. If you have any ideas, please open a ticket and let me know!
diff --git a/content/en/docs/Integrations/.Notifications/Notify_join.md b/content/en/docs/Integrations/.Notifications/Notify_join.md new file mode 100644 index 00000000..aafe3972 --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_join.md @@ -0,0 +1,50 @@ +## Join Notifications +* **Source**: https://joaoapps.com/join/ +* **Icon Support**: Yes +* **Message Format**: Text +* **Message Limit**: 1000 Characters per message + +To use this plugin: +1. Ensure your browser allows popups and visit [joinjoaomgcd.appspot.com](https://joinjoaomgcd.appspot.com/). +2. To register you just need to allow the page to link with your Google Profile. The good news is it doesn't ask for anything too personal. +3. Download the app for your phone from the [Android Store here](https://play.google.com/store/apps/details?id=com.joaomgcd.join). +4. Using your phone, when you first open the application, it will ask for a series of permissions and ask you a couple of questions. +5. If you just recently registered your device (in the previous step), you should now be able to refresh your browser at [joinjoaomgcd.appspot.com](https://joinjoaomgcd.appspot.com/). Your device should list itself. From here you can retrieve the API key you need to work with Apprise. + +### Syntax +Valid syntax is as follows: +* **join**://**{apikey}**/ +* **join**://**{apikey}**/**{device_id}** +* **join**://**{apikey}**/**{device_id1}**/**{device_id2}**/**{device_idN}** + +**Note**: If no device is specified, then by default **group.all** is used.
+ +Groups can be referenced like this (the *group.* part is optional): +* **join**://**{apikey}**/group.**{group_id}** +* **join**://**{apikey}**/group.**{group_id1}**/group.**{group_id2}**/group.**{group_idN}** +* **join**://**{apikey}**/**{group_id}** +* **join**://**{apikey}**/**{group_id1}**/**{group_id2}**/**{group_idN}** + +If what you specify isn't a `group` or `device_id` then it is interpreted as a `device_name` as a fallback: +* **join**://**{apikey}**/**{device_name}** +* **join**://**{apikey}**/**{device_name1}**/**{device_name2}**/**{device_nameN}** + +You can freely mix and match these combinations as well: +* **join**://**{apikey}**/**{device_id}**/**{group_id}**/**{device_name}** + +### Parameter Breakdown +| Variable | Required | Description +| ----------- | -------- | ----------- +| apikey | Yes | The API key associated with your Join account. +| device_id | No | The device identifier to send your notification to (a 32 bit alpha-numeric string). +| device_name | No | The device name (PC, Nexus, etc.) +| group_id | No | The group identifier to send your notification to. + +#### Example +Send a Join notification to all of our configured devices: +```bash +# Assuming our {apikey} is abcdefghijklmnop-abcdefg +# Assume we're sending to the group: all
apprise -vv -t "Test Message Title" -b "Test Message Body" \ + join://abcdefghijklmnop-abcdefg/group.all +``` diff --git a/content/en/docs/Integrations/.Notifications/Notify_kavenegar.md b/content/en/docs/Integrations/.Notifications/Notify_kavenegar.md new file mode 100644 index 00000000..264cd06c --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_kavenegar.md @@ -0,0 +1,38 @@ +## Kavenegar +* **Source**: https://kavenegar.com +* **Icon Support**: No +* **Message Format**: Text +* **Message Limit**: 160 Characters per message + +### Account Setup +To use Kavenegar, first register an account on [their website](https://kavenegar.com/).
After you've done so, you can get your API Key from the [account profile](https://panel.kavenegar.com/client/setting/account) section. + +### Syntax +Valid syntaxes are as follows: + +* `kavenegar://{apikey}/{to_phone_no}` +* `kavenegar://{from_phone_no}@{apikey}/{to_phone_no}` +* `kavenegar://{apikey}/{to_phone_no}/{to_phone_no2}/{to_phone_noN}/` +* `kavenegar://{from_phone_no}@{apikey}/{to_phone_no}/{to_phone_no2}/{to_phone_noN}/` + +### Parameter Breakdown +| Variable | Required | Description +| --------------- | -------- | ----------- +| ApiKey | Yes | The _API Key_ associated with your Kavenegar account. This is available to you via the [account profile](https://panel.kavenegar.com/client/setting/account) section of their website (after logging in). +| ToPhoneNo | Yes | Kavenegar does not handle the `+` in front of the country codes. You need to substitute the correct number of zeros in front of the outbound number in order for the message to be delivered. +| FromPhoneNo | No | The number you wish your message to identify itself as coming from. This argument is optional.
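Normalizing a phone number for Kavenegar (stripping spaces, brackets, and dashes, and swapping a leading `+` for zeros) can be sketched like this. This is an illustration of the rule described above, not Apprise's own parsing code:

```python
# Sketch: normalize a phone number for Kavenegar - strip spaces, brackets
# and dashes, and replace a leading "+" with "00" (Kavenegar does not
# accept the "+" country-code prefix).
import re

def normalize(number):
    digits = re.sub(r"[^\d+]", "", number)
    return "00" + digits[1:] if digits.startswith("+") else digits

print(normalize("+1 (800) 555-1223"))  # 0018005551223
```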
+ +#### Example +Send a Kavenegar Notification as an SMS: +```bash +# Assuming our {ApiKey} is gank339l7jk3cjaE +# Assuming our {PhoneNo} is in the US somewhere, making our country code 001 +# - identifies as 800-555-1223 +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + kavenegar://gank339l7jk3cjaE/0018005551223 + +# the following would also have worked (spaces, brackets, +# dashes are accepted in a phone no field): +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + "kavenegar://gank339l7jk3cjaE/001 - (800) 555-1223" +``` \ No newline at end of file diff --git a/content/en/docs/Integrations/.Notifications/Notify_kodi.md b/content/en/docs/Integrations/.Notifications/Notify_kodi.md new file mode 100644 index 00000000..e28265f2 --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_kodi.md @@ -0,0 +1,32 @@ +## KODI Notifications +* **Source**: http://kodi.tv/ +* **Icon Support**: No +* **Message Format**: Text +* **Message Limit**: 250 Characters per message + +### Syntax +Valid syntaxes are as follows: +* **kodi**://**{hostname}** +* **kodi**://**{hostname}**:**{port}** +* **kodi**://**{userid}**:**{password}**@**{hostname}**:**{port}** +* **kodis**://**{hostname}** +* **kodis**://**{hostname}**:**{port}** +* **kodis**://**{userid}**:**{password}**@**{hostname}**:**{port}** + +Secure connections (via https) should be referenced using **kodis://** whereas insecure connections (via http) should be referenced via **kodi://**. + +### Parameter Breakdown +| Variable | Required | Description +| ----------- | -------- | ----------- +| hostname | Yes | The server Kodi is listening on. +| port | No | The port Kodi is listening on. By default the port is **80** for **kodi://** and **443** for all **kodis://** references. +| userid | No | The account login to your KODI server. +| password | No | The password associated with your KODI Server.
+ +#### Example +Send a Kodi notification to our server listening on port 80: +```bash +# Assuming our {hostname} is kodi.server.local +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + kodi://kodi.server.local +``` diff --git a/content/en/docs/Integrations/.Notifications/Notify_kumulos.md b/content/en/docs/Integrations/.Notifications/Notify_kumulos.md new file mode 100644 index 00000000..94bd465d --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_kumulos.md @@ -0,0 +1,31 @@ +## Kumulos +* **Source**: https://kumulos.com/ +* **Icon Support**: No +* **Message Format**: Text +* **Message Limit**: 240 Characters per message + +### Account Setup +To use this plugin, you must have a Kumulos account set up. Add at least 1 client and link it with your phone via the _Companion App_ option in the profile menu area: + - [Android App](https://play.google.com/store/apps/details?id=com.kumulos.companion) + - [iOS](https://apps.apple.com/us/app/kumulos/id1463947782) + +To use Kumulos, you will need to acquire your _API Key_ and _Server Key_. Both of these are accessible via the Kumulos Dashboard. + +### Syntax +Valid syntax is as follows: +* `kumulos://{ApiKey}/{ServerKey}` + +### Parameter Breakdown +| Variable | Required | Description +| --------------- | -------- | ----------- +| ApiKey | Yes | The _API Key_ associated with your Kumulos account. +| ServerKey | Yes | The _Server Key_ (also called the _Server Secret_) associated with your Kumulos account.
+ +#### Example +Send a Kumulos Notification: +```bash +# Assuming our {APIKey} is 8b799edf-6f98-4d3a-9be7-2862fb4e5752 +# Assuming our {ServerKey} is aNe8IVQvUay79KEOt8jEh2GPWOwRKAXG+lP7 +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + kumulos://8b799edf-6f98-4d3a-9be7-2862fb4e5752/aNe8IVQvUay79KEOt8jEh2GPWOwRKAXG+lP7 +``` \ No newline at end of file diff --git a/content/en/docs/Integrations/.Notifications/Notify_lametric.md b/content/en/docs/Integrations/.Notifications/Notify_lametric.md new file mode 100644 index 00000000..20f4e9e0 --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_lametric.md @@ -0,0 +1,85 @@ +## LaMetric Time/Clock Notifications +* **Source**: https://lametric.com +* **Icon Support**: Yes +* **Message Format**: Text +* **Message Limit**: 32768 Characters per message + +## Setup +You now have two methods of notifying your LaMetric Device: +1. **Device Mode**: Directly query your LaMetric Device on your local network to send it a notification. +2. **Cloud Mode**: A secure query to LaMetric's API server in the cloud to send a message to your clock. You will have limited options with this method. + +#### Device Mode Setup +With Device Mode, your Apprise query will directly interface with the LaMetric Time Device on your local network. +1. Sign up and log in to the [Developer Webpage](https://developer.lametric.com). +1. Locate your Device **API Key**; you can find it [here](https://developer.lametric.com/user/devices). +1. You now need to know the IP address your device resides on. Your device's **IP Address** can be found in the LaMetric Time app at: **Settings** -> **Wi-Fi** -> **IP Address** + +#### Cloud Mode Setup + +**Note**: It appears that at some point LaMetric dropped support for their cloud mode. While it is documented in their forums with screenshots and usage examples, none of it seems to be available to end users anymore.
Those who still have access to their upstream servers can leverage this; everyone else using this Apprise plugin will need to focus on the normal Device Mode (explained above) instead. + +Using Cloud Mode, you will interface with your LaMetric Time device through the internet. +1. Sign up and log in to the [Developer Webpage](https://developer.lametric.com). +2. Create an **Indicator App** if you haven't already done so from [here](https://developer.lametric.com/applications/sources). + - There is a great official tutorial on how to do this [here](https://lametric-documentation.readthedocs.io/en/latest/guides/first-steps/first-lametric-indicator-app.html#publish-app-and-install-it-to-your-lametric-time) +3. Make sure to set the **Communication Type** to **PUSH**. +4. You will be able to **Publish** your app once you've finished setting it up. This will allow it to be accessible from the internet using the `cloud` mode of this Apprise Plugin. The **Publish** button shows up from within the settings of your LaMetric App upon clicking on the **Draft Vx** folder (where `x` is the version - usually a 1) + +5. When you've completed the above steps, the site will have provided you with a **PUSH URL** that looks like this: + - `https://developer.lametric.com/api/v1/dev/widget/update/com.lametric.{app_id}/{app_ver}` + + You will need to record the `{app_id}` and `{app_ver}` to use the `cloud` mode. + + The same page should also provide you with an Application **Access Token**. It's approximately 86 characters long with two equals signs (`=`) at the end. This becomes your `{app_access_token}`.
Here is an example of what one might look like: + - `K2MxWI0NzU0ZmI2NjJlZYTgViMDgDRiN8YjlmZjRmNTc4NDVhJzk0RiNjNh0EyKWW==` + +### Syntax +Device Mode syntaxes are as follows: +* `lametric://{apikey}@{hostname}` +* `lametric://{apikey}@{hostname}:{port}` +* `lametric://{userid}:{apikey}@{hostname}` +* `lametric://{userid}:{apikey}@{hostname}:{port}` + +Cloud Mode syntax is as follows: +* `lametric://{app_access_token}@{app_id}` +* `lametric://{app_access_token}@{app_id}/{app_version}` + +### Parameter Breakdown +The breakdown of parameters depends on whether you are using the Cloud Mode or Device Mode. + +#### Device Mode +| Variable | Required | Description +| ----------- | -------- | ----------- +| apikey | Yes | Your Device **API Key** can be found on LaMetric's website [here](https://developer.lametric.com/user/devices) +| hostname | Yes | This is the IP address or hostname of your LaMetric device on your local network. +| port | No | The port your LaMetric device is listening on. By default the port is **8080**. +| userid | No | The account login to your LaMetric device on your local network. By default the user is set to `dev`. +| mode | No | Define the Apprise/LaMetric mode to use. This can be either set to `cloud` or `device`. It's worth pointing out that Apprise is smart enough to detect the mode you're using based on the URL you provide it. But for those who want to explicitly provide its value, they can do so. +| cycles | No | The number of times the message should be displayed. If cycles is set to `0`, the notification will stay on the screen until the user dismisses it manually. By default it is set to `1`. +| sound | No | An audible alarm that can be sent with the notification.
The following keywords are supported: `bicycle`, `car`, `cash`, `cat`, `dog`, `dog2`, `energy`, `knock-knock`, `letter_email`, `lose1`, `lose2`, `negative1`, `negative2`, `negative3`, `negative4`, `negative5`, `notification`, `notification2`, `notification3`, `notification4`, `open_door`, `positive1`, `positive2`, `positive3`, `positive4`, `positive5`, `positive6`, `statistic`, `thunder`, `water1`, `water2`, `win`, `win2`, `wind`, `wind_short`, `alarm1`, `alarm2`, `alarm3`, `alarm4`, `alarm5`, `alarm6`, `alarm7`, `alarm8`, `alarm9`, `alarm10`, `alarm11`, `alarm12`, and `alarm13`. +| priority | No | The priority of the message; the possible values are `info`, `warning`, and `critical`. By default `info` is used if nothing is specified. +| icon_type | No | Represents the nature of notification; the possible values are `info`, `alert`, and `none`. By default `none` is used if nothing is specified. + +#### Cloud Mode +| Variable | Required | Description +| ----------- | -------- | ----------- +| app_id | Yes | Your Indicator App's **Application ID** can be found in your **Indicator App Configuration**. You can access your application's configuration from the LaMetric website [here](https://developer.lametric.com/applications/). +| app_access_token | Yes | Your Indicator App's **Access Token** can be found in your **Indicator App Configuration**. You can access your application's configuration from the LaMetric website [here](https://developer.lametric.com/applications/). +| app_ver | No | The version associated with your Indicator App. If this isn't specified, then the default value of `1` (One) is used. +| mode | No | Define the Apprise/LaMetric mode to use. This can be either set to `cloud` or `device`. It's worth pointing out that Apprise is smart enough to detect the mode you're using based on the URL you provide it. But for those who want to explicitly provide its value, they can do so.
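The optional Device Mode parameters above (`cycles`, `sound`, `priority`, and so on) are ordinary URL query arguments. A short sketch of tacking them onto the base URL (hypothetical values; not Apprise internals):

```python
# Sketch: append the optional Device Mode query arguments to a base
# lametric:// URL (hypothetical apikey/host values).
from urllib.parse import urlencode

def lametric_device_url(apikey, host, **params):
    query = f"?{urlencode(params)}" if params else ""
    return f"lametric://{apikey}@{host}{query}"

print(lametric_device_url("abc123", "192.168.1.3",
                          cycles=0, sound="knock-knock", priority="critical"))
# lametric://abc123@192.168.1.3?cycles=0&sound=knock-knock&priority=critical
```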
+ +#### Example +Send a LaMetric Time notification using Device Mode (local to our network): +```bash +# Assuming our {apikey} is abc123 +# Assuming our {hostname} is 192.168.1.3 +apprise -vv -b "Test Message Body" lametric://abc123@192.168.1.3 +``` + +Send a LaMetric Time notification using Cloud Mode (using LaMetric's Developer API): +```bash +# Assuming our {app_id} is ABCD1234 +# Assuming our {app_access_token} is abcdefg== +apprise -vv -b "Test Message Body" lametric://abcdefg==@ABCD1234 +``` diff --git a/content/en/docs/Integrations/.Notifications/Notify_macosx.md b/content/en/docs/Integrations/.Notifications/Notify_macosx.md new file mode 100644 index 00000000..c11c8b4e --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_macosx.md @@ -0,0 +1,36 @@ +## MacOS X Desktop Notifications +* **Source**: n/a +* **Icon Support**: Yes +* **Message Format**: Text +* **Message Limit**: 250 Characters per message + +Display notifications right on your Mac OS X desktop provided you're running version 10.8 or higher and have installed [terminal-notifier](https://github.com/julienXX/terminal-notifier). This only works if you're sending the notification to the same system you're currently accessing. Hence this notification cannot be sent from one PC to another. + +```bash +# Make sure terminal-notifier is installed into your system +brew install terminal-notifier +``` + +### Syntax +There are currently no options you can specify for this kind of notification, so it's really easy to reference: +* `macosx://` + +You can also choose to set a sound to play (such as `default`): +* `macosx://_/?sound=default` + +The `sound` can be set to any of the sound names listed in _Sound Preferences_ of your Mac OS. + +### Parameter Breakdown + +| Variable | Required | Description +| ----------- | -------- | ----------- +| sound | No | The `sound` can be set to any of the sound names listed in _Sound Preferences_ of your Mac OS. +| image | No | Associate an image with the message.
By default this is enabled. + +#### Example +We can send a notification to ourselves like so: +```bash +# Send ourselves a MacOS desktop notification +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + macosx:// +``` \ No newline at end of file diff --git a/content/en/docs/Integrations/.Notifications/Notify_mailgun.md b/content/en/docs/Integrations/.Notifications/Notify_mailgun.md new file mode 100644 index 00000000..c077283e --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_mailgun.md @@ -0,0 +1,50 @@ +## Mailgun Notifications +* **Source**: https://www.mailgun.com/ +* **Icon Support**: No +* **Attachment Support**: Yes +* **Message Format**: HTML +* **Message Limit**: 32768 Characters per message + +### Account Setup +You can create an account for free [on their website](https://www.mailgun.com/) but it comes with restrictions. + +Every domain you set up with them is accessible from your dashboard once you're signed in. Here is a [quick link](https://app.mailgun.com/app/domains) to it. If you're using a free account, at the very least you will be able to see your _sandbox domain_ here. From here you can also acquire your **API Key** associated with each domain you've set up. + +### Syntax +Valid syntaxes are as follows: +* **mailgun**://**{user}**@**{domain}**/**{apikey}**/ +* **mailgun**://**{user}**@**{domain}**/**{apikey}**/**{email}**/ +* **mailgun**://**{user}**@**{domain}**/**{apikey}**/**{email1}**/**{email2}**/**{emailN}**/ + +You may also identify your region if you aren't using the US servers like so: +* **mailgun**://**{user}**@**{domain}**/**{apikey}**/?**region=eu** + +You can adjust what the Name associated with the From email is set to as well: +* **mailgun**://**{user}**@**{domain}**/**{apikey}**/?**From=Luke%20Skywalker** + +### Email Extensions +If you wish to utilize extensions, you'll need to escape the plus (+) character with **%2B** like so:
``mailgun://{user}@{domain}/{apikey}/chris%2Bextension@example.com`` + +### Parameter Breakdown +| Variable | Required | Description +| ----------- | -------- | ----------- +| apikey | Yes | The API Key associated with the domain you want to send your email from. This is available to you after signing into their website and accessing the [dashboard](https://app.mailgun.com/app/domains). +| domain | Yes | The Domain you wish to send your email from; this domain must be registered and set up with your mailgun account. +| user | Yes | The user gets paired with the domain you specify on the URL to make up the **From** email address your recipients receive their email from. +| email | No | You can specify as many email addresses as you wish. Each address you identify here will represent the **To**.
**Note:** Depending on your account setup, mailgun does restrict you from emailing certain addresses. +| region | No | Identifies which server region you intend to access. Supported options here are **eu** and **us**. By default this is set to **us** unless otherwise specified. This specifically affects which API server you will access to send your emails from. +| from | No | This allows you to identify the name associated with the **From** email address when delivering your email. +| to | No | This is an alias to the email variable. You can chain as many (To) emails as you want here separating each with a comma and/or space. +| cc | No | Identify address(es) to notify as a Carbon Copy. +| bcc | No | Identify address(es) to notify as a Blind Carbon Copy. + +#### Example +Send a mailgun notification to the email address bill.gates@microsoft.com: +```bash +# Assuming the {domain} we set up with our mailgun account is example.com +# Assuming our {apikey} is 4b4f2918fd-dk5f-8f91f +# We already know our To {email} is bill.gates@microsoft.com +# Assuming we want our email to come from noreply@example.com +apprise mailgun://noreply@example.com/4b4f2918fd-dk5f-8f91f/bill.gates@microsoft.com +``` \ No newline at end of file diff --git a/content/en/docs/Integrations/.Notifications/Notify_matrix.md b/content/en/docs/Integrations/.Notifications/Notify_matrix.md new file mode 100644 index 00000000..6a48d61d --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_matrix.md @@ -0,0 +1,74 @@ +## Matrix Notifications +* **Source**: https://matrix.org/ +* **Icon Support**: Yes +* **Message Format**: Text +* **Message Limit**: 1000 Characters per message + +By default the Apprise Integration of Matrix occurs using its built-in API. + +However, [the webhook service](https://matrix.org/docs/projects/bot/matrix-webhook.html) also works for those wishing to use it. At this time, it is still identified as being in its _late beta_ state.
+This can be done by specifying **?mode=matrix** or **?mode=slack**, presuming you've [set it up](https://github.com/turt2live/matrix-appservice-webhooks). + +### Syntax +Valid syntaxes are as follows: +* `matrix://{user}:{password}@{matrixhost}/#{room_alias}` +* `matrixs://{user}:{password}@{matrixhost}/!{room_id}` + +You can mix and match as many rooms as you wish: +* `matrixs://{user}:{password}@{matrixhost}/!{room_id}/#{room_alias}/` + +**Note:** If no user and/or password is specified, then the matrix registration process is invoked. Matrix servers actually allow this (if enabled to do so in their configuration), connecting you as a temporary user with or without a password and/or user name. Under normal circumstances you should probably always supply a **{user}** and **{password}**. + +**Note:** Federated room identifiers are fully supported by Apprise. If no hostname is found in the _{room_id}_ and/or _{room_alias}_ entries specified, then apprise automatically uses the hostname returned to it (internally) upon login. For example, assume the following URL:
`matrix://user:pass@localhost/#room/#room:example.com/!abc123/!def456:example.com`: + + * **#room** is internally interpreted as **#room:localhost** before it is accessed. + * **#room:example.com** is not altered and is directly notified as such + * **!abc123** is internally interpreted as **!abc123:localhost** + * **!def456:example.com** is not altered and is directly notified as such + +When you specify the **?mode=** argument you entirely change how this plugin works and the syntax becomes: +* `matrix://{user}:{token}@{hostname}?mode=matrix` +* `matrixs://{token}@{hostname}:{port}?mode=matrix` +* `matrix://{user}:{token}@{hostname}?mode=slack&format=markdown` +* `matrixs://{token}@{hostname}?mode=slack&format=markdown` + +If you use [**t2bot.io**](https://t2bot.io/), then you can use the following URLs: +* `matrix://{t2bot_webhook_token}` +* `matrix://{user}@{t2bot_webhook_token}` + +You can also just use the t2bot URL as they share it with you from their website: +* `https://webhooks.t2bot.io/api/v1/matrix/hook/{t2bot_webhook_token}` + +### Parameter Breakdown +| Variable | Required | Description +| ----------- | -------- | ----------- +| hostname | *Yes | The matrix server you wish to connect to. +| t2bot_webhook_token | *Yes | This is effectively the hostname but acts as the t2bot webhook token if the mode is set to t2bot. Apprise is smart enough to determine the mode provided you follow the t2bot URL examples explained above. This field becomes the `hostname` in all other cases. +| user | No | The user to authenticate (and/or register) with the matrix server +| password | No | The password to authenticate (and/or register) with the matrix server +| port | No | The server port Matrix is listening on. By default **matrixs://** uses a secure port of **443** while **matrix://** uses port **80**. +| room_alias | No | The room alias you wish to join (if not there already) and broadcast your notification.
To avoid ambiguity, _you should_ prefix these locations with a pound/hashtag symbol **#**, although it is not required. +| room_id | No | The room id you wish to join (if not there already) and broadcast your notification. To avoid ambiguity, _you MUST_ prefix these locations with an exclamation symbol **!** (_otherwise it is interpreted as a room_alias instead_) +| thumbnail | No | Displays an image before each notification is sent that identifies the notification type (warning, info, error, success). By default this option is set to **False**. +| mode | No | This is optional and allows you to specify a webhook mode instead. Setting this to **matrix** or **slack** allows you to leverage [this webhook service](https://matrix.org/docs/projects/bot/matrix-webhook.html) instead of directly communicating with the matrix server. By default no webhooks are used. +| msgtype | No | This is optional and allows you to specify a Matrix message type to use. Possible options are **text** and **notice**. By default all messages are sent as **text**. + +**Note**: If neither a **{room_alias}** nor a **{room_id}** is specified on the URL then upon connecting to the matrix server, a list of currently joined channels will be polled. Each and every channel the account is currently part of will automatically be notified.
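The federated room handling described above can be sketched in a few lines of Python. This is a simplified illustration of the normalization rules only, not Apprise's actual implementation; `qualify_room` and `home_server` are hypothetical names:

```python
def qualify_room(room: str, home_server: str) -> str:
    """Return a fully-qualified Matrix room id or alias.

    Entries without a ':domain' suffix are assumed to live on the
    server we logged into; entries that already carry a domain are
    left untouched, matching the interpretation rules above.
    """
    if not room.startswith(("#", "!")):
        # A bare name is treated as a room alias
        room = "#" + room
    if ":" not in room:
        room = f"{room}:{home_server}"
    return room

# Mirrors the four interpretations listed above:
print(qualify_room("#room", "localhost"))              # #room:localhost
print(qualify_room("#room:example.com", "localhost"))  # #room:example.com
print(qualify_room("!abc123", "localhost"))            # !abc123:localhost
```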
+ +#### Example +Send a secure Matrix.org notification to our server +```bash +# Assuming our {hostname} is matrix.example.com +# Assuming our {user} is nuxref +# Assuming our {password} is abc123 +# Assuming the {room_alias} we want to notify is #general and #apprise +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + matrixs://nuxref:abc123@matrix.example.com/#general/#apprise +``` + +Send a [**t2bot.io**](https://t2bot.io/webhooks/) request: +```bash +# Assuming our {webhook} is ABCDEFG12345 +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + matrix://ABCDEFG12345 +``` \ No newline at end of file diff --git a/content/en/docs/Integrations/.Notifications/Notify_mattermost.md b/content/en/docs/Integrations/.Notifications/Notify_mattermost.md new file mode 100644 index 00000000..977eaf08 --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_mattermost.md @@ -0,0 +1,84 @@ +## Mattermost Notifications +* **Source**: https://mattermost.com/ +* **Icon Support**: Yes +* **Message Format**: Text +* **Message Limit**: 4000 Characters per message + +To use this plugin, you need to first set yourself up with http://mattermost.com. Download their software and set it up. + +From here you'll need an **Incoming Webhook**. This can be done as follows: +1. Click on the **Integrations** option under the channel dropdown and select **Incoming Webhook**:
      + Incoming Webhook +2. From here you can select **Add Incoming Webhook**:
+ Add Incoming Webhook +3. Finally you'll be able to customize how you want the webhook to act/behave and you can press **Save** at the bottom when you're finished.
+ Generate An Apprise URL from it + +An example URL you may be provided could look like this: +```bash +# The URL provided by Mattermost: +http://localhost:8065/hooks/yokkutpah3r3urc5h6i969yima + ^ ^ ^ + | | | + hostname port webhook token + +# From here you can do the following to generate your Apprise URL: +# - http:// becomes mmost:// +# - drop /hooks reference +# Which gets you: +mmost://localhost:8065/yokkutpah3r3urc5h6i969yima +``` + +### Syntax +Valid syntaxes are as follows: +* `mmost://{hostname}/{token}` +* `mmost://{hostname}:{port}/{token}` +* `mmost://{botname}@{hostname}/{token}` +* `mmost://{botname}@{hostname}:{port}/{token}` +* `mmost://{hostname}/{path}/{token}` +* `mmost://{hostname}:{port}/{path}/{token}` +* `mmost://{botname}@{hostname}/{path}/{token}` +* `mmost://{botname}@{hostname}:{port}/{path}/{token}` + +Secure connections (via https) should be referenced using **mmosts://** whereas insecure connections (via http) should be referenced via **mmost://**; they follow the same structure: + +* `mmosts://{hostname}/{token}` +* `mmosts://{hostname}:{port}/{token}` +* `mmosts://{botname}@{hostname}/{token}` +* `mmosts://{botname}@{hostname}:{port}/{token}` +* `mmosts://{hostname}/{path}/{token}` +* `mmosts://{hostname}:{port}/{path}/{token}` +* `mmosts://{botname}@{hostname}/{path}/{token}` +* `mmosts://{botname}@{hostname}:{port}/{path}/{token}` + +### Parameter Breakdown +| Variable | Required | Description +| ----------- | -------- | ----------- +| hostname | Yes | The server Mattermost is listening on. +| token | Yes | The Webhook Token you would have received after setting up the Mattermost **Incoming Webhook** +| port | No | The server port Mattermost is listening on. By default the port is **8065**. +| path | No | You can identify a sub-path if you wish. The last element of the path must be the **token**.
+| botname | No | An optional botname you can associate with your post +| image | No | Identify whether or not you want the Apprise image (showing status color) to display with every message. By default this is set to **yes**. +| channels | No | You can optionally specify as many channels as you want in a comma separated value (as a keyword argument). See example below for how to use this. Note that your **Incoming Webhook** must not be restricted to a specific channel, otherwise the alternate channels you provide here will not work. + +#### Example +Send a secure Mattermost notification to our server +```bash +# Assuming our {hostname} is mattermost.server.local +# Assuming our {token} is 3ccdd113474722377935511fc85d3dd4 + +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + mmosts://mattermost.server.local/3ccdd113474722377935511fc85d3dd4 +``` + +Send an insecure Mattermost notification to our server, additionally addressing specific channels: +```bash +# Assuming our {hostname} is mattermost.server.local +# Assuming our {token} is 3ccdd113474722377935511fc85d3dd4 +# Assuming our {channels} is #support and #general + +# We don't need to provide the '#' (hashtag) prefix: +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + mmost://mattermost.server.local/3ccdd113474722377935511fc85d3dd4?channels=support,general +``` diff --git a/content/en/docs/Integrations/.Notifications/Notify_messagebird.md b/content/en/docs/Integrations/.Notifications/Notify_messagebird.md new file mode 100644 index 00000000..70286156 --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_messagebird.md @@ -0,0 +1,36 @@ +## MessageBird +* **Source**: https://messagebird.com +* **Icon Support**: No +* **Message Format**: Text +* **Message Limit**: 160 Characters per message + +### Account Setup +To use MessageBird, you will need to acquire your _API Key_. This is accessible via the [MessageBird Dashboard](https://dashboard.messagebird.com/en/user/index).
+ +### Syntax +Valid syntaxes are as follows: +* **msgbird**://**{ApiKey}**/**{FromPhoneNo}** +* **msgbird**://**{ApiKey}**/**{FromPhoneNo}**/**{ToPhoneNo}** +* **msgbird**://**{ApiKey}**/**{FromPhoneNo}**/**{ToPhoneNo1}**/**{ToPhoneNo2}**/**{ToPhoneNoN}** + +### Parameter Breakdown +| Variable | Required | Description +| --------------- | -------- | ----------- +| ApiKey | Yes | The _API Key_ associated with your MessageBird account. This is available to you via the [MessageBird Dashboard](https://dashboard.messagebird.com/en/user/index). +| FromPhoneNo | Yes | A from phone number MUST include the country code's dialling prefix as well when placed. This field is also very friendly and supports brackets, spaces and hyphens in the event you want to format the number in an easy to read fashion. This MUST be the number you registered with your *MessageBird* account. +| ToPhoneNo | No | A to phone number MUST include the country code's dialling prefix as well when placed. This field is also very friendly and supports brackets, spaces and hyphens in the event you want to format the number in an easy to read fashion. If no *ToPhoneNo* is specified, then the *FromPhoneNo* is notified instead.
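The phone number fields above tolerate brackets, spaces and hyphens. A minimal sketch of how such input might be reduced to a plain dialable number (an assumption about the general approach; `normalize_phone_no` is a hypothetical helper, not MessageBird's or Apprise's code):

```python
import re

def normalize_phone_no(value: str) -> str:
    """Strip all formatting characters, keeping only the digits
    (country code included), as the URL parser might."""
    return re.sub(r"\D", "", value)

# Both spellings from the examples below collapse to the same target:
print(normalize_phone_no("1-(123) 555-1223"))  # 11235551223
```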
+ +#### Example +Send a MessageBird Notification as an SMS: +```bash +# Assuming our {APIKey} is gank339l7jk3cjaE +# Assuming our {FromPhoneNo} - is in the US somewhere making our country code +1 +# - identifies as 1-123-555-1223 +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + msgbird://gank339l7jk3cjaE/11235551223 + +# the following would also have worked (spaces, brackets, +# dashes are accepted in a phone no field): +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + "msgbird://gank339l7jk3cjaE/1-(123) 555-1223" +``` \ No newline at end of file diff --git a/content/en/docs/Integrations/.Notifications/Notify_mqtt.md b/content/en/docs/Integrations/.Notifications/Notify_mqtt.md new file mode 100644 index 00000000..e4a4ab19 --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_mqtt.md @@ -0,0 +1,70 @@ +## MQTT Notifications +* **Source**: https://mqtt.org/ +* **Icon Support**: No +* **Message Format**: Text +* **Message Limit**: 268435455 Characters per Message + +MQTT Support requires **paho-mqtt** to work: +```bash +pip install paho-mqtt +``` + +### Syntax +Valid syntax is as follows: +- `mqtt://{host}/{topic}` +- `mqtt://{host}:{port}/{topic}` +- `mqtt://{user}@{host}:{port}/{topic}` +- `mqtt://{user}:{password}@{host}:{port}/{topic}` + +For a secure connection, just use `mqtts` instead. +- `mqtts://{host}/{topic}` +- `mqtts://{host}:{port}/{topic}` +- `mqtts://{user}@{host}:{port}/{topic}` +- `mqtts://{user}:{password}@{host}:{port}/{topic}` + +Secure connections should be referenced using **mqtts://** whereas insecure connections should be referenced via **mqtt://**. + +### Parameter Breakdown +| Variable | Required | Description +| ----------- | -------- | ----------- +| user | no | The user associated with your MQTT server. +| password | no | The password associated with your MQTT server. +| hostname | Yes | The MQTT server you're sending your notification to. +| port | No | The port the MQTT server is listening on.
By default the port is **1883** for **mqtt://** and **8883** for all **mqtts://** references. +| qos | No | The MQTT Quality of Service (QoS) setting. By default this is set to **0** (_zero_). +| version | No | The MQTT Protocol Version to use. By default this is set to **v3.1.1**. The other possible values are **v3.1** and **v5**. +| client_id | No | The MQTT client identifier to use when establishing a connection with the server. By default this is not set and a unique ID is generated per message. +| session | No | The MQTT session to maintain (associated with the client_id). If no client_id is specified, then this value is not considered. By default there is no session established and each connection made by apprise is unique. If you wish to enforce a session (associated with a provided client_id) then set this value to True. + +### Example +```bash +# Assuming we're just running an MQTT Server locally on your box +# Assuming we want to post our message to the topic: `my/topic` +apprise -vvv -b "whatever-payload-you-want" "mqtt://localhost/my/topic" +``` + +#### Sample Service Setup +I did the following to test this service locally (using docker): +```bash +# Pull in Mosquitto (v2.x at the time) - 2021 Sept 16th +docker pull eclipse-mosquitto + +# Set up a spot for our configuration +mkdir mosquitto +cd mosquitto +cat << _EOF > mosquitto.conf +persistence false +allow_anonymous true +connection_messages true +log_type all +listener 1883 +_EOF + +# Now spin up an instance (we can Ctrl-C out of when we're done): +docker run --name mosquitto -p 1883:1883 \ + --rm -v $(pwd)/mosquitto.conf:/mosquitto/config/mosquitto.conf \ + eclipse-mosquitto + +# All apprise testing can be done against this system's IP, such as: +# apprise -vvv -b "my=payload" mqtt://localhost/a/simple/topic +``` \ No newline at end of file diff --git a/content/en/docs/Integrations/.Notifications/Notify_msg91.md b/content/en/docs/Integrations/.Notifications/Notify_msg91.md new file mode 100644 index
00000000..e85f488a --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_msg91.md @@ -0,0 +1,38 @@ +## MSG91 +* **Source**: https://msg91.com +* **Icon Support**: No +* **Message Format**: Text +* **Message Limit**: 160 Characters per message + +### Account Setup +To use MSG91, you will need to acquire your _Authentication Key_. This is accessible via the [MSG91 Dashboard](https://world.msg91.com/user/index.php#api). + +### Syntax +Valid syntaxes are as follows: +* **msg91**://**{AuthKey}**/**{PhoneNo}** +* **msg91**://**{AuthKey}**/**{PhoneNo1}**/**{PhoneNo2}**/**{PhoneNoN}** +* **msg91**://**{SenderID}**@**{AuthKey}**/**{PhoneNo}** +* **msg91**://**{SenderID}**@**{AuthKey}**/**{PhoneNo1}**/**{PhoneNo2}**/**{PhoneNoN}** + +### Parameter Breakdown +| Variable | Required | Description +| --------------- | -------- | ----------- +| AuthKey | Yes | The _Authentication Key_ associated with your MSG91 account. This is available to you via the [MSG91 Dashboard](https://world.msg91.com/user/index.php#api). +| PhoneNo | Yes | A phone number MUST include the country code's dialling prefix as well when placed. This field is also very friendly and supports brackets, spaces and hyphens in the event you want to format the number in an easy to read fashion. +| Route | No | The SMS Route. This is an MSG91 configuration that defaults to **1** (Transactional) if not otherwise specified. +| Country | No | The SMS Country. This is an MSG91 optional configuration that can either be **91** if referencing India, **1** if the USA and **0** if International.
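To illustrate how the pieces in the table above fit together, here is a small sketch that assembles an MSG91 Apprise URL from its parts. It assumes (not confirmed by this page) that **Route** and **Country** map to `?route=` and `?country=` query arguments; `build_msg91_url` is a hypothetical helper, not part of Apprise:

```python
from urllib.parse import urlencode

def build_msg91_url(authkey, phone_nos, sender_id=None, **params):
    """Assemble a msg91:// Apprise URL (illustrative only)."""
    # An optional SenderID is prefixed with '@', per the syntax list above
    auth = f"{sender_id}@{authkey}" if sender_id else authkey
    url = f"msg91://{auth}/" + "/".join(phone_nos)
    # Optional settings (e.g. route, country) become query arguments
    args = {k: v for k, v in params.items() if v is not None}
    return url + ("?" + urlencode(args) if args else "")

print(build_msg91_url("gank339l7jk3cjaE", ["18005551223"], route="1"))
# msg91://gank339l7jk3cjaE/18005551223?route=1
```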
+ +#### Example +Send a MSG91 Notification as an SMS: +```bash +# Assuming our {AuthKey} is gank339l7jk3cjaE +# Assuming our {PhoneNo} - is in the US somewhere making our country code +1 +# - identifies as 800-555-1223 +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + msg91://gank339l7jk3cjaE/18005551223 + +# the following would also have worked (spaces, brackets, +# dashes are accepted in a phone no field): +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + "msg91://gank339l7jk3cjaE/1-(800) 555-1223" +``` \ No newline at end of file diff --git a/content/en/docs/Integrations/.Notifications/Notify_msteams.md b/content/en/docs/Integrations/.Notifications/Notify_msteams.md new file mode 100644 index 00000000..49735d1d --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_msteams.md @@ -0,0 +1,198 @@ +## Microsoft Teams Notifications +* **Source**: https://teams.microsoft.com +* **Icon Support**: Yes +* **Message Format**: Markdown +* **Message Limit**: 1000 Characters per message + +### Account Setup +Create a free account at https://teams.microsoft.com. + +You will need to create an **Incoming Webhook** to attach Apprise. This can be accomplished through the **app store** (bottom left hand side of the Slack-like interface); don't worry, it's free. From within the app store, search for **Incoming Webhook**. Once you click on it you can associate it with your team. You can also assign it a name and an avatar. Finally you will have to assign it to a channel. + +Alternatively, go to the channel where you want to add the webhook and select the ••• icon (More options) from the top navigation bar. Search for **Incoming Webhook** and select **Add**.
+ +When you've completed this, it will generate a URL for you that looks like: +``` +https://team-name.office.com/webhook/ \ + abcdefgf8-2f4b-4eca-8f61-225c83db1967@abcdefg2-5a99-4849-8efc-\ + c9e78d28e57d/IncomingWebhook/291289f63a8abd3593e834af4d79f9fe/\ + a2329f43-0ffb-46ab-948b-c9abdad9d643 +``` +Yes... The URL is that big... but at the end of the day this effectively equates to: +```https://{team}.office.com/webhook/{tokenA}/IncomingWebhook/{tokenB}/{tokenC}``` + +Hence: +The team name can be found in the generated webhook which looks like: +``` +# https://TEAM-NAME.office.com/webhook/ABCD/IncomingWebhook/DEFG/HIJK +# ^ ^ ^ ^ +# | | | | +# These are important <----------------^--------------------^----^ +``` + +vs the legacy URL which looked like (always stating `outlook` as the team name): +``` +# https://outlook.office.com/webhook/ABCD/IncomingWebhook/DEFG/HIJK +# ^ ^ ^ ^ +# | | | | +# legacy team reference: 'outlook' | | | +# | | | +# These are important <--------------^--------------------^----^ +``` + +So as you can see, we have 3 separate tokens. These are what you need to build your apprise URL with. In the above example the tokens are as follows: +1. **TokenA** is ```ABCD``` +2. **TokenB** is ```DEFG``` +3. **TokenC** is ```HIJK``` + +**Note:** Apprise supports this URL _as-is_ (_as of v0.7.7_); you no longer need to parse the URL any further. However there is slightly more overhead (internally) if you do use it this way. + +### Syntax +Valid syntax is as follows: +* `https://team-name.office.com/webhook/{tokenA}/IncomingWebhook/{tokenB}/{tokenC}` +* `msteams://{team}/{tokenA}/{tokenB}/{tokenC}/` + +The Legacy format is also still supported. The below URL would automatically set the team name to `outlook`. +* `msteams://{tokenA}/{tokenB}/{tokenC}/` + +### Parameter Breakdown +| Variable | Required | Description +| ----------- | -------- | ----------- +| team | Yes | Extracted from the *incoming-webhook*.
+| tokenA | Yes | The first part of 3 tokens provided to you after creating an *incoming-webhook* +| tokenB | Yes | The second part of 3 tokens provided to you after creating an *incoming-webhook* +| tokenC | Yes | The last part of 3 tokens provided to you after creating an *incoming-webhook* +| template | No | Optionally point to your own custom JSON formatted Microsoft Teams **MessageCard**; [See here for details on their formatting](https://docs.microsoft.com/en-us/microsoftteams/platform/webhooks-and-connectors/how-to/connectors-using). + +#### Example +Send a Microsoft Teams notification: +```bash +# Assuming our {team} is Apprise +# Assuming our {tokenA} is T1JJ3T3L2@DEFK543 +# Assuming our {tokenB} is A1BRTD4JD +# Assuming our {tokenC} is TIiajkdnlazkcOXrIdevi7F +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + msteams://Apprise/T1JJ3T3L2@DEFK543/A1BRTD4JD/TIiajkdnlazkcOXrIdevi7F/ +``` + +## Templating +### The `template` URL Argument +Define a `?template=` argument that points to a predefined **MessageCard** you've already prepared for Microsoft Teams. The `template` parameter can either point to a local file or a web-based URL. Its contents must be JSON (or you'll get an error trying to process it), and it at the very minimum must have the basic pattern: +```json +{ + "@type": "MessageCard", + "@context": "https://schema.org/extensions" +} +``` + +#### The Template Tokens +The `template=` you point to can either be fully populated and ready to go as-is (to be sent to the MSTeams chat server), or you can dynamically populate it on the fly each time you call Apprise. You do this by using the double curly brace `{{` and `}}` to surround a keyword that you invent; here is an example: +```json +{ + "@type": "MessageCard", + "@context": "https://schema.org/extensions", + "summary": "{{app_id}}", + "sections": [ + { + "activityImage": "{{app_image_url}}", + "activityTitle": "{{app_title}}", + "text": "Hello {{ target }}, how are you {{ whence }}?"
+ } + ] +} +``` + +In the above example, we introduce several tokens: `app_id`, `app_title`, `target` and `whence`. There are a few entries that will ALWAYS be set and you can not over-ride them. They are: +* **app_id**: The Application identifier; usually set to `Apprise`, but developers of custom applications may choose to over-ride this and place their name here. +* **app_desc**: Similar to the Application Identifier, this is the Application Description. It's usually just a slightly more descriptive alternative to the *app_id*. This is usually set to `Apprise Notification` unless it has been over-ridden by a developer. +* **app_color**: A hex code that identifies a colour associated with a message. For instance, `info` type messages are generally blue whereas `warning` ones are orange, etc. +* **app_type**: The message type itself; it may be `info`, `warning`, `success`, etc +* **app_title**: The actual title (`--title` or `-t` if from the command line) that was passed into the apprise notification when called. +* **app_body**: The actual body (`--body` or `-b` if from the command line) that was passed into the apprise notification when called. +* **app_image_url**: The image URL associated with the message type (`info`, `warning`, etc) if one exists and/or was not specified to be turned off from the URL (`image=no`) +* **app_url**: The URL associated with the Apprise instance (found in the **AppriseAsset()** object). Unless this has been over-ridden by a developer, its value will be `https://github.com/caronc/apprise`. + +Anything you invent outside of that is yours. So let's get back to the `target` and `whence` tokens we defined. Template tokens can be dynamically set by using the colon `:` operator before any URL argument you identify.
For example we can set these values on our Apprise URL like so: +* `msteams://credentials/?template=/path/to/template.json&:target=Chris&:whence=this%20afternoon` +* `msteams://credentials/?template=http://host/to/template.json&:target=Chris&:whence=this%20afternoon` + +A notification like so: +```bash +# using colons, we can set our target and whence dynamically from the +# command line: +apprise -t "My Title" -b "This is Ignored" \ + "msteams://credentials/?template=http://host/to/template.json&:target=Chris&:whence=this%20afternoon" +``` +Would post to MSTeams (with respect to our template above): +```json +{ + "@type": "MessageCard", + "@context": "https://schema.org/extensions", + "summary": "Apprise", + "sections": [ + { + "activityImage": null, + "activityTitle": "My Title", + "text": "Hello Chris, how are you this afternoon?" + } + ] +} +``` + +The default Apprise template (which remains unchanged by this feature) looks like this: +```json +{ + "@type": "MessageCard", + "@context": "https://schema.org/extensions", + "summary": "{{app_desc}}", + "themeColor": "{{app_color}}", + "sections": [ + { + "activityImage": null, + "activityTitle": "{{app_title}}", + "text": "{{app_body}}" + } + ] +} +``` + +#### Other Template Examples: +```json +{ + "@type": "MessageCard", + "@context": "https://schema.org/extensions", + "summary": "{{app_desc}}", + "themeColor": "{{app_color}}", + "sections": [ + { + "activityImage": null, + "activityTitle": "{{app_title}}", + "text": "{{app_body}}" + } + ], + "potentialAction": [{ + + "@type": "ActionCard", + "name": "Add a comment", + "inputs": [{ + "@type": "TextInput", + "id": "comment", + "isMultiline": false, + "title": "Add a comment here for this task" + }], + "actions": [{ + "@type": "HttpPOST", + "name": "Add comment", + "target": "{{ target }}" + }] + }] +} +``` + +#### Additional Template Notes +* Tokens can have white space around them for readability if you like.
Hence `{{ token }}` is no different than `{{token}}`. +* All tokens are escaped properly, so don't worry if your defined token has a double quote in it (`"`); it would be correctly escaped before it is sent upstream. +* Tokens ARE case sensitive, so `{{Token}}` NEEDS to be populated with a `:Token=` value on your URL. +* Tokens that are not matched correctly are simply not swapped and the {{keyword}} will remain as-is in the message. +* Apprise always requires you to specify a `--body` (`-b`) at a very minimum which can be optionally referenced as `{{app_body}}` in your template. Even if you choose not to use this token, you must still pass in something (anything) just to satisfy this requirement and make use of the template calls. \ No newline at end of file diff --git a/content/en/docs/Integrations/.Notifications/Notify_nexmo.md b/content/en/docs/Integrations/.Notifications/Notify_nexmo.md new file mode 100644 index 00000000..ed34c9d1 --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_nexmo.md @@ -0,0 +1,43 @@ +## Nexmo +* **Source**: https://nexmo.com/ +* **Icon Support**: No +* **Message Format**: Text +* **Message Limit**: 160 Characters per message + +### Account Setup +To use Nexmo, you will need to acquire your _API Key_ and _API Secret_. Both of these are accessible via the [Nexmo Dashboard](https://dashboard.nexmo.com/getting-started-guide).
+ +The **{FromPhoneNo}** must be a number provided to you through Nexmo. + +### Syntax +Valid syntaxes are as follows: +* **nexmo**://**{ApiKey}**:**{ApiSecret}**@**{FromPhoneNo}**/**{PhoneNo}** +* **nexmo**://**{ApiKey}**:**{ApiSecret}**@**{FromPhoneNo}**/**{PhoneNo1}**/**{PhoneNo2}**/**{PhoneNoN}** + +If no _ToPhoneNo_ is specified, then the _FromPhoneNo_ will be messaged instead; hence the following is a valid URL: +* **nexmo**://**{ApiKey}**:**{ApiSecret}**@**{FromPhoneNo}**/ + +### Parameter Breakdown +| Variable | Required | Description +| --------------- | -------- | ----------- +| ApiKey | Yes | The _API Key_ associated with your Nexmo account. This is available to you via the [Nexmo Dashboard](https://dashboard.nexmo.com/getting-started-guide). +| ApiSecret | Yes | The _API Secret_ associated with your Nexmo account. This is available to you via the [Nexmo Dashboard](https://dashboard.nexmo.com/getting-started-guide). +| FromPhoneNo | Yes | This must be a _From Phone Number_ that has been provided to you from the Nexmo website. +| PhoneNo | **\*No** | A phone number MUST include the country code's dialling prefix as well when placed.
This field is also very friendly and supports brackets, spaces and hyphens in the event you want to format the number in an easy to read fashion. + +#### Example +Send a Nexmo Notification as an SMS: +```bash +# Assuming our {APIKey} is bc1451bd +# Assuming our {APISecret} is gank339l7jk3cjaE +# Assuming our {FromPhoneNo} is +1-900-555-9999 +# Assuming our {PhoneNo} - is in the US somewhere making our country code +1 +# - identifies as 800-555-1223 +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + nexmo://bc1451bd:gank339l7jk3cjaE@19005559999/18005551223 + +# the following would also have worked (spaces, brackets, +# dashes are accepted in a phone no field): +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + "nexmo://bc1451bd:gank339l7jk3cjaE@1-(900) 555-9999/1-(800) 555-1223" +``` \ No newline at end of file diff --git a/content/en/docs/Integrations/.Notifications/Notify_nextcloud.md b/content/en/docs/Integrations/.Notifications/Notify_nextcloud.md new file mode 100644 index 00000000..8c586b73 --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_nextcloud.md @@ -0,0 +1,77 @@ +## Nextcloud Notifications +* **Source**: https://nextcloud.com +* **Icon Support**: No +* **Message Format**: Text +* **Message Limit**: 4000 Characters per message + +### Account Setup +The official [Notifications app](https://github.com/nextcloud/notifications) will need to be installed. An 'app password' (also referred to as 'device-specific' password/token) of the admin-user will need to be created; see the [documentation](https://docs.nextcloud.com/server/stable/user_manual/session_management.html#managing-devices) for more information. Don't forget to disable file system access for this password. + +### Syntax +Secure connections (via https) should be referenced using **nclouds://** whereas insecure connections (via http) should be referenced via **ncloud://**.
+ +Valid syntaxes are as follows: +* `ncloud://{hostname}/{notify_user}` +* `ncloud://{hostname}:{port}/{notify_user}` +* `ncloud://{admin_user}:{password}@{hostname}/{notify_user}` +* `ncloud://{admin_user}:{password}@{hostname}:{port}/{notify_user}` +* `nclouds://{hostname}/{notify_user}` +* `nclouds://{hostname}:{port}/{notify_user}` +* `nclouds://{admin_user}:{password}@{hostname}/{notify_user}` +* `nclouds://{admin_user}:{password}@{hostname}:{port}/{notify_user}` + +You can notify more than one user by simply chaining them at the end of the URL. +* `ncloud://{admin_user}:{password}@{hostname}:{port}/{notify_user1}/{notify_user2}/{notify_userN}` +* `nclouds://{admin_user}:{password}@{hostname}:{port}/{notify_user1}/{notify_user2}/{notify_userN}` + + +### Parameter Breakdown +| Variable | Required | Description +| ----------- | -------- | ----------- +| hostname | Yes | The hostname of the server hosting your Nextcloud service. +| admin_user | Yes | The administration user of the Nextcloud service you have set up. +| password | Yes | The administrator password associated with the **admin_user** for your Nextcloud account. +| notify_user | Yes | One or more users you wish to send your notification to. +| to | No | This is an alias to the notify_user variable. +| version | No | NextCloud changed their API around with v21. By default Apprise uses their latest API spec. If you're using an older version, you can set this value accordingly and Apprise will accommodate (switching back to the older API). + +#### Example +Send a secure Nextcloud notification to the user _chucknorris_: +```bash +# Assuming our {host} is localhost +# Assuming our {admin_user} is admin +# Assuming our (admin) {password} is 12345-67890-12345-67890-12345: +apprise nclouds://admin:12345-67890-12345-67890-12345@localhost/chucknorris +``` + +### Header Manipulation +Some users may require special HTTP headers to be present when they post their data to their server.
This can be accomplished by just sticking a hyphen (**-**) in front of any parameter you specify on your URL string.
```bash
# Below would set the header:
#    X-Token: abcdefg
#
# We want to send over an insecure connection (we'll use ncloud://)
# Assuming our {host} is localhost
# Assuming our {admin_user} is admin
# Assuming our (admin) {password} is 12345-67890-12345-67890-12345:
# We want to notify arnold
apprise -vv -t "Test Message Title" -b "Test Message Body" \
   "ncloud://admin:12345-67890-12345-67890-12345@localhost/arnold?-X-Token=abcdefg"

# Multiple headers just require more entries defined with a hyphen in front:
# Below would set the headers:
#    X-Token: abcdefg
#    X-Apprise: is great
#
# Assuming our {host} is localhost
# Assuming our {admin_user} is admin
# Assuming our (admin) {password} is 12345-67890-12345-67890-12345:
# We want to notify arnold
# (quote the URL so the shell doesn't treat the '&' as a control operator)
apprise -vv -t "Test Message Title" -b "Test Message Body" \
   "ncloud://admin:12345-67890-12345-67890-12345@localhost/arnold?-X-Token=abcdefg&-X-Apprise=is%20great"

# If we're using an older version of Nextcloud (their API changed) we may need
# to let Apprise know this (using the version= directive)
apprise -t "Title" -b "Body" "ncloud://admin:12345-67890-12345-67890-12345@localhost/arnold?version=20"
```
diff --git a/content/en/docs/Integrations/.Notifications/Notify_nextcloudtalk.md b/content/en/docs/Integrations/.Notifications/Notify_nextcloudtalk.md
new file mode 100644
index 00000000..ded90c4a
--- /dev/null
+++ b/content/en/docs/Integrations/.Notifications/Notify_nextcloudtalk.md
@@ -0,0 +1,66 @@
## Nextcloud Talk Notifications
* **Source**: https://nextcloud.com/talk
* **Icon Support**: No
* **Message Format**: Text
* **Message Limit**: 32000 Characters per message

### Account Setup
The official [Nextcloud Talk app](https://github.com/nextcloud/spreed) will need to be installed.
An 'app password' (also referred to as a 'device-specific' password/token) for one member of the chat will need to be created; see the [documentation](https://docs.nextcloud.com/server/stable/user_manual/session_management.html#managing-devices) for more information. Don't forget to disable file system access for this password.

### Syntax
Secure connections (via https) should be referenced using **nctalks://** whereas insecure connections (via http) should be referenced via **nctalk://**.

Valid syntaxes are as follows:
* `nctalk://{user}:{password}@{hostname}/{room_id}`
* `nctalk://{user}:{password}@{hostname}:{port}/{room_id}`
* `nctalks://{user}:{password}@{hostname}/{room_id}`
* `nctalks://{user}:{password}@{hostname}:{port}/{room_id}`

You can post into multiple chats by simply chaining them at the end of the URL:
* `nctalk://{user}:{password}@{hostname}:{port}/{room_id1}/{room_id2}/{room_id3}`
* `nctalks://{user}:{password}@{hostname}:{port}/{room_id1}/{room_id2}/{room_id3}`


### Parameter Breakdown
| Variable | Required | Description
| ----------- | -------- | -----------
| hostname | Yes | The hostname of the server hosting your Nextcloud service.
| user | Yes | The user of the Nextcloud service you have set up.
| password | Yes | The password associated with the **user** for your Nextcloud account.
| room_id | Yes | The room ID of the Nextcloud Talk chat you wish to post into.

#### Example
Send a secure Nextcloud Talk message to the room _93nfkdn3_:
```bash
# Assuming our {host} is localhost
# Assuming our {user} is user1
# Assuming our (user1) {password} is 12345-67890-12345-67890-12345:
apprise nctalks://user1:12345-67890-12345-67890-12345@localhost/93nfkdn3
```

### Header Manipulation
Some users may require special HTTP headers to be present when they post their data to their server. This can be accomplished by just sticking a hyphen (**-**) in front of any parameter you specify on your URL string.

```bash
# Below would set the header:
#    X-Token: abcdefg
#
# We want to send over a secure connection (we'll use nctalks://)
# Assuming our {host} is localhost
# Assuming our {user} is user1
# Assuming our (user1) {password} is 12345-67890-12345-67890-12345
# We want to notify Room _93nfkdn3_:
apprise -vv -t "Test Message Title" -b "Test Message Body" \
   "nctalks://user1:12345-67890-12345-67890-12345@localhost/93nfkdn3?-X-Token=abcdefg"

# Multiple headers just require more entries defined with a hyphen in front:
# Below would set the headers:
#    X-Token: abcdefg
#    X-Apprise: is great
#
# Assuming our {host} is localhost
# Assuming our {user} is user1
# Assuming our (user1) {password} is 12345-67890-12345-67890-12345
# We want to notify Room _93nfkdn3_:
# (quote the URL so the shell doesn't treat the '&' as a control operator)
apprise -vv -t "Test Message Title" -b "Test Message Body" \
   "nctalks://user1:12345-67890-12345-67890-12345@localhost/93nfkdn3?-X-Token=abcdefg&-X-Apprise=is%20great"
```
diff --git a/content/en/docs/Integrations/.Notifications/Notify_notica.md b/content/en/docs/Integrations/.Notifications/Notify_notica.md
new file mode 100644
index 00000000..12ec6877
--- /dev/null
+++ b/content/en/docs/Integrations/.Notifications/Notify_notica.md
@@ -0,0 +1,76 @@
## Notica Notifications
* **Source**: https://notica.us/
* **Icon Support**: No
* **Message Format**: Text
* **Message Limit**: 32768 Characters per message

### Account Setup
Notica doesn't require you to create an account at all. You just have to visit [their website](https://notica.us/) at least once to both:
1. Get your token
1. Enable Browser Notifications (to be sent from the Notica website)

The website will generate a URL for you to post to that looks like this:
`https://notica.us/?abc123`

This effectively equates to: `https://notica.us/?{token}`
Note: _disregard the question mark on the URL as it is not part of the token_.
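
If you are scripting this, pulling the token off the generated URL amounts to reading the URL's query component; a small sketch (the `notica_token` helper name is illustrative, not part of Apprise):

```python
from urllib.parse import urlparse

def notica_token(native_url: str) -> str:
    """Return the Notica token from a generated URL like https://notica.us/?abc123"""
    # The token is simply the query portion of the URL (everything after the '?')
    token = urlparse(native_url).query
    if not token:
        raise ValueError("no token found on the URL")
    return token

print(notica_token("https://notica.us/?abc123"))  # abc123
```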

From here you have two options: you can directly pass the Notica URL into Apprise exactly as it is shown to you on the website, or you can reconstruct it into an Apprise-based URL (which equates to _slightly_ faster load times) as: `notica://{token}`

### Syntax
Valid syntaxes are as follows:
* `https://notica.us/?{token}`
* `notica://{token}`

For self hosted solutions, you can use the following:
* `{schema}://{host}/{token}`
* `{schema}://{host}:{port}/{token}`
* `{schema}://{user}@{host}/{token}`
* `{schema}://{user}@{host}:{port}/{token}`
* `{schema}://{user}:{password}@{host}/{token}`
* `{schema}://{user}:{password}@{host}:{port}/{token}`

### Parameter Breakdown
| Variable | Required | Description
| ----------- | -------- | -----------
| token | Yes | The Token that was generated for you after visiting their [website](https://notica.us/). Alternatively this should be the token used by your self hosted solution.

A self hosted solution allows for a few more parameters:

| Variable | Required | Description
| ----------- | -------- | -----------
| hostname | Yes | The Web Server's hostname.
| port | No | The port our Web Server is listening on. By default the port is **80** for insecure (http) references and **443** for secure (https) ones.
| user | No | If your system is set up to use HTTP-AUTH, you can provide a _username_ for authentication to it.
| password | No | If your system is set up to use HTTP-AUTH, you can provide a _password_ for authentication to it.

#### Example
Send a notica notification:
```bash
# Assuming our {token} is abc123

apprise -vv -t "Test Message Title" -b "Test Message Body" \
   notica://abc123
```

### Header Manipulation
Self-hosted solutions may require users to set special HTTP headers when they post their data to their server. This can be accomplished by just sticking a hyphen (**-**) in front of any parameter you specify on your URL string.

```bash
# Below would set the header:
#    X-Token: abcdefg
#
# Assuming our {hostname} is localhost
# Assuming our {token} is abc123
apprise -vv -t "Test Message Title" -b "Test Message Body" \
   "notica://localhost/abc123/?-X-Token=abcdefg"

# Multiple headers just require more entries defined with a hyphen in front:
# Below would set the headers:
#    X-Token: abcdefg
#    X-Apprise: is great
#
# Assuming our {hostname} is localhost
# Assuming our {token} is abc123
apprise -vv -t "Test Message Title" -b "Test Message Body" \
   "notica://localhost/abc123/?-X-Token=abcdefg&-X-Apprise=is%20great"
```
diff --git a/content/en/docs/Integrations/.Notifications/Notify_notifico.md b/content/en/docs/Integrations/.Notifications/Notify_notifico.md
new file mode 100644
index 00000000..e2992428
--- /dev/null
+++ b/content/en/docs/Integrations/.Notifications/Notify_notifico.md
@@ -0,0 +1,63 @@
## Notifico Notifications
* **Source**: https://n.tkte.ch/
* **Icon Support**: No
* **Message Format**: Text
* **Message Limit**: 512 Characters per message

Notifico allows you to send a message to one or more IRC Channel(s).

## Account Setup
1. Visit https://n.tkte.ch and sign up for an account
1. Create a project; either manually or sync with GitHub
1. From within the project, you can create a **Plain Text Message Hook**
![notifico plain text hook](https://user-images.githubusercontent.com/850374/66708086-3f17cb00-ed19-11e9-8e37-bc7e6ba5a3cd.png)

Once your hook has been created successfully, you can retrieve the link you need to send your messages to from the main project page. Apprise will need this:
![notifico hook capture instructions](https://user-images.githubusercontent.com/850374/66708104-6c647900-ed19-11e9-895e-d5f755d05079.png)

The URL will look something like this:
```
 https://n.tkte.ch/h/2144/uJmKaBW9WFk42miB146ci3Kj
                     ^    ^
                     |    |
            project id    message hook
```

This URL effectively equates to:
```https://n.tkte.ch/h/{ProjectID}/{MessageHook}```

If you want to convert this to an Apprise URL, do the following:
The last part of the URL you're given makes up the 2 arguments that are most important to us. In the above example the arguments are as follows:
1. **ProjectID** is ```2144```
2. **MessageHook** is ```uJmKaBW9WFk42miB146ci3Kj```

## Syntax
You can directly pass in the native URL as retrieved from the website if you like:
* `https://n.tkte.ch/h/{ProjectID}/{MessageHook}`

Or you can format it for Apprise (there is slightly less overhead if you do this):
* `notifico://{ProjectID}/{MessageHook}`

You can optionally turn colors off (by default they are turned on):
* `notifico://{ProjectID}/{MessageHook}?color=off`

By default Apprise will send a prefix with each message it sends; you can turn this off too as follows:
* `notifico://{ProjectID}/{MessageHook}?prefix=off`

### Parameter Breakdown
| Variable | Required | Description
| ----------- | -------- | -----------
| ProjectID | Yes | The project ID is an integer and makes up the first part of the provided Notifico Message Hook URL.
| MessageHook | Yes | The message hook can be found at the end of the provided Notifico Message Hook URL.
| color | No | Uses IRC coloring to provide a richer experience. It also allows the parsing of IRC colors found in the notification passed in. You must ensure the **Color** checkbox is selected when setting up your Message Hook for this to work. By default this is set to **Yes**.
| prefix | No | All messages sent to IRC by default have a prefix that helps identify the type of message (info, error, warning, or success) as well as the system performing the notification. By default this is set to **Yes**.
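
The native-to-Apprise URL conversion described above is purely mechanical; a rough sketch of it in Python (the `to_apprise_url` helper is illustrative only, not part of the Apprise API):

```python
from urllib.parse import urlparse

def to_apprise_url(native_url: str) -> str:
    """Convert a native Notifico hook URL into its Apprise form:
    https://n.tkte.ch/h/{ProjectID}/{MessageHook} -> notifico://{ProjectID}/{MessageHook}
    """
    # The path of the native URL looks like: /h/{ProjectID}/{MessageHook}
    parts = urlparse(native_url).path.strip("/").split("/")
    if len(parts) != 3 or parts[0] != "h":
        raise ValueError("not a Notifico message hook URL")
    project_id, message_hook = parts[1], parts[2]
    return f"notifico://{project_id}/{message_hook}"

print(to_apprise_url("https://n.tkte.ch/h/2144/uJmKaBW9WFk42miB146ci3Kj"))
# notifico://2144/uJmKaBW9WFk42miB146ci3Kj
```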

#### Example
Send a Notifico notification:
```bash
# The following sends a notifico notification
# Assuming our {ProjectID} is 2144
# Assuming our {MessageHook} is uJmKaBW9WFk42miB146ci3Kj
apprise -vv -t "Test Message Title" -b "Test Message Body" \
   notifico://2144/uJmKaBW9WFk42miB146ci3Kj
```
diff --git a/content/en/docs/Integrations/.Notifications/Notify_office365.md b/content/en/docs/Integrations/.Notifications/Notify_office365.md
new file mode 100644
index 00000000..229931ec
--- /dev/null
+++ b/content/en/docs/Integrations/.Notifications/Notify_office365.md
@@ -0,0 +1,61 @@
## Office 365 Notifications
* **Source**: n/a
* **Icon Support**: no
* **Attachment Support**: no
* **Message Format**: Text
* **Message Limit**: 32768 Characters per message

**Note:** At this time, this plugin requires that you have administrative permissions on your Azure email infrastructure.

## Syntax:

- `o365://{tenant_id}:{account_email}/{client_id}/{client_secret}/`
- `o365://{tenant_id}:{account_email}/{client_id}/{client_secret}/{targets}`

### Parameter Breakdown
| Variable | Required | Description
| ----------- | -------- | -----------
| tenant_id | Yes | The **Tenant ID** associated with the Azure Application you created. This can also be referred to as your **Directory ID**.
| account_email | Yes | The **Email** associated with your Azure account.
| client_id | Yes | The **Client ID** associated with the Azure Application you created. This can also be referred to as your **Application ID**.
| client_secret | Yes | You will need to generate one of these; this can be done through the Azure portal (also documented below).
| from | No | If you want the email's *ReplyTo* address to be something other than your own email address, then you can specify it here.
| to | No | This enforces (or sets) the address the email is sent to.
By default the email is sent to the address identified by the `account_email`.

**Notes:**
* If no `targets` are specified, then the notification is just sent to the address identified by `{account_email}`.
* Unfortunately the `client_secret` contains a lot of characters that can drastically conflict with standard URL rules (and thus Apprise might have difficulty detecting your client secret). The `?` and `@` characters can get generated by Microsoft and will almost definitely cause you issues.
   * Consider encoding this `client_secret` before putting it into your Apprise URL. Encoding the URL can be as simple as just pasting it into the form on [this website](https://www.url-encode-decode.com/).
   * You can also just escape these characters on your Apprise URL yourself manually ([explained here](https://github.com/caronc/apprise/wiki/Troubleshooting#special-characters-and-url-conflicts)). Simply swap all instances of:
      * `?` with `%3F`
      * `@` with `%40`

### Tenant ID, Client ID, and Secret ID Acquisition
You will need to have a valid Microsoft Personal Account AND, unfortunately, you will require Administrative access (to access the **Mail.Send** Application Permission). More details can be [found here](https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-v2-protocols-oauth-client-creds) about registering your app with Azure.

But basically it amounts to:

1. From the **Azure Portal** go to **Microsoft Active Directory** -> **App Registrations** ([alt link](https://apps.dev.microsoft.com/portal/register-app))
1. Click **new** -> _give any name (your choice) in the Name field_ -> select _personal Microsoft accounts only_ -> **Register**
1. From here (the **Overview** panel) you can acquire both the Directory (`tenant`) ID and the Application (`client_id`) you will need.
1.
To create your `client_secret`, go to **Active Directory** -> **Certificates & Secrets** -> **New client secret**
   * The `client_secret` is an auto-generated string which may have `@` and/or `?` character(s) in it. You will need to encode these characters when pasting this into your Apprise URL. See the note section above for more details on how to do this.
1. Now you need to set permissions: **Active Directory** -> **API permissions** -> **Add permission**.
1. Click on **Microsoft Graph**
1. Click on **Application Permissions** and search for **Mail.Send**; you will want to check the box on the match found.
1. Set the Redirect URI (Web) to the following: `https://login.microsoftonline.com/common/oauth2/nativeclient`
   1. You can do this from **Authentication** -> **Add a platform**
   1. Choose **Web Application**.
   1. Enter the URI `https://login.microsoftonline.com/common/oauth2/nativeclient`
1. Now you're good to go. :slightly_smiling_face:

#### Example
Send an email notification through your Office 365 account:
```bash
# Assuming our {tenant_id} is ab-cd-ef-gh
# Assuming our {account_email} is chuck.norris@roundhouse.kick
# Assuming our {client_id} is zz-yy-xx-ww
# Assuming our {client_secret} is rt/djd/jjd
apprise -vv -t "Test Message Title" -b "Test Message Body" \
   o365://ab-cd-ef-gh:chuck.norris@roundhouse.kick/zz-yy-xx-ww/rt/djd/jjd
```
diff --git a/content/en/docs/Integrations/.Notifications/Notify_onesignal.md b/content/en/docs/Integrations/.Notifications/Notify_onesignal.md
new file mode 100644
index 00000000..24de857d
--- /dev/null
+++ b/content/en/docs/Integrations/.Notifications/Notify_onesignal.md
@@ -0,0 +1,64 @@
## OneSignal Notifications
* **Source**: https://onesignal.com
* **Icon Support**: Yes
* **Message Format**: Text
* **Message Limit**: 32768 Characters per Message

### Account Setup
1. Visit https://onesignal.com to create your account.
2.
To acquire your `{app_id}` and `{apikey}`, click on **Keys and IDs**.
![OneSignalAppKeys](https://user-images.githubusercontent.com/850374/103224241-65616080-48f5-11eb-97c0-fa32a28524b4.png)

### Syntax
The syntax is as follows:
* `onesignal://{app_id}@{apikey}/#{include_segment}`
* `onesignal://{app_id}@{apikey}/#{include_segment1}/#{include_segment2}/#{include_segmentN}`
* `onesignal://{app_id}@{apikey}/{player_id}/`
* `onesignal://{app_id}@{apikey}/{player_id1}/{player_id2}/{player_idN}`
* `onesignal://{app_id}@{apikey}/@{user_id}/`
* `onesignal://{app_id}@{apikey}/@{user_id1}/@{user_id2}/@{user_idN}`
* `onesignal://{app_id}@{apikey}/{email}/`
* `onesignal://{app_id}@{apikey}/{email1}/{email2}/{emailN}`

You can also mix/match the targets:
* `onesignal://{app_id}@{apikey}/{email}/@{user_id}/#{include_segment}/{player_id}`

If you defined a template with OneSignal, you can use it as well:
* `onesignal://{template_id}:{app_id}@{apikey}/#{include_segment}`
* `onesignal://{template_id}:{app_id}@{apikey}/#{include_segment1}/#{include_segment2}/#{include_segmentN}`
* `onesignal://{template_id}:{app_id}@{apikey}/{player_id}/`
* `onesignal://{template_id}:{app_id}@{apikey}/{player_id1}/{player_id2}/{player_idN}`
* `onesignal://{template_id}:{app_id}@{apikey}/@{user_id}/`
* `onesignal://{template_id}:{app_id}@{apikey}/@{user_id1}/@{user_id2}/@{user_idN}`
* `onesignal://{template_id}:{app_id}@{apikey}/{email}/`
* `onesignal://{template_id}:{app_id}@{apikey}/{email1}/{email2}/{emailN}`

### Parameter Breakdown
| Variable | Required | Description
| ----------- | -------- | -----------
| app_id | Yes | This is the Application ID associated with your OneSignal account.
| apikey | Yes | This is the API Key associated with your OneSignal account.
| template_id | No | The UUID Template ID you wish to use.
| player_id | No | A Player ID to notify.
| user_id | No | A User ID to notify.
**Note**: these must be prefixed with an `@` symbol or they will be interpreted as a Player ID
| include_segment | No | An include segment.
**Note**: these must be prefixed with a `#` symbol or they will be interpreted as a Player ID
| email | No | An email to notify.
| subtitle | No | The subtitle of your push. Only appears on iOS devices.
| language | No | The 2-character language code to push your message as. By default this is set to `en` if not specified.
| image | No | Whether to include the icon/image associated with the message. By default this is set to `yes`.
| batch | No | Set to **Yes** if you want all identified targets to be notified in batches (instead of individually). By default this is set to **No**.

#### Example
Send a OneSignal notification to all devices associated with a project:
```bash
# Assume:
# - our {app_id} is abc123
# - our {apikey} is a6k4ABnck26hDh8AA3EDHoOVdDEUlw3nty
# - our {player_id} is 3456-2345-a3ef
apprise -vv -t "Test Message Title" -b "Test Message Body" \
   onesignal://abc123@a6k4ABnck26hDh8AA3EDHoOVdDEUlw3nty/3456-2345-a3ef

# Override the subtitle (iOS users only) by doing the following:
# You must use URL encoded strings; below, the spaces are swapped with %20
apprise -vv -t "Test Message Title" -b "Test Message Body" \
   "onesignal://abc123@a6k4ABnck26hDh8AA3EDHoOVdDEUlw3nty/3456-2345-a3ef?subtitle=A%20Different%20Subtitle"
```
diff --git a/content/en/docs/Integrations/.Notifications/Notify_opsgenie.md b/content/en/docs/Integrations/.Notifications/Notify_opsgenie.md
new file mode 100644
index 00000000..422abc63
--- /dev/null
+++ b/content/en/docs/Integrations/.Notifications/Notify_opsgenie.md
@@ -0,0 +1,69 @@
## Opsgenie Notifications
* **Source**: https://www.opsgenie.com
* **Icon Support**: No
* **Message Format**: Text
* **Message Limit**: 15000 Characters per Message

### Account Setup
1. Visit https://www.opsgenie.com to create your account.
2.
[Generate your Integration API Key](https://app.opsgenie.com/settings/integration/add/API/)

**Note**: You must generate an Integration API Key; this is not to be confused with the Opsgenie Management API Key.

### Syntax
The syntax is as follows:
- `opsgenie://{apikey}/`
- `opsgenie://{apikey}/@{user}`
- `opsgenie://{apikey}/@{user1}/@{user2}/@{userN}`
- `opsgenie://{apikey}/*{schedule}`
- `opsgenie://{apikey}/*{schedule1}/*{schedule2}/*{scheduleN}`
- `opsgenie://{apikey}/^{escalation}`
- `opsgenie://{apikey}/^{escalation1}/^{escalation2}/^{escalationN}`
- `opsgenie://{apikey}/#{team}`
- `opsgenie://{apikey}/#{team1}/#{team2}/#{teamN}`

**Note:** If no prefix character is specified, then the target is presumed to be a user (an `@` symbol is presumed to be in front of it).

You can also mix/match the targets:
- `opsgenie://{apikey}/@{user}/#{team}/*{schedule}/^{escalation}`

### Parameter Breakdown
| Variable | Required | Description
| ----------- | -------- | -----------
| apikey | Yes | This is the API Key associated with your Opsgenie account.
| user | No | The user you wish to notify; this can be a `username`, `email`, or `uuid4`. This is the assumed default target type to notify, but it is advised you prefix all users with an `@` symbol to eliminate any ambiguity.
| team | No | The team you wish to notify; this can be the team name itself, or the `uuid4` associated with it.
**Note:** Teams must be prefixed with a `#` symbol.
| schedule | No | The schedule you wish to notify; this can be the schedule name itself, or the `uuid4` associated with it.
**Note:** Schedules must be prefixed with a `*` symbol.
| escalation | No | The escalation you wish to notify; this can be the escalation name itself, or the `uuid4` associated with it.
**Note:** Escalations must be prefixed with a `^` symbol.
| region | No | The 2-character region code. By default this is set to `us` if not specified. Europeans must set this to `eu` for things to work correctly.
| batch | No | Set to **Yes** if you want all identified targets to be notified in batches (instead of individually). By default this is set to **No**.
| tags | No | A comma-separated list of tags you can associate with your Opsgenie message.
| priority | No | The priority to associate with the message. It is on a scale between 1 and 5. The default value is `3` if not specified.
| alias | No | The alias to associate with the message.
| entity | No | The entity to associate with the message.

#### Example
Send an Opsgenie notification:
```bash
# Assuming our {apikey} is a6k4ABnck26hDh8AA3EDHoOVdDEUlw3nty
apprise -vv -t "Test Message Title" -b "Test Message Body" \
   opsgenie://a6k4ABnck26hDh8AA3EDHoOVdDEUlw3nty
```

### Include Details (Key/Value Pairs)
Opsgenie allows you to provide details composed of key/value pairs that you can send with your messages. This can be accomplished by just sticking a plus symbol (**+**) in front of any parameter you specify on your URL string.

```bash
# Below would set the key/value pair of foo=bar:
# Assuming our {apikey} is a6k4ABnck26hDh8AA3EDHoOVdDEUlw3nty
apprise -vv -t "Test Message Title" -b "Test Message Body" \
   "opsgenie://a6k4ABnck26hDh8AA3EDHoOVdDEUlw3nty/?+foo=bar"

# Multiple key/value pairs just require more entries:
# Below would set the key/value pairs of:
#    foo=bar
#    apprise=awesome
#
# Assuming our {apikey} is a6k4ABnck26hDh8AA3EDHoOVdDEUlw3nty
apprise -vv -t "Test Message Title" -b "Test Message Body" \
   "opsgenie://a6k4ABnck26hDh8AA3EDHoOVdDEUlw3nty/?+foo=bar&+apprise=awesome"
```
diff --git a/content/en/docs/Integrations/.Notifications/Notify_parseplatform.md b/content/en/docs/Integrations/.Notifications/Notify_parseplatform.md
new file mode 100644
index 00000000..177523fd
--- /dev/null
+++ b/content/en/docs/Integrations/.Notifications/Notify_parseplatform.md
@@ -0,0 +1,28 @@
## Parse Platform Notifications
* **Source**: https://parseplatform.org/
* **Icon Support**: No
* **Message Format**: Text
* **Message Limit**: 32768 Characters per Message

### Syntax
Channels are optional; if no channel is specified then you are just personally notified.

- `parsep://{app_id}:{master_key}@{hostname}`
- `parseps://{app_id}:{master_key}@{hostname}`

### Parameter Breakdown
| Variable | Required | Description
| ----------- | -------- | -----------
| app_id | Yes | The Application ID
| master_key | Yes | This is the Master Key associated with your account
| hostname | Yes | The Hostname of your Parse Platform Server

#### Example
Send a Parse Platform notification:
```bash
# Assume:
# - our {app_id} is abc123
# - our {master_key} is a6k4ABnck26hDh8AA3EDHoOVdDEUlw3nty
# - our {hostname} is parseplatform.local
apprise -vv -t "Test Message Title" -b "Test Message Body" \
   parsep://abc123:a6k4ABnck26hDh8AA3EDHoOVdDEUlw3nty@parseplatform.local
```
diff --git a/content/en/docs/Integrations/.Notifications/Notify_popcornnotify.md b/content/en/docs/Integrations/.Notifications/Notify_popcornnotify.md
new file mode 100644
index 00000000..6466b1e2
--- /dev/null
+++ b/content/en/docs/Integrations/.Notifications/Notify_popcornnotify.md
@@ -0,0 +1,50 @@
## PopcornNotify Notifications
* **Source**: https://popcornnotify.com/
* **Icon Support**: No
* **Message Format**: Text
* **Message Limit**: 32768 Characters per message

### Account Setup
Get yourself an API Key from [their website](https://popcornnotify.com/) and you're ready to use the service.

### Syntax
Valid syntaxes are as follows:
* `popcorn://{ApiKey}/{PhoneNo}/`
* `popcorn://{ApiKey}/{PhoneNo1}/{PhoneNo2}/{PhoneNoN}/`
* `popcorn://{ApiKey}/{Email}/`
* `popcorn://{ApiKey}/{Email1}/{Email2}/{EmailN}/`

You can mix and match the information too:
* `popcorn://{ApiKey}/{PhoneNo1}/{Email1}/{EmailN}/{PhoneNoN}`

### Parameter Breakdown
| Variable | Required | Description
| ----------- | -------- | -----------
| ApiKey | Yes | The Personal API Token associated with your account.
| PhoneNo | No | A Phone Number you wish to notify (via SMS).

| Email | No | The Email address you wish to notify.
| to | No | This is an alias to the Phone/Email variable.
| batch | No | PopcornNotify allows a batch mode. If you identify more than one phone number and/or email, you can send to all of the targets you identify on the URL in a single shot instead of the normal _Apprise_ approach (which sends to them one by one). Enabling batch mode has both its pros and cons. By default batch mode is disabled.

#### Example
Send a PopcornNotify notification as an SMS:
```bash
# Assuming our {ApiKey} is abc123456
# Assuming our {PhoneNo} - is in the US somewhere making our country code +1
#                        - identifies as 800-555-1223
apprise -vv -t "Test Message Title" -b "Test Message Body" \
   popcorn://abc123456/18005551223

# the following would also have worked (spaces, brackets, and
# dashes are accepted in a phone no field); quote the URL so the
# shell doesn't misinterpret the spaces and brackets:
apprise -vv -t "Test Message Title" -b "Test Message Body" \
   "popcorn://abc123456/1-(800) 555-1223"
```

You can also send emails just as easily:
```bash
# Assuming our {ApiKey} is abc123456
# Assuming our {Email} is user@example.com
apprise -vv -t "Test Message Title" -b "Test Message Body" \
   popcorn://abc123456/user@example.com
```
diff --git a/content/en/docs/Integrations/.Notifications/Notify_prowl.md b/content/en/docs/Integrations/.Notifications/Notify_prowl.md
new file mode 100644
index 00000000..e7348cec
--- /dev/null
+++ b/content/en/docs/Integrations/.Notifications/Notify_prowl.md
@@ -0,0 +1,28 @@
## Prowl Notifications
* **Source**: https://www.prowlapp.com/
* **Icon Support**: No
* **Message Format**: Text
* **Message Limit**: 10000 Characters per message

Prowl requires users to pre-register themselves at [prowlapp.com](https://www.prowlapp.com/) first.

### Syntax
Valid syntaxes are as follows:
* **prowl**://**{apikey}**
* **prowl**://**{apikey}**/**{providerkey}**
* **prowl**://**{apikey}**/?**priority={priority}**

### Parameter Breakdown
| Variable | Required | Description
| ----------- | -------- | -----------
| apikey | Yes | The API Key provided to you after you create yourself a Prowl account.
| providerkey | No | The Provider Key is only required if you have been whitelisted.
| priority | No | Can be **low**, **moderate**, **normal**, **high**, or **emergency**; the default is **normal** if a priority isn't specified.

#### Example
Send a Prowl notification:
```bash
# Assuming our {apikey} is adf9dfjkj24jkafkljkf6f
apprise -vv -t "Test Message Title" -b "Test Message Body" \
   prowl://adf9dfjkj24jkafkljkf6f
```
diff --git a/content/en/docs/Integrations/.Notifications/Notify_pushalot.md b/content/en/docs/Integrations/.Notifications/Notify_pushalot.md
new file mode 100644
index 00000000..dec2b185
--- /dev/null
+++ b/content/en/docs/Integrations/.Notifications/Notify_pushalot.md
@@ -0,0 +1,34 @@
## :skull: Pushalot Notifications
* **Source**: https://pushalot.com (also see https://ifttt.com/pushalot)
* **Icon Support**: Yes
* **Message Format**: Text
* **Message Limit**: 32768 Characters per message
* **Service End Date**: _Somewhere around_ **Nov 2016** (_Estimate_)

### Service End Reason
There isn't much to go on here; here was their [last public tweet](https://twitter.com/pushalotapp/status/534758031431860224), made on November 18th, 2014:
![pushalot-last-tweet](https://user-images.githubusercontent.com/850374/53437921-a07a6c00-39cc-11e9-95cc-a120476f292e.png)

There is also [this reddit post](https://www.reddit.com/r/pushalot/comments/5ctstq/pushalot_gone/) which hints that the permanent shutdown occurred sometime in early November 2016.

Presumably service was never restored and they just closed up shop.

## Legacy Setup Details

There isn't too much configuration for Pushalot notifications. The message is basically just passed to your online Pushalot account and then gets relayed to your Microsoft device(s) from there.

### Syntax
Valid syntax is as follows:
* **palot**://**{authorizationtoken}**

### Parameter Breakdown
| Variable | Required | Description
| ----------- | -------- | -----------
| authorizationtoken | Yes | The authorization token associated with your Pushalot account. This is an alpha-numeric string (32 characters in length).

#### Example
Send a Pushalot notification:
```bash
# Assuming our {authorizationtoken} is 1f418df7577e32b89ac6511f2eb9aa68
apprise palot://1f418df7577e32b89ac6511f2eb9aa68
```
diff --git a/content/en/docs/Integrations/.Notifications/Notify_pushbullet.md b/content/en/docs/Integrations/.Notifications/Notify_pushbullet.md
new file mode 100644
index 00000000..17514af6
--- /dev/null
+++ b/content/en/docs/Integrations/.Notifications/Notify_pushbullet.md
@@ -0,0 +1,38 @@
## Pushbullet Notifications
* **Source**: https://www.pushbullet.com/
* **Icon Support**: No
* **Attachment Support**: Yes
* **Message Format**: Text
* **Message Limit**: 32768 Characters per Message

### Account Setup
Pushbullet accounts are free; the Pro extension is optional and grants you a larger message limit and a few other features. Once you've signed up on https://www.pushbullet.com/ you can generate your API Key by accessing your [account settings](https://www.pushbullet.com/#settings) and clicking on **Create Access Token**.
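
Under the hood, that Access Token is supplied as an HTTP header against Pushbullet's `v2/pushes` endpoint. A minimal sketch of the request that gets built for a plain note (shown for illustration only; `build_push` is a hypothetical helper and nothing is actually sent here):

```python
import json

def build_push(access_token: str, title: str, body: str):
    """Assemble (but don't send) a Pushbullet 'note' push request."""
    url = "https://api.pushbullet.com/v2/pushes"
    headers = {
        "Access-Token": access_token,   # the token from your account settings
        "Content-Type": "application/json",
    }
    payload = json.dumps({"type": "note", "title": title, "body": body})
    return url, headers, payload

url, headers, payload = build_push("abcdefghijklmno", "Title", "Body")
print(url)  # https://api.pushbullet.com/v2/pushes
```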
+
+### Syntax
+Valid syntaxes are as follows:
+* **pbul**://**{accesstoken}**
+* **pbul**://**{accesstoken}**/**{device_id}**
+* **pbul**://**{accesstoken}**/**#{channel}**
+* **pbul**://**{accesstoken}**/**{email}**
+
+You can also form any combination of the above and perform updates from one URL:
+* **pbul**://**{accesstoken}**/**{device_id}**/**#{channel}**/**{email}**
+
+If no **{device_id}**, **#{channel}**, or **{email}** is specified, then the default configuration is to send to _all_ of your configured _devices_.
+
+### Parameter Breakdown
+| Variable | Required | Description
+| ----------- | -------- | -----------
+| accesstoken | Yes | The Access Token can be generated on the Settings page of your Pushbullet account. You must have an access token for this notification service to work.
+| device_id | No | Devices associated with your Pushbullet account can be found in your _Settings_.
+| channel | No | Channels must be prefixed with a hash (#) or they will be interpreted as a device_id. Channels must be registered with your Pushbullet account to work.
+| email | No | Emails only work if you've registered them with your Pushbullet account.
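Because the URL is just a chain of path targets, it can help to assemble it from variables before handing it to `apprise`. A minimal shell sketch; the token, device, channel, and email values below are all hypothetical placeholders:

```shell
# All values below are hypothetical placeholders
ACCESSTOKEN="o.AbCdEfGhIjKlMnO"
DEVICE="pixel"
CHANNEL="#mychannel"
EMAIL="someone@example.com"

# Chain a device, a channel, and an email into one pbul:// URL
URL="pbul://${ACCESSTOKEN}/${DEVICE}/${CHANNEL}/${EMAIL}"
echo "${URL}"
# → pbul://o.AbCdEfGhIjKlMnO/pixel/#mychannel/someone@example.com
```

The resulting URL could then be passed as the last argument to `apprise`.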
+ + +#### Example +Send a Pushbullet notification to all devices: +```bash +# Assuming our {accesstoken} is abcdefghijklmno +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + pbul://abcdefghijklmno +``` \ No newline at end of file diff --git a/content/en/docs/Integrations/.Notifications/Notify_pushed.md b/content/en/docs/Integrations/.Notifications/Notify_pushed.md new file mode 100644 index 00000000..87a5bea3 --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_pushed.md @@ -0,0 +1,49 @@ +## Pushed Notifications +* **Source**: https://pushed.co/ +* **Icon Support**: _From within the pushed.co website you can set up an icon._ +* **Message Format**: Text +* **Message Limit**: 160 Characters per Message + +### Account Setup +You'll want to _Request Developer Access_ which is asked of you when you first log in to the site. Check your email because you'll need to verify your account with them. + +#### First: Create an App: +Once this is done you will have access to the [apps](https://account.pushed.co/apps) where you can create a new one if you don't already have one. + +Once this is done, you'll get access to an: +* Application Key: **{app_key}** +* Application Secret: **{app_secret}** + +You'll also need something to notify; so once you've created an account and an app, you'll also need to retrieve their mobile app (for either [Android](https://play.google.com/store/apps/details?id=co.pushed.GetPushed) or [iOS](https://itunes.apple.com/us/app/get-pushed/id804777699?mt=8&uo=6&at=&ct=)) and log in. + +Subscribe to this App; there is a _Subscription Link_ you can follow right from the settings page of the App you just created. You will need at least one subscription to use the notification service. 
+
+### Syntax
+Valid syntax is as follows:
+* **pushed**://**{app_key}**/**{app_secret}**
+* **pushed**://**{app_key}**/**{app_secret}**/**@{user_pushed_id}**
+* **pushed**://**{app_key}**/**{app_secret}**/**@{user_pushed_id1}**/**@{user_pushed_id2}**/**@{user_pushed_idN}**
+* **pushed**://**{app_key}**/**{app_secret}**/**#{channel_alias}**
+* **pushed**://**{app_key}**/**{app_secret}**/**#{channel_alias1}**/**#{channel_alias2}**/**#{channel_aliasN}**
+
+You can also form any combination of the above and perform updates from one URL:
+* **pushed**://**{app_key}**/**{app_secret}**/**@{user_pushed_id}**/**#{channel_alias}**/
+
+If neither a **@{user_pushed_id}** nor a **#{channel}** is specified, then the default configuration is to send to just the _App_ you provided keys for.
+
+### Parameter Breakdown
+| Variable | Required | Description
+| ----------- | -------- | -----------
+| app_key | Yes | The Application Key can be generated on the Settings page of your Pushed account. You must have an application key for this notification service to work.
+| app_secret | Yes | The Application Secret can be generated on the Settings page of your Pushed account. You must have an application secret for this notification service to work.
+| user_pushed_id | No | Users must be prefixed with an _at_ (@) character or they will be ignored. You can identify users here by their Pushed ID.
+| channel_alias | No | Channels must be prefixed with a _hash tag_ (#) or they will be ignored. Channels must be registered with your Pushed account to work. This must be the channel alias itself, not the channel. The alias can be retrieved from the channel settings from within your pushed.co account.
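The same kind of composition applies here; a small shell sketch building a channel-targeted URL (the key, secret, and alias below are hypothetical):

```shell
# All values below are hypothetical placeholders
APP_KEY="sopJo0dVKVC9YK1F5wDQ"
APP_SECRET="notARealSecret"
CHANNEL_ALIAS="#news"

# Target a channel (by alias) in addition to the App itself
URL="pushed://${APP_KEY}/${APP_SECRET}/${CHANNEL_ALIAS}"
echo "${URL}"
# → pushed://sopJo0dVKVC9YK1F5wDQ/notARealSecret/#news
```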
+
+#### Example
+Send a Pushed notification:
+```bash
+# Assuming our {app_key} is sopJo0dVKVC9YK1F5wDQ
+# Assuming our {app_secret} is KWEtXxVm1PtDTTrKaEM49DhBd8MJvSMCHSvunPerbCf1MaNLO300roqOL0F8HErAl
+apprise -vv -t "Test Message Title" -b "Test Message Body" \
+ pushed://sopJo0dVKVC9YK1F5wDQ/KWEtXxVm1PtDTTrKaEM49DhBd8MJvSMCHSvunPerbCf1MaNLO300roqOL0F8HErAl
+```
\ No newline at end of file
diff --git a/content/en/docs/Integrations/.Notifications/Notify_pushjet.md b/content/en/docs/Integrations/.Notifications/Notify_pushjet.md
new file mode 100644
index 00000000..e42175b4
--- /dev/null
+++ b/content/en/docs/Integrations/.Notifications/Notify_pushjet.md
@@ -0,0 +1,34 @@
+## Pushjet Notifications
+* **Source**: ~https://pushjet.io/~
+* **Icon Support**: No
+* **Message Format**: Text
+* **Message Limit**: 32768 Characters per message
+
+**Note:** The Pushjet online service appears to have gone dead. They did however leave behind all of their source code as open source [here on github](https://github.com/Pushjet). Thus the _apprise_ plugin _pjet://_ still works for the local hosting of a Pushjet server.
+
+### Syntax
+If you want to use your own custom Pushjet server, then the following identifies the syntax you may use:
+* `pjet://{host}/{secret_key}`
+* `pjet://{host}:{port}/{secret_key}`
+* `pjet://{user}:{password}@{host}/{secret_key}`
+* `pjets://{host}/{secret_key}`
+* `pjets://{host}:{port}/{secret_key}`
+* `pjets://{user}:{password}@{host}/{secret_key}`
+
+### Parameter Breakdown
+| Variable | Required | Description
+| ----------- | -------- | -----------
+| secret_key | Yes | The Secret Key associated with your Pushjet account.
+| host | Yes | The Pushjet server you're hosting.
+| user | No | If your system is set up to use HTTP-AUTH, you can provide a _username_ for authentication to it.
+| password | No | If your system is set up to use HTTP-AUTH, you can provide a _password_ for authentication to it.
+| port | No | The Pushjet port is optional and only required if you're hosting your own notification server on a different port than the standard ones. By default the port is **80** for **pjet://** and **443** for all **pjets://** references.
+
+#### Example
+Send a Pushjet notification:
+```bash
+# Assuming our {secret_key} is abcdefghijklmnopqrstuvwxyzabc
+# Assuming our {hostname} is localhost
+apprise -vv -t "Test Message Title" -b "Test Message Body" \
+ pjet://localhost/abcdefghijklmnopqrstuvwxyzabc
+```
\ No newline at end of file
diff --git a/content/en/docs/Integrations/.Notifications/Notify_pushover.md b/content/en/docs/Integrations/.Notifications/Notify_pushover.md
new file mode 100644
index 00000000..6a900770
--- /dev/null
+++ b/content/en/docs/Integrations/.Notifications/Notify_pushover.md
@@ -0,0 +1,55 @@
+## Pushover Notifications
+* **Source**: https://pushover.net/
+* **Icon Support**: No
+* **Attachment Support**: Yes
+* **Message Format**: Text
+* **Message Limit**: 512 Characters per message
+
+There isn't too much configuration for Pushover notifications. The message is basically just passed to your online Pushover account and then gets relayed to the device(s) you've set up from there.
+
+### Getting Your User Key
+Once you log into [the website](https://pushover.net/), your dashboard will present your **{user_key}** in front of you.
+
+### Getting Your API Token
+On the dashboard after logging in, if you scroll down you'll have the ability to generate an application. Upon doing so, you will be provided an API Token to associate with this application you generated. This will become your **{token}**.
+
+### Syntax
+Valid syntax is as follows:
+* `pover://{user_key}@{token}`
+* `pover://{user_key}@{token}/{device_id}`
+* `pover://{user_key}@{token}/{device_id1}/{device_id2}/{device_idN}`
+* `pover://{user_key}@{token}?priority={priority}`
+* `pover://{user_key}@{token}?priority=emergency&expire={expire}&retry={retry}`
+
+### Parameter Breakdown
+| Variable | Required | Description
+| ----------- | -------- | -----------
+| user_key | Yes | The user key identifier associated with your Pushover account. This is NOT your email address. The key can be acquired from your Pushover dashboard.
+| token | Yes | The token associated with your Pushover account.
+| device_id | No | The device identifier to send your notification to. By default if one isn't specified then all of the devices associated with your account are notified.
+| priority | No | Can be **low**, **moderate**, **normal**, **high**, or **emergency**; the default is **normal** if a priority isn't specified.
To send an emergency-priority notification, the `retry` and `expire` parameters _should_ be supplied.
+| expire | No | The expire parameter specifies how many seconds your notification will continue to be retried for (every `retry` seconds). If the notification has not been acknowledged in `expire` seconds, it will be marked as expired and will stop being sent to the user. Note that the notification is still shown to the user after it is expired, but it will not prompt the user for acknowledgement. This parameter has a maximum value of 10800 seconds (3 hours). The default is 3600 seconds (1 hour) if nothing is otherwise specified.
+| retry | No | The retry parameter specifies how often (in seconds) the Pushover servers will send the same notification to the user. In a situation where your user might be in a noisy environment or sleeping, retrying the notification (with sound and vibration) will help get their attention. This parameter must have a value of at least 30 seconds between retries. The default is 900 seconds (15 minutes) if nothing is otherwise specified.
+| sound | No | Can optionally identify one of the sound effects listed [here](https://pushover.net/api#sounds). The default sound is **pushover**.
+| url | No | Can optionally provide a Supplementary URL to go with your message.
+| url_title | No | Can optionally provide a Supplementary URL Title to go with your message.
+
+#### Example
+Send a Pushover notification to all of our configured devices:
+```bash
+# Assuming our {user_key} is 435jdj3k78435jdj3k78435jdj3k78
+# Assuming our {token} is abcdefghijklmnop-abcdefg
+apprise -vv -t "Test Message Title" -b "Test Message Body" \
+ pover://435jdj3k78435jdj3k78435jdj3k78@abcdefghijklmnop-abcdefg
+```
+
+Send a Pushover notification with the Emergency Priority:
+```bash
+# Emergency priority advises you to also specify the expire and
+# retry values.
+# Assuming our {user_key} is 435jdj3k78435jdj3k78435jdj3k78
+# Assuming our {token} is abcdefghijklmnop-abcdefg
+# The following will set a 1hr expiry and attempt to resend
+# the message every 10 minutes (the URL is quoted because it contains '&'):
+apprise -vv -t "Test Message Title" -b "Test Message Body" \
+ "pover://435jdj3k78435jdj3k78435jdj3k78@abcdefghijklmnop-abcdefg?priority=emergency&retry=600&expire=3600"
+```
\ No newline at end of file
diff --git a/content/en/docs/Integrations/.Notifications/Notify_pushsafer.md b/content/en/docs/Integrations/.Notifications/Notify_pushsafer.md
new file mode 100644
index 00000000..df5acc09
--- /dev/null
+++ b/content/en/docs/Integrations/.Notifications/Notify_pushsafer.md
@@ -0,0 +1,50 @@
+## Pushsafer Notifications
+* **Source**: https://www.pushsafer.com/
+* **Icon Support**: No
+* **Attachment Support**: Yes
+* **Message Format**: Text
+* **Message Limit**: 32768 Characters per message
+
+There isn't too much effort required to use PushSafer notifications. The message is basically just passed to your online PushSafer account and then gets relayed to the device(s) you've set up from there.
+
+### Getting Your Private Key
+Once you log into their official [website](https://www.pushsafer.com/), you can find the **{private_key}** on your [dashboard](https://www.pushsafer.com/dashboard/).
+
+### Syntax
+Valid syntax is as follows:
+* `psafers://{private_key}`
+* `psafers://{private_key}/{device_id}`
+* `psafers://{private_key}/{device_id1}/{device_id2}/{device_idN}`
+* `psafers://{private_key}?priority={priority}`
+* `psafers://{private_key}?priority=emergency&sound=okay`
+* `psafers://{private_key}?vibrate=2`
+
+If no device is specified, the reserved device `a` is used by default. The `a` device notifies **all** of your devices currently associated with your account.
+
+Secure connections are always made when you use `psafers://`; however, `psafer://` also works if you wish to use an unencrypted connection.
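Since query parameters are joined with an ampersand (&), quote the URL when building it in a shell so '&' isn't treated as a control operator. A quick sketch with a hypothetical private key:

```shell
# Hypothetical private key for illustration only
PRIVATE_KEY="435jdj3k78435jdj3k78435jdj3k78"

# Quoting keeps the shell from splitting the URL at '&'
URL="psafers://${PRIVATE_KEY}?priority=high&vibrate=2"
echo "${URL}"
# → psafers://435jdj3k78435jdj3k78435jdj3k78?priority=high&vibrate=2
```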
+
+### Parameter Breakdown
+| Variable | Required | Description
+| ----------- | -------- | -----------
+| private_key | Yes | The private key associated with your PushSafer account. This can be found on your [dashboard](https://www.pushsafer.com/dashboard/) after successfully logging in.
+| device_id | No | The device identifier to send your notification to. By default if one isn't specified then all of the devices associated with your account are notified.
+| priority | No | Can be **low**, **moderate**, **normal**, **high**, or **emergency**; the default is to use whatever the default setting is for the device being notified.
+| sound | No | Can optionally identify one of the sound effects listed [here](https://www.pushsafer.com/en/pushapi#api-sound). By default this variable isn't set at all.
+| vibration | No | Android and iOS devices can be set to vibrate upon the reception of a notification. By setting this, you're effectively setting the strength of the vibration. You can set this to **1**, **2** or **3** where **3** is the maximum vibration strength and **1** causes a lighter vibration. By default this variable isn't set at all, allowing your device's default settings to take effect.
+
+#### Example
+Send a PushSafer notification to all of our configured devices:
+```bash
+# Assuming our {private_key} is 435jdj3k78435jdj3k78435jdj3k78
+apprise -vv -t "Test Message Title" -b "Test Message Body" \
+ psafers://435jdj3k78435jdj3k78435jdj3k78
+```
+
+Send a PushSafer notification with the Emergency Priority:
+```bash
+# Set the priority right on the URL:
+# Assuming our {private_key} is 435jdj3k78435jdj3k78435jdj3k78
+apprise -vv -t "Test Message Title" -b "Test Message Body" \
+ psafers://435jdj3k78435jdj3k78435jdj3k78?priority=emergency
+```
\ No newline at end of file
diff --git a/content/en/docs/Integrations/.Notifications/Notify_reddit.md b/content/en/docs/Integrations/.Notifications/Notify_reddit.md
new file mode 100644
index 00000000..2ff98464
--- /dev/null
+++ b/content/en/docs/Integrations/.Notifications/Notify_reddit.md
@@ -0,0 +1,56 @@
+## Reddit
+* **Source**: https://reddit.com
+* **Icon Support**: No
+* **Message Format**: Text
+* **Message Limit**: 6000 Characters per message
+
+### Account Setup
+1. Visit https://old.reddit.com/prefs/apps and scroll to the bottom
+1. Click on the button that reads '**are you a developer? create an app...**'
+1. Set the mode to `script`.
+1. Provide a `name`, `description`, and `redirect uri` (it can be anything).
+1. Save your configuration:
+   ![Reddit-Setup01](https://user-images.githubusercontent.com/850374/109997361-20372180-7cde-11eb-868d-e5e46bb41873.png)
+1. Once the bot is saved, you'll be given an ID (next to the bot name) and a Secret.
+   ![Reddit-Setup02](https://user-images.githubusercontent.com/850374/109997391-262d0280-7cde-11eb-8681-067c0e00d4ab.png)
+
+- The **App ID** will look something like this: `YWARPXajkk645m`
+- The **App Secret** will look something like this: `YZGKc5YNjq3BsC-bf7oBKalBMeb1xA`
+- The App will also have a section where you can identify the users/developers who are allowed to use this key. By default it's already configured to be yours. You will need to use the user/pass of one of the accounts identified here to use the posting capabilities.
+
+### Syntax
+Valid syntax is as follows:
+- `reddit://{user}:{pass}@{app_id}/{app_secret}/{subreddit}`
+- `reddit://{user}:{pass}@{app_id}/{app_secret}/{subreddit_1}/{subreddit_2}/{subreddit_N}`
+
+### Parameter Breakdown
+| Variable | Required | Description
+| --------------- | -------- | -----------
+| app_id | Yes | The App ID generated for your **script** application you created on the [Reddit Apps](https://old.reddit.com/prefs/apps) page.
+| app_secret | Yes | The App Secret generated for your **script** application you created on the [Reddit Apps](https://old.reddit.com/prefs/apps) page.
+| user | Yes | The Reddit UserID associated with one of the developers attached to the application you generated. By default this is just the same user account you used to create the Reddit app in the first place.
+| pass | Yes | The Reddit password associated with the UserID defined above.
+| subreddit | Yes | The Subreddit you wish to post your message to. You must specify at least one of these.
+| kind | No | The message kind can be set to either `self`, `link`, or `auto`.
Set this to `self` to imply you're posting a general/common post to the subreddit. Otherwise, set this to `link` if the message body you provide (as part of your Apprise payload) only contains a hyperlink/URI to a website. The `auto` setting (_also the default_) will parse the _message body_ and set the `self`/`link` kind accordingly based on what was detected.
+| ad | No | Specify whether or not what you are posting is an advertisement. By default this is set to **No**.
+| nsfw | No | The *Not Safe For Work* (NSFW) flag. By default this is set to **No**.
+| replies | No | Send all replies of the thread to your (Reddit) inbox? By default this is set to **Yes**.
+| resubmit | No | Let Reddit know this is a re-post. Some subreddits block the re-posting of content; setting this flag to `yes` can enforce that the content be accepted even if this is the case. Some subreddits will even flag the message differently when you identify it as a re-post up front. This may or may not be what you want. By default this is set to **No**, so all messages are treated as new posts by the upstream server.
+| spoiler | No | Mark your post with the **spoiler** flag. By default this is set to **No**.
+| flair_id | No | Provide the `flair_id` you want to associate with your post. By default this is not passed upstream unless identified.
+| flair_text | No | Provide the `flair_text` you want to associate with your post. By default this is not passed upstream unless identified.
+
+**Note:** Reddit always requires a `title` to go with its `body`. Reddit will deny your post (upstream) if you don't provide both.
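Extra subreddits are simply appended as additional path entries; a shell sketch using hypothetical credentials:

```shell
# All values below are hypothetical placeholders
RUSER="sstark"
RPASS="notAFanOfLannisters"
APP_ID="YWARPXajkk645m"
APP_SECRET="YZGKc5YNjq3BsC-bf7oBKalBMeb1xA"

# Post to two subreddits from a single URL
URL="reddit://${RUSER}:${RPASS}@${APP_ID}/${APP_SECRET}/Apprise/Python"
echo "${URL}"
# → reddit://sstark:notAFanOfLannisters@YWARPXajkk645m/YZGKc5YNjq3BsC-bf7oBKalBMeb1xA/Apprise/Python
```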
+
+#### Example
+Send a Reddit notification:
+```bash
+# Assuming our {user} is sstark
+# Assuming our {pass} is notAFanOfLannisters
+# Assuming our {app_id} is YWARPXajkk645m
+# Assuming our {app_secret} is YZGKc5YNjq3BsC-bf7oBKalBMeb1xA
+# Assuming we want to post to the {subreddit} Apprise
+
+apprise -vv -t "Test Message Title" -b "Test Message Body" \
+ reddit://sstark:notAFanOfLannisters@YWARPXajkk645m/YZGKc5YNjq3BsC-bf7oBKalBMeb1xA/Apprise
+```
\ No newline at end of file
diff --git a/content/en/docs/Integrations/.Notifications/Notify_rocketchat.md b/content/en/docs/Integrations/.Notifications/Notify_rocketchat.md
new file mode 100644
index 00000000..16d6136e
--- /dev/null
+++ b/content/en/docs/Integrations/.Notifications/Notify_rocketchat.md
@@ -0,0 +1,71 @@
+## Rocket.Chat Notifications
+* **Source**: https://rocket.chat/
+* **Icon Support**: Yes
+* **Message Format**: Text
+* **Message Limit**: 1000 Characters per Message
+
+### Syntax
+Rocket.Chat can send notifications through the following **modes**:
+* **webhook**: A configured Incoming Webhook; this can be set up in the **Administration** area under the **Integrations** heading.
+* **basic**: A user/password combination.
+
+Secure connections (via https) should be referenced using **rockets://** whereas insecure connections (via http) should be referenced via **rocket://**.
+
+#### Basic Mode
+Valid syntax is as follows:
+* `rocket://{user}:{password}@{hostname}/#{channel}`
+* `rocket://{user}:{password}@{hostname}:{port}/#{channel}`
+* `rocket://{user}:{password}@{hostname}/{room_id}`
+* `rocket://{user}:{password}@{hostname}:{port}/{room_id}`
+* `rockets://{user}:{password}@{hostname}/#{channel}`
+* `rockets://{user}:{password}@{hostname}:{port}/#{channel}`
+* `rockets://{user}:{password}@{hostname}/{room_id}`
+* `rockets://{user}:{password}@{hostname}:{port}/{room_id}`
+
+**Note:** the `?avatar=yes` option will only work if your user has the `bot` permission setting.
+
+You can also form any combination of the above and perform updates from one URL:
+* **rocket**://**{user}**:**{password}**@**{hostname}**/#**{channel}**/**{room_id}**
+
+For the Basic Mode only: if neither a **{room_id}** nor a **#{channel}** is specified, then this notification will fail.
+
+#### Webhook Mode
+Valid syntax is as follows:
+* `rocket://{webhook}@{hostname}/#{channel}`
+* `rocket://{webhook}@{hostname}/{room_id}`
+* `rocket://{webhook}@{hostname}/{@user}`
+* `rockets://{webhook}@{hostname}/#{channel}`
+* `rockets://{webhook}@{hostname}:{port}/#{channel}`
+* `rockets://{webhook}@{hostname}/{room_id}`
+* `rockets://{webhook}@{hostname}:{port}/{room_id}`
+
+You can also form any combination of the above and perform updates from one URL:
+* **rocket**://**{webhook}**@**{hostname}**:**{port}**/#**{channel}**/**{room_id}**/**@{user}**
+
+By default a webhook is set up to be associated with a channel. Thus the following syntax is also valid:
+* **rocket**://**{webhook}**@**{hostname}**/
+
+
+### Parameter Breakdown
+| Variable | Required | Description
+| ----------- | -------- | -----------
+| user | *Yes | The user identifier you've associated with your Rocket.Chat server. This is only required if you are not providing a **webhook** instead. This can be optionally combined with the **webhook** if you wish to override the bot name.
+| password | *Yes | The password identifier you've associated with your Rocket.Chat server. This is only required if you are not providing a **webhook** instead.
+| webhook | *Yes | The incoming webhook you created and associated with your Rocket.Chat server. This is only required if you are not providing a **user** and **password** combination instead.
+| hostname | Yes | The Rocket.Chat server you're sending your notification to.
+| port | No | The port the Rocket.Chat server is listening on. By default the port is **80** for **rocket://** and **443** for all **rockets://** references.
+| room_id | No | A room identifier.
Available for both **basic** and **webhook** modes.
+| channel | No | Channels must be prefixed with a hash (#) or they will be interpreted as a room_id. Available for both **basic** and **webhook** modes. Channels must be registered with your Rocket.Chat server to work.
+| user_id | No | Another user you wish to notify. User IDs must be prefixed with an at symbol (@). Available for the **webhook** mode only.
+| mode | No | The authentication mode is automatically detected based on what it parses from the URL provided. You only need to set this if you feel it is being detected incorrectly. The possible modes are **basic** and **webhook** and are explained above.
+| avatar | No | Override the default avatar associated with the message to match that of the notification type (be that of a Warning, Error, Info, etc.). By default this is set to **No** for **basic** mode and **Yes** for **webhook** mode.
+
+#### Example
+Send a Rocket.Chat notification to the channel *#nuxref*:
+```bash
+# Assuming our {user} is l2g
+# Assuming our {password} is awes0m3!
+# Assuming our {hostname} is rocket.server.local
+apprise -vv -t "Test Message Title" -b "Test Message Body" \
+ rocket://l2g:awes0m3!@rocket.server.local/#nuxref
+```
\ No newline at end of file
diff --git a/content/en/docs/Integrations/.Notifications/Notify_ryver.md b/content/en/docs/Integrations/.Notifications/Notify_ryver.md
new file mode 100644
index 00000000..e9a9d01f
--- /dev/null
+++ b/content/en/docs/Integrations/.Notifications/Notify_ryver.md
@@ -0,0 +1,54 @@
+## Ryver Notifications
+* **Source**: https://ryver.com/
+* **Icon Support**: Yes
+* **Message Format**: Markdown
+* **Message Limit**: 1000 Characters per message
+
+### Account Setup
+To use Ryver you'll need to have the forum(s) you intend to notify already pre-created before you follow the next set of instructions.
+
+Next you need to define a new webhook and get the corresponding URL. This is done as follows:
+1.
click on the **Integrations** > **Incoming Webhooks** beneath your settings on the left +2. click on the **Create Webhook** button +3. choose either **Slack** or **Plain/text Ryver** as this plugin currently supports both. +4. Regardless of what webhook type you choose to create (Slack or Ryver), the next steps are still the same: + - Set the webhook type to **Chat Message** + - Select the forum(s) you already have set up to allow this webhook to access. + - Click next. + + When you've completed this process you will receive a URL that looks something like this: +```https://apprise.ryver.com/application/webhook/ckhrjW8w672m6HG``` + +This effectively equates to:
+```https://{organization}.ryver.com/application/webhook/{token}```
+
+**Note:** Apprise supports this URL _as-is_ (_as of v0.7.7_); you no longer need to parse the URL any further. However, there is slightly less overhead (internally) if you do.
+
+The last part of the URL you're given is the token we're most interested in. With respect to the above example:
+
+- the **token** is ```ckhrjW8w672m6HG```
+- the **organization** is ``apprise``
+
+### Syntax
+Valid syntaxes are as follows:
+* `https://{organization}.ryver.com/application/webhook/{token}`
+* `ryver://{organization}/{token}/`
+* `ryver://{botname}@{organization}/{token}/`
+* `ryver://{organization}/{token}/?webhook=slack`
+
+### Parameter Breakdown
+| Variable | Required | Description
+| ----------- | -------- | -----------
+| organization| Yes | The organization you created your webhook under.
+| token | Yes | The token provided to you after creating an *incoming-webhook*.
+| botname | No | Set the display name the message should appear from.
+| webhook | No | The type of webhook you created (Slack or Ryver). The only possible values are **slack** and **ryver**. The default value is **ryver** if the webhook value isn't specified.
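To see how the `webhook=` override sits on the end of the URL, here is a quick shell sketch (the organization and token are the hypothetical values from the example above):

```shell
# Hypothetical values (taken from the example URL above)
ORG="apprise"
TOKEN="ckhrjW8w672m6HG"

# A webhook that was created as the Slack type needs ?webhook=slack
URL="ryver://${ORG}/${TOKEN}/?webhook=slack"
echo "${URL}"
# → ryver://apprise/ckhrjW8w672m6HG/?webhook=slack
```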
+
+#### Example
+Send a Ryver notification:
+```bash
+# Assuming our {organization} is apprise
+# Assuming our {token} is T1JJ3T3L2
+apprise -vv -t "Test Message Title" -b "Test Message Body" \
+ ryver://apprise/T1JJ3T3L2
+```
diff --git a/content/en/docs/Integrations/.Notifications/Notify_sendgrid.md b/content/en/docs/Integrations/.Notifications/Notify_sendgrid.md
new file mode 100644
index 00000000..d10f9b27
--- /dev/null
+++ b/content/en/docs/Integrations/.Notifications/Notify_sendgrid.md
@@ -0,0 +1,69 @@
+## SendGrid Notifications
+* **Source**: https://sendgrid.com/
+* **Icon Support**: No
+* **Message Format**: Text
+* **Message Limit**: 32768 Characters per message
+
+### Account Setup
+Creating an account with SendGrid is free of charge and can be done through their main page.
+
+Once you have an account and access to [your dashboard](https://app.sendgrid.com/), you will need to ensure you've correctly **authenticated your domains** with them; this is done in the [Sender Authentication](https://app.sendgrid.com/settings/sender_auth) section of your dashboard. You get here by clicking on **Settings** > **Sender Authentication** from your dashboard.
+
+The last thing you need is to generate an **API Key** with at least the **Mail Send** permission. This can also be done in the [API Keys](https://app.sendgrid.com/settings/api_keys) section of your dashboard.
You can get here by clicking on **Settings** > **API Keys**.
+
+### Syntax
+Valid syntaxes are as follows:
+* `{schema}://{apikey}:{from_email}`
+* `{schema}://{apikey}:{from_email}/{to_email}`
+* `{schema}://{apikey}:{from_email}/{to_email1}/{to_email2}/{to_email3}`
+
+Templates are supported as well; you just need to specify the UUID assigned to one as part of the URL:
+* `{schema}://{apikey}:{from_email}/{to_email}?template={template_uuid}`
+
+If you want to take advantage of the `dynamic_template_data` variables, just create arguments prefixed with a plus (+); for example:
+* `sendgrid://{apikey}:{from_email}/{to_email}?template={template_uuid}&+{sub1}=value&+{sub2}=value2`
+
+### Parameter Breakdown
+| Variable | Required | Description
+| ----------- | -------- | -----------
+| apikey | Yes | The [API Key](https://app.sendgrid.com/settings/api_keys) you generated from within your SendGrid dashboard.
+| from_email | Yes | This is the email address that will identify the email's origin (the _From_ address). This address **must** contain a domain that was previously authenticated with your SendGrid account (See [Domain Authentication](https://app.sendgrid.com/settings/sender_auth)).
+| to_email | No | This is the email address that will identify the email's destination (the _To_ address). If one isn't specified then the *from_email* is used instead.
+| template | No | You may optionally specify the UUID of a previously generated SendGrid dynamic template to base the email on.
+| cc | No | The _Carbon Copy_ (CC:) portion of the email. This is entirely optional. It should be noted that SendGrid immediately rejects emails where the _cc_ contains an email address that exists in the _to_ or the _bcc_ list. To avoid having issues, Apprise automatically eliminates these duplicates silently if detected.
+| bcc | No | The _Blind Carbon Copy_ (BCC:) portion of the email. This is entirely optional.
It should be noted that SendGrid immediately rejects emails where the _bcc_ contains an email address that exists in the _to_ or the _cc_ list. To avoid having issues, Apprise automatically eliminates these duplicates silently if detected. If an identical email is detected in both the CC and the BCC list, the BCC list keeps the email and it is dropped from the CC list automatically.
+
+#### Dynamic Template Data
+Templates allow you to define {{variables}} within them that can be substituted on the fly once the email is sent. You can identify and set these variables using Apprise by simply sticking a plus (+) in front of any parameter you specify on your URL string.
+
+Consider the following template: `d-e624763c71314ea2a1fae38d7fa64a4a`
+```
+This is a test email about {{what}}.
+
+You can take a mapped variable on a SendGrid template
+and easily swap it with whatever you want using {{app}}.
+```
+
+In the above example, we defined the following variables: ``what`` and ``app``.
+
+An Apprise URL might look like:
+`sendgrid://myapikey:noreply@example.com?template=d-e624763c71314ea2a1fae38d7fa64a4a&+what=templates&+app=Apprise`
+
+The above URL would create the following:
+```
+This is a test email about templates.
+
+You can take a mapped variable on a SendGrid template
+and easily swap it with whatever you want using Apprise.
+```
+
+#### Example
+Send a SendGrid notification:
+```bash
+# Assuming our {apikey} is abcd123-xyz
+# Assuming our Authenticated Domain is example.com, we might want to
+# set our {from_email} to noreply@example.com
+# Assuming our {to_email} is someone@microsoft.com
+apprise -vv -t "Test Message Title" -b "Test Message Body" \
+ sendgrid://abcd123-xyz:noreply@example.com/someone@microsoft.com
+```
diff --git a/content/en/docs/Integrations/.Notifications/Notify_serverchan.md b/content/en/docs/Integrations/.Notifications/Notify_serverchan.md
new file mode 100644
index 00000000..57fe4f4b
--- /dev/null
+++ b/content/en/docs/Integrations/.Notifications/Notify_serverchan.md
@@ -0,0 +1,28 @@
+## ServerChan Notifications
+* **Source**: https://sct.ftqq.com/
+* **Icon Support**: No
+* **Message Format**: Text
+* **Message Limit**: 32768 Characters per Message
+
+### Account Setup
+Register your own account on the [ServerChan Official Website](https://sct.ftqq.com/). After configuring the notification channel, you will be provided with the sendkey/token used for notifications.
+
+### Syntax
+Valid authentication syntax is as follows:
+* `schan://{sendkey}/`
+
+
+### Parameter Breakdown
+| Variable | Required | Description
+| ----------- | -------- | -----------
+| sendkey | Yes | This is the token provided to you through your ServerChan account.
+ + +#### Example +Send a ServerChan notification: +```bash +# Assume: +# - our {sendkey} is ABC123 +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + schan://ABC123 +``` diff --git a/content/en/docs/Integrations/.Notifications/Notify_ses.md b/content/en/docs/Integrations/.Notifications/Notify_ses.md new file mode 100644 index 00000000..f0c7687f --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_ses.md @@ -0,0 +1,52 @@ +## Amazon Web Service (AWS) - Simple Email Service (SES) +* **Source**: https://aws.amazon.com/ses/ +* **Attachment Support**: yes +* **Message Format**: Text +* **Message Limit**: 32768 Characters per message + +### Account Setup +You'll need to create an account with Amazon Web Service (AWS) first to use this. If you don't have one, you'll need your credit card (even though the first 12 months are free). Alternatively, if you already have one (or are using it through your company), you're good to go to the next step. + +The next thing you'll need to do is generate an _Access Key ID_ and _Secret Access Key_: +1. From the [AWS Management Console](https://console.aws.amazon.com) search for **IAM** under the _AWS services_ section or simply click [here](https://console.aws.amazon.com/iam/home?#/security_credentials). +1. Expand the section reading **Access keys (access key ID and secret access key)** +1. Click on **Create New Access Key** +1. It will present the information to you on screen and let you download a file containing the same information. I suggest you do so since there is no way to retrieve this key again later on (unless you delete it and create a new one). + +So at this point, it is presumed you're set up and have your _Access Key ID_ and _Secret Access Key_ on hand. + +You now have all the tools you need to send SES (Email) messages.
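Because the resulting Apprise URL embeds your _Secret Access Key_, you may want to keep it out of scripts and shell history. One possible approach, sketched below with placeholder key values, is to store the URL in a file only you can read:

```shell
#!/bin/sh
# Store the SES Apprise URL (which embeds the AWS secret) in a file
# created with owner-only permissions. All values are placeholders.
umask 077
conf="$(mktemp)"
printf '%s\n' 'ses://noreply@example.com/AKIAEXAMPLE/EXAMPLESECRET/us-east-2/' > "$conf"

# Read it back whenever you need to notify:
url="$(cat "$conf")"
echo "$url"
#   apprise -vv -t "Backup" -b "Nightly backup complete" "$url"
rm -f "$conf"
```

Apprise's own `--config` (`-c`) option, shown in the CLI switches earlier, offers a fuller version of this pattern.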
+ +### Syntax +The syntax is as follows: +- `ses://{from}/{aws_access_key}/{aws_secret_key}/{region}/` +- `ses://{from}/{aws_access_key}/{aws_secret_key}/{region}/{ToEmail1}/{ToEmail2}/{ToEmailN}/` + +### Parameter Breakdown +| Variable | Required | Description +| --------------- | -------- | ----------- +| from | Yes | The originating email address AWS is sending on behalf of. AWS will validate this against your account (when paired with your aws_access_key and aws_secret_key) +| access | Yes | The generated _Access Key ID_ from the AWS Management Console +| secret | Yes | The generated _Access Key Secret_ from the AWS Management Console +| region | Yes | The region code might look like **us-east-1**, **us-west-2**, **cn-north-1**, etc +| target_emails | Yes | One or more emails, separated by a slash (/), to deliver your notification to. If no email is specified then the `from` email is notified. +| reply | No | If you want the email's *ReplyTo* address to be something other than your own email address, then you can specify it here. +| to | No | This will enforce (or set) the address the email is sent To. This is only required in special circumstances. The notification script is usually clever enough to figure this out for you. +| name | No | With respect to {from_email}, this allows you to provide a name with your *ReplyTo* address. +| cc | No | Carbon Copy email address(es). More than one can be separated with a space and/or comma. +| bcc | No | Blind Carbon Copy email address(es). More than one can be separated with a space and/or comma.
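The optional parameters above ride along as query arguments on the URL. A small sketch of composing them from shell variables (every address and key below is a placeholder):

```shell
#!/bin/sh
# Compose a ses:// URL with optional cc and bcc query parameters.
# Every value here is a placeholder.
from="noreply@example.com"
access="AKIAEXAMPLE"
secret="EXAMPLESECRET"
region="us-east-2"
to="team@example.com"

url="ses://${from}/${access}/${secret}/${region}/${to}/?cc=ops@example.com&bcc=audit@example.com"
echo "$url"
# apprise -vv -t "Report" -b "Weekly report attached" "$url"
```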
+ + +#### Example +Send an AWS SES notification (Email): +```bash +# Assuming our {AccessKeyID} is AHIAJGNT76XIMXDBIJYA +# Assuming our {AccessKeySecret} is bu1dHSdO22pfaaVy/wmNsdljF4C07D3bndi9PQJ9 +# Assuming our {Region} is us-east-2 +# Assuming our {Email} - test@test.com +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + ses://test@test.com/AHIAJGNT76XIMXDBIJYA/bu1dHSdO22pfaaVy/wmNsdljF4C07D3bndi9PQJ9/us-east-2/ + +``` diff --git a/content/en/docs/Integrations/.Notifications/Notify_simplepush.md b/content/en/docs/Integrations/.Notifications/Notify_simplepush.md new file mode 100644 index 00000000..d5e6f69f --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_simplepush.md @@ -0,0 +1,31 @@ +## SimplePush Notifications +* **Source**: https://simplepush.io/ +* **Icon Support**: No +* **Message Format**: Text +* **Message Limit**: 10000 Characters per Message + +SimplePush is a straightforward messaging system you can get for your Android Device through their App [here](https://play.google.com/store/apps/details?id=io.tymm.simplepush). + +You can optionally add additional notification encryption in the settings where it provides you with a **{salt}** value and allows you to configure/set your own encryption **{password}**. + +### Syntax +Valid authentication syntaxes are as follows: +* `spush://{apikey}/` +* `spush://{salt}:{password}@{apikey}/` + +### Parameter Breakdown +| Variable | Required | Description +| ----------- | -------- | ----------- +| apikey | Yes | This is required for your account to work. You will be provided one from your SimplePush account. +| event | No | Optionally specify an event on the URL. +| password | No | SimplePush offers a method of further encrypting the message and title during transmission (on top of the secure channel it's already sent on). This is the encryption password you set. You must provide the `salt` value with the `password` in order for it to work.
+| salt | No | The salt is provided to you by SimplePush and is the second part of the additional encryption you can use with this service. You must provide a `password` with the `salt` value in order to work. + +#### Example +Send a SimplePush notification: +```bash +# Assume: +# - our {apikey} is ABC123 +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + spush://ABC123 +``` diff --git a/content/en/docs/Integrations/.Notifications/Notify_sinch.md b/content/en/docs/Integrations/.Notifications/Notify_sinch.md new file mode 100644 index 00000000..fff2c1ad --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_sinch.md @@ -0,0 +1,49 @@ +## Sinch Notifications +* **Source**: https://sinch.com +* **Icon Support**: No +* **Message Format**: Text +* **Message Limit**: 160 Characters per message + +### Account Setup +To use Sinch, you will need to acquire your _Service Plan ID_ and _API Token_. Both of these are accessible via the [Sinch Dashboard](https://dashboard.sinch.com/sms/overview) or through [the API section](https://dashboard.sinch.com/sms/api/rest). + +You'll need to have a number defined as an Active Number ([from your dashboard here](https://dashboard.sinch.com/numbers/your-numbers/number)). This will become your **{FromPhoneNo}** when identifying the details below. 
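Apprise accepts formatted phone numbers (brackets, spaces, hyphens), but when generating URLs in a script it can be easier to strip the formatting first. A hypothetical helper (`normalize_phone` is our own name, not part of Apprise):

```shell
#!/bin/sh
# normalize_phone: strip spaces, brackets and hyphens from a phone
# number so it can sit in an Apprise URL without shell quoting issues.
normalize_phone() {
  printf '%s' "$1" | tr -d ' ()-'
}

from="$(normalize_phone '1-(900) 555-9999')"
echo "$from"
```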
+ +### Syntax +Valid syntaxes are as follows: +* `sinch://{ServicePlanID}:{ApiToken}@{FromPhoneNo}/{PhoneNo}` +* `sinch://{ServicePlanID}:{ApiToken}@{FromPhoneNo}/{PhoneNo1}/{PhoneNo2}/{PhoneNoN}` + +If no _ToPhoneNo_ is specified, then the _FromPhoneNo_ will be messaged instead; hence the following is a valid URL: +* `sinch://{ServicePlanID}:{ApiToken}@{FromPhoneNo}/` + +Short Codes are also supported but require at least 1 target PhoneNo: +* `sinch://{ServicePlanID}:{ApiToken}@{ShortCode}/{PhoneNo}` +* `sinch://{ServicePlanID}:{ApiToken}@{ShortCode}/{PhoneNo1}/{PhoneNo2}/{PhoneNoN}` + +### Parameter Breakdown +| Variable | Required | Description +| --------------- | -------- | ----------- +| ServicePlanID | Yes | The _Service Plan ID_ associated with your Sinch account. This is available to you via the Sinch Dashboard. +| ApiToken | Yes | The _API Token_ associated with your Sinch account. This is available to you via the Sinch Dashboard. +| FromPhoneNo | **\*No** | The [Active Phone Number](https://dashboard.sinch.com/numbers/your-numbers/number) associated with your Sinch account you wish the SMS message to come from. It must be a number registered with Sinch. As an alternative to the **FromPhoneNo**, you may also provide a **ShortCode** here instead. The phone number MUST include the country code's dialling prefix as well when placed. This field is also very friendly and supports brackets, spaces and hyphens in the event you want to format the number in an easy to read fashion. +| ShortCode | **\*No** | The ShortCode associated with your Sinch account you wish the SMS message to come from. It must be a number registered with Sinch. As an alternative to the **ShortCode**, you may provide a **FromPhoneNo** instead. +| PhoneNo | **\*No** | A phone number MUST include the country code's dialling prefix as well when placed. This field is also very friendly and supports brackets, spaces and hyphens in the event you want to format the number in an easy to read fashion.
**Note:** If you're using a _ShortCode_, then at least one _PhoneNo_ MUST be defined. +| Region | **No** | Can be either `us` or `eu`. By default the region is set to `us`. + +#### Example +Send a Sinch Notification as an SMS: +```bash +# Assuming our {ServicePlanID} is AC735c307c62944b5a +# Assuming our {ApiToken} is e29dfbcebf390dee9 +# Assuming our {FromPhoneNo} is +1-900-555-9999 +# Assuming our {PhoneNo} - is in the US somewhere making our country code +1 +# - identifies as 800-555-1223 +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + sinch://AC735c307c62944b5a:e29dfbcebf390dee9@19005559999/18005551223 + +# the following would also have worked (spaces, brackets, +# dashes are accepted in a phone no field; quote the URL so the +# shell passes it through intact): +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + "sinch://AC735c307c62944b5a:e29dfbcebf390dee9@1-(900) 555-9999/1-(800) 555-1223" +``` diff --git a/content/en/docs/Integrations/.Notifications/Notify_slack.md b/content/en/docs/Integrations/.Notifications/Notify_slack.md new file mode 100644 index 00000000..5108d7f3 --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_slack.md @@ -0,0 +1,118 @@ +## Slack Notifications +* **Source**: https://slack.com/ +* **Icon Support**: Yes +* **Attachment Support**: Yes +* **Message Format**: Markdown +* **Message Limit**: 30000 Characters per message + +### Account Setup +Slack is slightly more complicated than some of the other notification services, so here is a quick breakdown of what you need to know and do in order to send Notifications through it using this tool: + +#### Method 1: Incoming Webhook +First off, Slack notifications require an *incoming-webhook* it can connect to. + +1. You can create this webhook from [here](https://my.slack.com/services/new/incoming-webhook/). Just follow the wizard to pre-determine the channel(s) you want your message to broadcast to. +2.
Or you can create a Slack App [here](https://api.slack.com/slack-apps) and associate it with one of your Slack Workspaces. From here there are just a few extra steps needed to get your webhook URL (all done through the App's configuration screen): + * You must **Activate** the **Incoming Webhook** _Feature_ if it isn't already. + * On this same configuration screen, you can create a webhook and assign it to a channel/user. + +Whichever option you choose above, you will end up with a webhook URL that looks something like this:
      +```https://hooks.slack.com/services/T1JJ3T3L2/A1BRTD4JD/TIiajkdnlazkcOXrIdevi7F``` + +This URL effectively equates to:
+```https://hooks.slack.com/services/{tokenA}/{tokenB}/{tokenC}``` + +**Note:** Apprise supports this URL _as-is_ (_as of v0.7.7_); you no longer need to parse the URL any further. However there is slightly less overhead (internally) if you do. + +If you want to convert this to an Apprise URL, do the following: +The last three parts of the URL you're given make up the 3 tokens you need to send notifications with. In the above example the tokens are as follows: +1. **TokenA** is ```T1JJ3T3L2``` +2. **TokenB** is ```A1BRTD4JD``` +3. **TokenC** is ```TIiajkdnlazkcOXrIdevi7F``` + +#### Method 2: Create a Bot +Bots offer you slightly more flexibility than Webhooks do. The main difference is that *Slack Bots* support attachments, allowing you to leverage this in Apprise! +1. First create your [Slack App here](https://api.slack.com/apps?new_app=1). +1. Pick an App Name (such as *Apprise*) and select your workspace; click on **Create App**. +1. From here you'll be able to click on the **Bots** menu selection, where you can then choose to add a **Bot User**. Give it a name and then choose **Add Bot User**. +1. You'll need to provide the proper OAuth permissions:
![Slack Bot OAuth Min Permissions](https://user-images.githubusercontent.com/850374/104230100-1d752a00-541b-11eb-86c4-9b09df38a647.png) +1. Now choose **Install App**, then **Install App to Workspace**. +1. You will be prompted to authorize the app; this step is easy. +1. Finally you'll get some very important information you will need for Apprise. From this point on you can use either the **OAuth Access Token** or the **Bot User OAuth Access Token** with the syntax `slack://{OAuth Access Token}`. + +Your Apprise Slack URL (for accessing your Bot) might look something like: + - `slack://xoxp-1234-1234-1234-4ddbc191d40ee098cbaae6f3523ada2d` + - `slack://xoxb-1234-1234-4ddbc191d40ee098cbaae6f3523ada2d` + +Both OAuth tokens you are provided can post text to channels and provide attachments, so it's up to you which of the two you choose to use. + +### Syntax +Valid syntaxes are as follows: +* `slack://{tokenA}/{tokenB}/{tokenC}` +* `https://hooks.slack.com/services/{tokenA}/{tokenB}/{tokenC}` +* `slack://{OAuthToken}/` + - A Bot has no default channel configurable through Slack like Webhooks do. If no channel is specified, then the channel `#general` is used. + +Now if you're using the legacy webhook method (and not going through the App), you're granted a bit more freedom. As a result, the following URLs will also work for you through Apprise: +* `slack://{tokenA}/{tokenB}/{tokenC}/#{channel}` +* `slack://{tokenA}/{tokenB}/{tokenC}/#{channel1}/#{channel2}/#{channelN}` +* `slack://{OAuthToken}/#{channel}` +* `slack://{botname}@{OAuthToken}/#{channel1}/#{channel2}/#{channelN}` + +If you know the *Encoded-ID* of the channel you wish to access, you can use the plus (+) symbol to identify these separately from channels in the url.
Valid syntaxes are as follows: +* `slack://{botname}@{tokenA}/{tokenB}/{tokenC}/+{encoded_id}` +* `slack://{botname}@{tokenA}/{tokenB}/{tokenC}/+{encoded_id1}/+{encoded_id2}/+{encoded_id3}` +* `slack://{botname}@{OAuthToken}/+{encoded_id}` +* `slack://{botname}@{OAuthToken}/+{encoded_id1}/+{encoded_id2}/+{encoded_id3}` + +If you know the user_id you wish to transmit your slack notification to (instead of a channel), you can use the at symbol (@) to do so. Valid syntaxes are as follows: +* `slack://{botname}@{tokenA}/{tokenB}/{tokenC}/@{user_id}` +* `slack://{botname}@{tokenA}/{tokenB}/{tokenC}/@{user_id1}/@{user_id2}/@{user_id3}` +* `slack://{botname}@{OAuthToken}/@{user_id}` +* `slack://{botname}@{OAuthToken}/@{user_id1}/@{user_id2}/@{user_id3}` + +You can freely mix and match all of the combinations in any order as well: +* `slack://{botname}@{tokenA}/{tokenB}/{tokenC}/@{user_id}/#{channel}/+{encoded_id}` +* `slack://{botname}@{OAuthToken}/@{user_id}/#{channel}/+{encoded_id}` + +### Parameter Breakdown +| Variable | Required | Description +| ----------- | -------- | ----------- +| tokenA | Yes | The first part of 3 tokens provided to you after creating a *incoming-webhook*. The OAuthToken is not required if using the Slack Webhook. +| tokenB | Yes | The second part of 3 tokens provided to you after creating a *incoming-webhook*. The OAuthToken is not required if using the Slack Webhook. +| tokenC | Yes | The last part of 3 tokens provided to you after creating a *incoming-webhook*. The OAuthToken is not required if using the Slack Webhook. +| OAuthToken | Yes | The OAuth Token provided to you through the Slack App when using a *Bot* instead of a Webhook. Token A, B and C are not used when using Bots. +| channel | No | Channels must be prefixed with a hash tag **#**! You can specify as many channels as you want by delimiting each of them by a forward slash (/) in the url.
+| encoded_id | No | Slack allows you to represent channels and private channels by an *encoded_id*. If you know what they are, you can use this instead of the channel to send your notifications to. All encoded_id's must be prefixed with a plus symbol **+**! +| user_id | No | Users must be prefixed with an at symbol **@**! You can specify as many users as you want by delimiting each of them by a forward slash (/) in the url. +| botname | No | Identify the name of the bot that should issue the message. If one isn't specified then the default is to just use your account (associated with the *incoming-webhook*). +| footer | No | Identify whether or not you want the Apprise Footer icon to show with each message. By default this is set to **yes**. +| image | No | Identify whether or not you want the Apprise image (showing status color) to display with every message or not. By default this is set to **yes**. + +#### Example +Send a Slack notification to the channel `#nuxref`: +```bash +# Assuming our {tokenA} is T1JJ3T3L2 +# Assuming our {tokenB} is A1BRTD4JD +# Assuming our {tokenC} is TIiajkdnlazkcOXrIdevi7F +# our channel nuxref is represented by #nuxref +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + slack://T1JJ3T3L2/A1BRTD4JD/TIiajkdnlazkcOXrIdevi7F/#nuxref +``` + +Alternatively, if you're using the Bot, a Slack notification sent to the channel `#general` might look like this: +```bash +# Assuming our {OAuthToken} is xoxb-1234-1234-4ddbc191d40ee098cbaae6f3523ada2d +# our channel general is represented by #general +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + slack://xoxb-1234-1234-4ddbc191d40ee098cbaae6f3523ada2d/#general +``` + +Perhaps you want to disable the footer; you can do so like this: +```bash +# Assuming our {OAuthToken} is xoxb-1234-1234-4ddbc191d40ee098cbaae6f3523ada2d +# we want to send it to our #general channel; %23 is the encoded way of representing the # +# we set footer to no as well +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + slack://xoxb-1234-1234-4ddbc191d40ee098cbaae6f3523ada2d/%23general?footer=no +``` \ No newline at end of file diff --git a/content/en/docs/Integrations/.Notifications/Notify_smtp2go.md b/content/en/docs/Integrations/.Notifications/Notify_smtp2go.md new file mode 100644 index 00000000..aae1df49 --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_smtp2go.md @@ -0,0 +1,42 @@ +## SMTP2Go Notifications +* **Source**: https://www.smtp2go.com/ +* **Icon Support**: No +* **Attachment Support**: Yes +* **Message Format**: HTML +* **Message Limit**: 32768 Characters per message + +### Account Setup +You can create an account for free [on their website](https://www.smtp2go.com/). + +The next step is to simply generate an **API Key** associated with your account (on your Dashboard) [here](https://app.smtp2go.com/settings/apikeys/). + +### Syntax +Valid syntaxes are as follows: +* `smtp2go://{user}@{domain}/{apikey}/` +* `smtp2go://{user}@{domain}/{apikey}/{email}` +* `smtp2go://{user}@{domain}/{apikey}/{email1}/{email2}/{emailN}` + +You can adjust what the Name associated with the From email is set to as well: +* `smtp2go://{user}@{domain}/{apikey}/?from=Luke%20Skywalker` + +### Parameter Breakdown +| Variable | Required | Description +| ----------- | -------- | ----------- +| apikey | Yes | The API Key associated with your SMTP2Go account. You can acquire your API key from your dashboard [here](https://app.smtp2go.com/settings/apikeys/). +| domain | Yes | The domain associated with the sending email account +| user | Yes | The user gets paired with the domain you specify on the URL to make up the **From** email address your recipients receive their email from. +| email | No | You can specify as many email addresses as you wish. Each address you identify here will represent the **To**. +| from | No | This allows you to identify the name associated with the **From** email address when delivering your email.
+| to | No | This is an alias to the email variable. You can chain as many (To) emails as you want here separating each with a comma and/or space. +| cc | No | Identify users you wish to send as a Carbon Copy +| bcc | No | Identify users you wish to send as a Blind Carbon Copy + +#### Example +Send an SMTP2Go notification to the email address bill.gates@microsoft.com +```bash +# Assuming the {domain} we set up with our SMTP2Go account is example.com +# Assuming our {apikey} is api-60F0DD0AB5BA11ABA421F23C91C88EF4 +# We already know our To {email} is bill.gates@microsoft.com +# Assuming we want our email to come from noreply@example.com +apprise smtp2go://noreply@example.com/api-60F0DD0AB5BA11ABA421F23C91C88EF4/bill.gates@microsoft.com +``` \ No newline at end of file diff --git a/content/en/docs/Integrations/.Notifications/Notify_sns.md b/content/en/docs/Integrations/.Notifications/Notify_sns.md new file mode 100644 index 00000000..b84a4aea --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_sns.md @@ -0,0 +1,61 @@ +## Amazon Web Service (AWS) - Simple Notification Service (SNS) +* **Source**: https://aws.amazon.com/sns/ +* **Icon Support**: No +* **Message Format**: Text +* **Message Limit**: 160 Characters per message + +### Account Setup +You'll need to create an account with Amazon Web Service (AWS) first to use this. If you don't have one, you'll need your credit card (even though the first 12 months are free). Alternatively, if you already have one (or are using it through your company), you're good to go to the next step. + +The next thing you'll need to do is generate an _Access Key ID_ and _Secret Access Key_: +1. From the [AWS Management Console](https://console.aws.amazon.com) search for **IAM** under the _AWS services_ section or simply click [here](https://console.aws.amazon.com/iam/home?#/security_credentials). +1. Expand the section reading **Access keys (access key ID and secret access key)** +1.
Click on **Create New Access Key** +1. It will present the information to you on screen and let you download a file containing the same information. I suggest you do so since there is no way to retrieve this key again later on (unless you delete it and create a new one). + +So at this point, it is presumed you're set up and have your _Access Key ID_ and _Secret Access Key_ on hand. + +You now have all the tools you need to send SMS messages. + +If you want to take advantage of sending your notifications to _topics_: from the [AWS Management Console](https://console.aws.amazon.com) search for **Simple Notification Service** under the _AWS services_ section and configure as many topics as you want. You'll be able to reference them as well using this notification service. + +### Syntax +Valid syntaxes are as follows: +* **sns**://**{AccessKeyID}**/**{AccessKeySecret}**/**{Region}**/+**{PhoneNo}** +* **sns**://**{AccessKeyID}**/**{AccessKeySecret}**/**{Region}**/+**{PhoneNo1}**/+**{PhoneNo2}**/+**{PhoneNoN}** +* **sns**://**{AccessKeyID}**/**{AccessKeySecret}**/**{Region}**/#**{Topic}** +* **sns**://**{AccessKeyID}**/**{AccessKeySecret}**/**{Region}**/#**{Topic1}**/#**{Topic2}**/#**{TopicN}** + +You can mix and match these entries as well: +* **sns**://**{AccessKeyID}**/**{AccessKeySecret}**/**{Region}**/+**{PhoneNo1}**/#**{Topic1}** + +Enforcing a hashtag (#) for _topics_ and a plus sign (+) in front of phone numbers helps eliminate cases where ambiguity could be an issue, such as a _topic_ composed entirely of numbers. These characters are purely optional.
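If you script against the mix-and-match syntax above, the URL can be assembled from a list of targets. A sketch with placeholder credentials and targets:

```shell
#!/bin/sh
# Assemble an sns:// URL from a mixed list of phone numbers (prefixed
# with '+') and topics (prefixed with '#'). All values are placeholders.
base="sns://AKIAEXAMPLE/EXAMPLESECRET/us-east-2"

url="$base"
for t in "+18005551223" "#alerts" "#deploys"; do
  url="${url}/${t}"
done
echo "$url"
# Quote the URL when calling apprise so '#' and '+' reach it intact:
#   apprise -vv -b "Scaling event" "$url"
```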
+ +### Parameter Breakdown +| Variable | Required | Description +| --------------- | -------- | ----------- +| AccessKeyID | Yes | The generated _Access Key ID_ from the AWS Management Console +| AccessKeySecret | Yes | The generated _Access Key Secret_ from the AWS Management Console +| Region | Yes | The region code might look like **us-east-1**, **us-west-2**, **cn-north-1**, etc +| PhoneNo | No | The phone number MUST include the country code's dialling prefix as well when placed. You can optionally prefix the entire number with a plus symbol (+) to enforce that the value be interpreted as a phone number (in the event it can't be auto-detected otherwise). This field is also very friendly and supports brackets, spaces and hyphens in the event you want to format the number in an easy to read fashion. +| Topic | No | The topic you want to publish your message to. + +**Note:** This notification service does not use the title field; only the _body_ is passed along. + +#### Example +Send an AWS SNS notification as an SMS: +```bash +# Assuming our {AccessKeyID} is AHIAJGNT76XIMXDBIJYA +# Assuming our {AccessKeySecret} is bu1dHSdO22pfaaVy/wmNsdljF4C07D3bndi9PQJ9 +# Assuming our {Region} is us-east-2 +# Assuming our {PhoneNo} - is in the US somewhere making our country code +1 +# - identifies as 800-555-1223 +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + sns://AHIAJGNT76XIMXDBIJYA/bu1dHSdO22pfaaVy/wmNsdljF4C07D3bndi9PQJ9/us-east-2/+18005551223 + +# the following would also have worked (spaces, brackets, +# dashes are accepted in a phone no field; quote the URL so the +# shell passes it through intact): +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + "sns://AHIAJGNT76XIMXDBIJYA/bu1dHSdO22pfaaVy/wmNsdljF4C07D3bndi9PQJ9/us-east-2/+1(800)555-1223" + +``` diff --git a/content/en/docs/Integrations/.Notifications/Notify_sparkpost.md b/content/en/docs/Integrations/.Notifications/Notify_sparkpost.md new file mode 100644 index 00000000..5172cd5c --- /dev/null +++
b/content/en/docs/Integrations/.Notifications/Notify_sparkpost.md @@ -0,0 +1,100 @@ +## SparkPost Notifications +* **Source**: https://sparkpost.com/ +* **Icon Support**: No +* **Attachment Support**: yes +* **Message Format**: Text +* **Message Limit**: 32768 Characters per message + +### Account Setup +You can create an account for free [on their website](https://sparkpost.com/) but it comes with restrictions. + +Each domain you set up with them is accessible from your dashboard once you're signed in. You'll need to generate an API key and grant it `transmission` access. + +### Syntax +Valid syntaxes are as follows: +* `sparkpost://{user}@{domain}/{apikey}/` +* `sparkpost://{user}@{domain}/{apikey}/{email}/` +* `sparkpost://{user}@{domain}/{apikey}/{email1}/{email2}/{emailN}/` + +You may also identify your region if you aren't using the US servers like so: +* `sparkpost://{user}@{domain}/{apikey}/?region=eu` + +You can adjust what the Name associated with the From email is set to as well: +* `sparkpost://{user}@{domain}/{apikey}/?From=Darth%20Vader` + +### Email Extensions +If you wish to utilize extensions, you'll need to escape the addition/plus (+) character with **%2B** like so:
+`sparkpost://{user}@{domain}/{apikey}/chris%2Bextension@example.com` + +The Carbon Copy (**cc=**) and Blind Carbon Copy (**bcc=**) lists, however, are applied to each email sent. Hence if you send an email to 3 target users, the entire *cc* and *bcc* lists will be part of all 3 emails. + +### Parameter Breakdown +| Variable | Required | Description +| ----------- | -------- | ----------- +| apikey | Yes | The API Key associated with the domain you want to send your email from. This is available to you after signing into their website and accessing the [dashboard](https://app.sparkpost.com/app/domains). +| domain | Yes | The Domain you wish to send your email from; this domain must be registered and set up with your SparkPost account. +| user | Yes | The user gets paired with the domain you specify on the URL to make up the **From** email address your recipients receive their email from. +| batch | No | If batch mode is set to `yes` then all of the email addresses are sent in a single batch for SparkPost to handle. +| email | No | You can specify as many email addresses as you wish. Each address you identify here will represent the **To**.
**Note:** Depending on your account setup, SparkPost does restrict you from emailing certain addresses. +| region | No | Identifies which server region you intend to access. Supported options here are **eu** and **us**. By default this is set to **us** unless otherwise specified. This specifically affects which API server you will access to send your emails from. +| from | No | This allows you to identify the name associated with the **From** email address when delivering your email. +| to | No | This is an alias to the email variable. You can chain as many (To) emails as you want here separating each with a comma and/or space. +| cc | No | Carbon Copy email address(es). More than one can be separated with a space and/or comma. +| bcc | No | Blind Carbon Copy email address(es). More than one can be separated with a space and/or comma. + +#### Example +Send a SparkPost notification to the email address `bill.gates@microsoft.com` +```bash +# Assuming the {domain} we set up with our SparkPost account is example.com +# Assuming our {apikey} is 4b4f2918fddk5f8f91f +# We already know our To {email} is bill.gates@microsoft.com +# Assuming we want our email to come from noreply@example.com +apprise sparkpost://noreply@example.com/4b4f2918fddk5f8f91f/bill.gates@microsoft.com +``` + +### Header Manipulation +Some users may require special HTTP headers to be present when their email is posted to the server. This can be accomplished by just sticking a plus symbol (**+**) in front of any parameter you specify on your URL string. The below examples send a SparkPost notification to the email address `bill.gates@microsoft.com` while leveraging header manipulation.
+```bash +# Below would set the header: +# X-Token: abcdefg +# +# Assuming the {domain} we set up with our SparkPost account is example.com +# Assuming our {apikey} is 4b4f2918fddk5f8f91f +# We already know our To {email} is bill.gates@microsoft.com +# Assuming we want our email to come from noreply@example.com +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + "sparkpost://noreply@example.com/4b4f2918fddk5f8f91f/bill.gates@microsoft.com/?+X-Token=abcdefg" + +# Multiple headers just require more entries defined with a plus symbol (+) in front: +# Below would set the headers: +# X-Token: abcdefg +# X-Apprise: is great +# +# Assuming the {domain} we set up with our SparkPost account is example.com +# Assuming our {apikey} is 4b4f2918fddk5f8f91f +# We already know our To {email} is bill.gates@microsoft.com +# Assuming we want our email to come from noreply@example.com +apprise -vv -t "Test Message Title" -b "Test Message Body" \ + "sparkpost://noreply@example.com/4b4f2918fddk5f8f91f/bill.gates@microsoft.com/?+X-Token=abcdefg&+X-Apprise=is%20great" +``` + +### Global Substitution +SparkPost allows you to identify `{{tokens}}` that are wrapped in 2 curly braces. [See here on their section of templating](https://developers.sparkpost.com/api/template-language/) for more details. If you wish to pass in a keyword and its substituted value, simply use the colon (**:**) in front of any parameter you specify on your URL string. The below example sends a SparkPost notification to the email address `bill.gates@microsoft.com` while leveraging global substitution.
+```bash +# Below would set the token {{software}} to be substituted with Microsoft: +# Assuming the {domain} we set up with our SparkPost account is example.com +# Assuming our {apikey} is 4b4f2918fddk5f8f91f +# We already know our To {email} is bill.gates@microsoft.com +# Assuming we want our email to come from noreply@example.com +apprise -vv -t "Test Message Title" -b "Bill Gates works at {{software}}" \ + "sparkpost://noreply@example.com/4b4f2918fddk5f8f91f/bill.gates@microsoft.com/?:software=Microsoft" +``` + +You can specify as many tokens as you like. Apprise automatically provides some default (out of the box) translated tokens if you wish to use them; they are as follows: +* **app_id**: The Application Identifier; usually set to `Apprise`, but developers of custom applications may choose to over-ride this and place their name here. +* **app_desc**: Similar to the Application Identifier, this is the Application Description. It's usually just a slightly more descriptive alternative to the *app_id*. This is usually set to `Apprise Notification` unless it has been over-ridden by a developer. +* **app_color**: A hex code that identifies a colour associated with a message. For instance, `info` type messages are generally blue whereas `warning` ones are orange, etc. +* **app_type**: The message type itself; it may be `info`, `warning`, `success`, etc +* **app_title**: The actual title (`--title` or `-t` if from the command line) that was passed into the apprise notification when called. +* **app_body**: The actual body (`--body` or `-b` if from the command line) that was passed into the apprise notification when called. +* **app_url**: The URL associated with the Apprise instance (found in the **AppriseAsset()** object). Unless this has been over-ridden by a developer, its value will be `https://github.com/caronc/apprise`.
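As an illustration only (the template itself lives on SparkPost's side, and this layout is our own sketch, not something Apprise ships), a template consuming these default tokens might look like:

```
{{app_id}} ({{app_type}}): {{app_title}}

{{app_body}}

Sent via {{app_url}}
```

Since the tokens are substituted by SparkPost at send time, the same template works for every notification type Apprise emits.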
diff --git a/content/en/docs/Integrations/.Notifications/Notify_spontit.md b/content/en/docs/Integrations/.Notifications/Notify_spontit.md new file mode 100644 index 00000000..da08fdaa --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/Notify_spontit.md @@ -0,0 +1,38 @@ +## Spontit Notifications +* **Source**: https://spontit.com +* **Icon Support**: No +* **Message Format**: Text +* **Message Limit**: 5000 Characters per Message + +1. Visit https://spontit.com to create your account. +2. To acquire your `{user}`: Visit your profile at https://spontit.com/profile and take note of your User ID here. It will look something like: `user12345678901` +3. To acquire your `{apikey}`: Generate an API key at https://spontit.com/secret_keys (if you haven't already done so). + +### Syntax +Channels are optional; if no channel is specified then you are just personally notified. +* `spontit://{user}@{apikey}` +* `spontit://{user}@{apikey}/{channel_id}` +* `spontit://{user}@{apikey}/{channel_id1}/{channel_id2}/{channel_idN}/` + +### Parameter Breakdown +| Variable | Required | Description +| ----------- | -------- | ----------- +| user | Yes | This is the User ID associated with your Spontit account. It can be found on your [Spontit Profile page](https://spontit.com/profile). +| apikey | Yes | This is the API key you generated for your Spontit account. It can be found (and generated if it doesn't already exist) [here](https://spontit.com/secret_keys). +| channel_id | No | A Channel you wish to notify _that you created_. +| subtitle | No | The subtitle of your push. Only appears on iOS devices. 
+
+#### Example
+Send a Spontit notification to all devices associated with a project:
+```bash
+# Assume:
+# - our {user} is user28635710302
+# - our {apikey} is a6k4ABnck26hDh8AA3EDHoOVdDEUlw3nty
+apprise -vv -t "Test Message Title" -b "Test Message Body" \
+   spontit://user28635710302@a6k4ABnck26hDh8AA3EDHoOVdDEUlw3nty
+
+# Override the subtitle (iOS users only) by doing the following:
+# You must use URL encoded strings; below, spaces are swapped with %20
+apprise -vv -t "Test Message Title" -b "Test Message Body" \
+   spontit://myuser@myapi?subtitle=A%20Different%20Subtitle
+```
diff --git a/content/en/docs/Integrations/.Notifications/Notify_streamlabs.md b/content/en/docs/Integrations/.Notifications/Notify_streamlabs.md
new file mode 100644
index 00000000..97abc52b
--- /dev/null
+++ b/content/en/docs/Integrations/.Notifications/Notify_streamlabs.md
@@ -0,0 +1,57 @@
+## Streamlabs Notifications
+* **Source**: https://streamlabs.com/
+* **Icon Support**: Yes
+* **Message Format**: Text
+* **Message Limit**: 32768 Characters per message
+
+### Account Setup
+The process to get signed up with Streamlabs is a bit lengthy.
+
+**Note:** The screenshots and instructions below are 100% full credit to the **[LNBits Project](https://github.com/Fittiboy/lnbits)** ([found here](https://github.com/Fittiboy/lnbits/tree/master/lnbits/extensions/streamalerts#stream-alerts)).
+
+At the moment, the only service that has an open API to work with is Streamlabs, so this setup requires linking your Twitch/YouTube/Facebook account to Streamlabs.
+
+1. Log into [Streamlabs](https://streamlabs.com/login?r=https://streamlabs.com/dashboard).
+1.
Navigate to the API settings page to register an App:
![image](https://user-images.githubusercontent.com/28876473/127759145-710d53b6-3c19-4815-812a-9a6279d1b8bb.png)
![image](https://user-images.githubusercontent.com/28876473/127759182-da8a27cb-bb59-48fa-868e-c8892080ae98.png)
![image](https://user-images.githubusercontent.com/28876473/127759201-7c28e9f1-6286-42be-a38e-1c377a86976b.png)
+1. Fill out the form with anything it will accept as valid. Most fields can be gibberish, as the application is not supposed to ever move past the "testing" stage and is for your personal use only.
+In the "Whitelist Users" field, input the username of a Twitch account you control. While this feature is *technically* limited to Twitch, you can use the alerts overlay for donations on YouTube and Facebook as well.
+For now, simply set the "Redirect URI" to `http://localhost`; you will change this soon.
+Then, hit create:
![image](https://user-images.githubusercontent.com/28876473/127759264-ae91539a-5694-4096-a478-80eb02b7b594.png)
+1. Now we'll take the Client ID from the Streamlabs page and generate a code that will be used for apprise to communicate with Streamlabs.
+Replace the placeholders in the link below with your Client ID:
+`https://www.streamlabs.com/api/v1.0/authorize?client_id=&redirect_uri=http://localhost&response_type=code&scope=donations.read+donations.create+alerts.create`
+You will be redirected to localhost. Copy the `code` parameter that is specified in the browser's URL bar:
+`http://localhost/?code=`
+1.
Generate an access token using your code generated in the last step, your Client ID, and your Secret.
+Open a terminal and make a request to generate an access token that Apprise will utilize:
+```bash
+curl --request POST --url 'https://streamlabs.com/api/v1.0/token' -d 'grant_type=authorization_code&code=&client_id=&client_secret=&redirect_uri=http%3A%2F%2Flocalhost'
+```
+JSON similar to the following should be returned:
+`{"access_token":,"token_type":"Bearer","expires_in":3600,"refresh_token":""}`
+Note that the access token does not expire.
+1. Now copy and paste your access token to build the Streamlabs URL:
+`strmlabs:///?call=DONATIONS`
+
+### Syntax
+Valid syntax is as follows:
+* `strmlabs://{access_token}/`
+
+### Parameter Breakdown
+| Variable | Required | Description
+| ------------ | -------- | -----------
+| access_token | Yes | The access token generated from your Streamlabs account.
+
+#### Example
+Send a Streamlabs notification:
+```bash
+# Assuming our {access_token} is abcdefghij1234567890
+apprise -vv -t "Test Message Title" -b "Test Message Body" \
+   strmlabs://abcdefghij1234567890/
+```
\ No newline at end of file
diff --git a/content/en/docs/Integrations/.Notifications/Notify_stride.md b/content/en/docs/Integrations/.Notifications/Notify_stride.md
new file mode 100644
index 00000000..b5e278a8
--- /dev/null
+++ b/content/en/docs/Integrations/.Notifications/Notify_stride.md
@@ -0,0 +1,43 @@
+## :skull: Stride Notifications
+* **Source**: https://www.stride.com/
+* **Icon Support**: No
+* **Message Format**: Text
+* **Message Limit**: 2000 Characters per message
+* **Service End Date**: **Feb 14th 2019**
+
+### Service End Reason
+The creators of Stride ([Atlassian](https://www.atlassian.com)) formed a partnership with [Slack](https://slack.com) and are therefore discontinuing both _Stride_ and _Hipchat_ services. [See their official announcement here](https://www.atlassian.com/blog/announcements/new-atlassian-slack-partnership).
This was what was displayed on their website when looking up info on these products:
+![Screenshot from 2019-09-07 14-28-34](https://user-images.githubusercontent.com/850374/64478836-58f34a80-d17c-11e9-8779-940f57303b10.png)
+
+## Legacy Setup Details
+
+### Account Setup
+_Stride_ is the successor to _Hipchat_. It requires you to create a custom app and assign it to the channel you create.
+
+Let's start from the beginning:
+1. When you sign-up with stride.com, the site will ask if you want to join a group or create your own. Brand new users will start their own while companies might have already formed a group you want to join.
+2. Once you get set up, you'll have the option of creating a channel (or if you joined your company's group, you might already see channels you can join in front of you). Either way, you need to be in a channel before you get to the next step.
+3. Once you're in a channel you'll want to hook up _apprise_ (this notification service). To do this, you need to go to the App Manager (on the right-hand side in your browser) and choose to '_Connect your own app_'.
+ * It will ask you to provide a '_token name_' which can be whatever you want. This will be used for reference later. Click the _Create_ button when you're done.
+ * When it completes it will generate a token that looks something like:
      ```HQFtq4pF8rKFOlKTm9Th```
This is important and we'll reference it as your **{auth_token}**.
+ * If you scroll down it will also generate a conversation URL that might look like:
      ```https://api.atlassian.com/site/ce171c45-09ae-4fac-a73d-5a4b7a322872/conversation/a54a80b3-eaad-4524-9a3a-f6653bcfb100/message```
      Think of this URL like this:
```https://api.atlassian.com/site/{cloud_id}/conversation/{convo_id}/message```. Specifically pay close attention to the **{cloud_id}** and **{convo_id}** because you will need these to build your custom URL.
+
+### Syntax
+The valid syntax is as follows:
+* **stride**://**{auth_token}**/**{cloud_id}**/**{convo_id}**
+
+### Parameter Breakdown
+| Variable | Required | Description
+| ----------- | -------- | -----------
+| auth_token | Yes | The Authorization token that is created for you once you create your Custom App (that you associate with your channel).
+| cloud_id | Yes | This is extracted from the URL that is created for you when you create your Custom App (the same one that is identified above).
      **Note**: This is the first part of the conversation URL:
      https\:\/\/api.atlassian.com/site/**{cloud_id}**/conversation/{convo_id}/message +| convo_id | Yes | This is extracted from the URL that is created for you when you create your Custom App (the same one that is identified above).
      **Note**: This is the second part of the conversation URL:
https\:\/\/api.atlassian.com/site/{cloud_id}/conversation/**{convo_id}**/message
+
+#### Example
+Send a Stride notification:
+```bash
+# Assuming our {auth_token} is HQFtq4pF8rKFOlKTm9Th
+# Assuming our {cloud_id} is ce171c45-09ae-4fac-a73d-5a4b7a322872
+# Assuming our {convo_id} is a54a80b3-eaad-4524-9a3a-f6653bcfb100
+apprise stride://HQFtq4pF8rKFOlKTm9Th/ce171c45-09ae-4fac-a73d-5a4b7a322872/a54a80b3-eaad-4524-9a3a-f6653bcfb100
+```
\ No newline at end of file
diff --git a/content/en/docs/Integrations/.Notifications/Notify_syslog.md b/content/en/docs/Integrations/.Notifications/Notify_syslog.md
new file mode 100644
index 00000000..2f179be0
--- /dev/null
+++ b/content/en/docs/Integrations/.Notifications/Notify_syslog.md
@@ -0,0 +1,72 @@
+## Syslog Notifications
+* **Source**: https://tools.ietf.org/html/rfc5424
+* **Icon Support**: No
+* **Message Format**: Text
+* **Message Limit**: 32768 Characters per message
+
+Syslog is a way for network devices to send event messages to a logging server – usually known as a Syslog server. The Syslog protocol is supported by a wide range of devices and can be used to log different types of events.
+
+### Syntax
+Valid syntaxes are as follows:
+* `syslog://`
+* `syslog://{facility}`
+* `syslog://{host}`
+* `syslog://{host}:{port}`
+* `syslog://{host}/{facility}`
+* `syslog://{host}:{port}/{facility}`
+
+One might change the facility from its default like so:
+* `syslog://local5`
+
+One might change the facility on a remote syslog (rsyslog) server from its default like so:
+* `syslog://localhost/local5`
+
+### Parameter Breakdown
+| Variable | Required | Description
+| ----------- | -------- | -----------
+| host | No | Query a remote Syslog server (rsyslog) by optionally specifying the hostname.
+| port | No | The remote port associated with your rsyslog server. If this value isn't specified, port **514** is used by default.
+| facility | No | The facility to use; by default it is `user`.
Valid options are **kern**, **user**, **mail**, **daemon**, **auth**, **syslog**, **lpr**, **news**, **uucp**, **cron**, **local0**, **local1**, **local2**, **local3**, **local4**, **local5**, **local6**, and **local7**.
+| logperror | No | Additionally send the log message to _stderr_. This method is ignored when performing a remote query.
+| logpid | No | Include the PID as part of the log output.
+| mode | No | This is automatically detected if not specified. The mode determines if we're using `rsyslog` (Remote SysLog) vs `syslog` (Local). Hence the mode value can be either `remote` or `local`. The Apprise URL introduces some ambiguity between `syslog://{facility}` vs `syslog://{hostname}`. This flag allows you to specifically identify what your intentions are if the internal detection is wrong.
+
+### Example
+Send a Syslog notification:
+```bash
+# The following sends a syslog notification to the `user` facility
+apprise -vv -t "Test Message Title" -b "Test Message Body" \
+   syslog://
+```
+
+## RSysLog Testing
+To test the remote server, the following can be performed:
+```bash
+# Set up a simple docker file that will run our rsyslog server for us:
+cat << _EOF > dockerfile.syslog
+FROM ubuntu
+RUN apt update && apt install rsyslog -y
+RUN echo '\$ModLoad imudp\n \\
+\$UDPServerRun 514\n \\
+\$ModLoad imtcp\n \\
+\$InputTCPServerRun 514\n \\
+\$template RemoteStore, "/var/log/remote/%\$year%-%\$Month%-%\$Day%.log"\n \\
+:source, !isequal, "localhost" -?RemoteStore\n \\
+:source, isequal, "last" ~ ' > /etc/rsyslog.conf
+ENTRYPOINT ["rsyslogd", "-n"]
+_EOF
+
+# build it:
+docker build -t mysyslog -f dockerfile.syslog .
+
+# Now run it:
+docker run --cap-add SYSLOG --restart always \
+   -v $(pwd)/log:/var/log \
+   -p 514:514 -p 514:514/udp --name rsyslog mysyslog
+
+# In another terminal window, you can look into a directory
+# relative to the location you ran the above command for a directory
+# called `log`.
+# You may need to adjust its permissions; the log file will only get
+# created after you send an apprise notification.
+```
\ No newline at end of file
diff --git a/content/en/docs/Integrations/.Notifications/Notify_techulus.md b/content/en/docs/Integrations/.Notifications/Notify_techulus.md
new file mode 100644
index 00000000..512e8056
--- /dev/null
+++ b/content/en/docs/Integrations/.Notifications/Notify_techulus.md
@@ -0,0 +1,31 @@
+## Techulus Push Notifications
+* **Source**: https://push.techulus.com
+* **Icon Support**: No
+* **Message Format**: Text
+* **Message Limit**: 1000 Characters per message
+
+### Account Setup
+To use this plugin, you need to first download the mobile app and sign up through there:
+- [Apple](https://itunes.apple.com/us/app/push-by-techulus/id1444391917?ls=1&mt=8)
+- [Android](https://play.google.com/store/apps/details?id=com.techulus.push)
+
+Once you've got your account, you can get your API key from [here](https://push.techulus.com/login.html).
+You can also just get the **{apikey}** right out of the phone app that is installed. The **{apikey}** will look something like:
+- `b444a40f-3db9-4224-b489-9a514c41c009`
+
+### Syntax
+Valid syntax is as follows:
+* **push**://**{apikey}**/
+
+### Parameter Breakdown
+| Variable | Required | Description
+| ----------- | -------- | -----------
+| apikey | Yes | The apikey associated with your Techulus Push account.
+
+#### Example
+Send a Techulus Push notification:
+```bash
+# Assuming our {apikey} is b444a40f-3db9-4224-b489-9a514c41c009
+apprise -vv -t "Test Message Title" -b "Test Message Body" \
+   push://b444a40f-3db9-4224-b489-9a514c41c009/
+```
diff --git a/content/en/docs/Integrations/.Notifications/Notify_telegram.md b/content/en/docs/Integrations/.Notifications/Notify_telegram.md
new file mode 100644
index 00000000..ea85d4dc
--- /dev/null
+++ b/content/en/docs/Integrations/.Notifications/Notify_telegram.md
@@ -0,0 +1,78 @@
+## Telegram Notifications
+* **Source**: https://telegram.org/
+* **Icon Support**: Yes
+* **Attachment Support**: Yes
+* **Message Format**: Text
+* **Message Limit**: 4096 Characters per message
+
+### Account Setup
+Telegram is slightly more complicated than some of the other notification services, so here is a quick breakdown of what you need to know and do in order to send Notifications through it using this tool.
+
+At the very start (if you don't have an account already), you will need to connect with a phone. The site uses your phone number as its credential to let you into your account. So download and install the phone app first via [Android](https://telegram.org/dl/android) or [Apple](https://telegram.org/dl/ios).
+
+Once you're set up, it can be a bit easier to just use their web interface [here](https://telegram.org/dl/webogram) with a PC (especially for development); but this part is up to you.
+
+#### Bot Setup
+Telegram notifications require you to [create a bot](https://api.telegram.org). It's only after this is done that you will gain a vital piece of information Apprise needs called the **Token Identifier** (or **bot_token**).
+
+To do this you will have to open a communication (inside Telegram) to the **[BotFather](https://botsfortelegram.com/project/the-bot-father/)**. He is available to all users signed up to the platform. Once you've got a dialog box open to him:
+1. Type: ```/newbot```
+2.
Answer the questions it asks after doing this (such as what to name your bot, etc.).
+3. When you've completed step 2, you will be provided a *bot_token* that looks something like this: ```123456789:alphanumeric_characters```.
+4. Type ```/start``` now in the same dialog box to enable and instantiate your brand new bot.
+
+The good news is this process only has to be done once. Once you get your **bot_token**, hold on to it and no longer worry about having to repeat this process again. It's through this bot that Apprise is able to send notifications onto Telegram to different users.
+
+#### Chat ID Conundrum
+
+----
+**2021.12.23 Update**: Recently the developers of Telegram have made it easier to acquire this ID using their own built in tool [explained here](https://www.alphr.com/find-chat-id-telegram/). Thank you `@mattpackwood` for this tip!
+
+----
+Behind the scenes, Telegram notifies users by their **{chat_id}** and not their _easy-to-remember_ user name.
+Unfortunately (at this time) Telegram doesn't make it intuitive to get this **{chat_id}** without simple tricks and workarounds that can be found through Googling or just simply talking to their support team.
+
+However, Apprise can make this task a bit easier if the intention is to just private message yourself. If this is the case, simply send a private message to this new bot you just created (above). That's it!
+
+By doing this, Apprise is able to automatically detect _your_ **{chat_id}** from the message sent to the bot. Doing this also allows you to greatly simplify the Apprise URL to read:
+* **tgram**://**{bot_token}**/
+
+When using the short form of the Telegram/Apprise URL and the bot owner (probably you) is successfully detected, the **{chat_id}** it detected will appear in the logs after the notification is sent. Ideally you should take this and update your Apprise URL to explicitly reference this in the future.
+* **tgram**://**{bot_token}**/**{chat_id}**
+
+**Note**: you can also just go ahead and acquire the **{chat_id}** yourself after first messaging your bot as per the instructions above. Afterwards, you just need to visit `https://api.telegram.org/bot{bot_token}/getUpdates`.
+ * *Note:* the keyword `bot` must sit in front of the actual **{bot_token}** that you were given by the BotFather.
+ * The result will contain the message you sent; in addition to this there is a section entitled `chat` with the `id` identified within it. This is the **{chat_id}** you can use to directly message using Apprise.
+
+### Syntax
+The following syntax is valid:
+* **tgram**://**{bot_token}**/
+ * **Note**: As already identified above: Apprise is clever enough to determine the chat_id of the bot owner (you) _only if you've sent at least 1 private message to it first_.
+
+* **tgram**://**{bot_token}**/**{chat_id}**/
+* **tgram**://**{bot_token}**/**{chat_id1}**/**{chat_id2}**/**{chat_id3}**/
+
+
+If you want to see the icon/image associated with the notification, you can have it come through by adding a **?image=yes** to your URL string like so:
+* **tgram**://**{bot_token}**/**?image=Yes**
+* **tgram**://**{bot_token}**/**{chat_id}**/**?image=Yes**
+* **tgram**://**{bot_token}**/**{chat_id1}**/**{chat_id2}**/**{chat_id3}**/**?image=Yes**
+
+### Parameter Breakdown
+| Variable | Required | Description
+| ----------- | -------- | -----------
+| bot_token | Yes | The token that identifies the bot you created through the *[BotFather](https://botsfortelegram.com/project/the-bot-father/)*
+| chat_id | No | Identify the users you want your bot to deliver your notifications to. If you do not specify a chat_id, the notification script will attempt to detect the bot owner's (your) chat_id and use that.
+| image | No | You can optionally append the argument of **?image=Yes** to the end of your URL to have a Telegram message generated before the actual notice which uploads the image associated with it. Due to the service's limitations, Telegram doesn't allow you to post an image inline with a text message. But you can send a message that just contains an image. If this flag is set to true, *apprise* will send an image notification followed by the notice itself. Since receiving 2 messages for every 1 notice could be annoying to some, this has been made an option that defaults to being disabled.
+| format | No | The default value of this is _text_. But if you plan on managing the format yourself, you can optionally set this to _markdown_ or _html_ as well.
+| silent | No | A `yes/no` flag allowing you to send the notification in a silent fashion. By default this is set to `no`.
+| preview | No | A `yes/no` flag allowing you to display webpage previews of your post. By default this is set to `no`.
+
+#### Example
+Send a telegram notification to lead2gold:
+```bash
+# Assuming our {bot_token} is 123456789:abcdefg_hijklmnop
+# Assuming the {chat_id} belonging to lead2gold is 12315544
+apprise -vv -t "Test Message Title" -b "Test Message Body" \
+   tgram://123456789:abcdefg_hijklmnop/12315544/
+```
\ No newline at end of file
diff --git a/content/en/docs/Integrations/.Notifications/Notify_toasty.md b/content/en/docs/Integrations/.Notifications/Notify_toasty.md
new file mode 100644
index 00000000..c4b7cb8c
--- /dev/null
+++ b/content/en/docs/Integrations/.Notifications/Notify_toasty.md
@@ -0,0 +1,35 @@
+## :skull: Super Toasty Notifications
+* **Source**: http://supertoasty.com/
+* **Icon Support**: Yes
+* **Message Format**: Text
+* **Message Limit**: 32768 Characters per message
+* **Service End Date**: Somewhere between 2016 and 2017
+
+### Service End Reason
+It is hard to find many details on this project, or whether it still exists in some form or another.
+
+Here is the open source project that extended on this: https://github.com/JohnPersano/SuperToasts.
+
+## Legacy Setup Details
+
+There isn't too much configuration for Super Toasty notifications. The message is basically just passed to your online Super Toasty account and then gets relayed to the device(s) you've set up from there.
+
+### Syntax
+Valid syntax is as follows:
+* **toasty**://**{user_id}**@**{device_id}**
+* **toasty**://**{user_id}**@**{device_id1}**/**{device_id2}**/**{device_idN}**
+
+### Parameter Breakdown
+| Variable | Required | Description
+| ----------- | -------- | -----------
+| user_id | Yes | The user identifier associated with your Super Toasty account.
+| device_id | No | The device identifier to send your notification to.
+
+#### Example
+Send a Super Toasty notification to a configured device:
+```bash
+# Assuming our {user_id} is nuxref
+# Assuming our {device_id} is abcdefghijklmnop-abcdefg
+apprise toasty://nuxref@abcdefghijklmnop-abcdefg
+```
diff --git a/content/en/docs/Integrations/.Notifications/Notify_twilio.md b/content/en/docs/Integrations/.Notifications/Notify_twilio.md
new file mode 100644
index 00000000..d8066887
--- /dev/null
+++ b/content/en/docs/Integrations/.Notifications/Notify_twilio.md
@@ -0,0 +1,51 @@
+## Twilio
+* **Source**: https://twilio.com
+* **Icon Support**: No
+* **Message Format**: Text
+* **Message Limit**: 140 Characters per message
+
+### Account Setup
+To use Twilio, you will need to acquire your _Account SID_ and _Auth Token_. Both of these are accessible via the [Twilio Dashboard](https://www.twilio.com/console).
+
+You'll need to have a number defined as an Active Number ([from your dashboard here](https://www.twilio.com/console/phone-numbers/incoming)). This will become your **{FromPhoneNo}** when identifying the details below.
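Since the phone number fields described on this page tolerate brackets, spaces and hyphens, the clean-up can be pictured with a short Python sketch; `normalize_phone` is purely illustrative and not an Apprise function:

```python
import re

def normalize_phone(no: str) -> str:
    """Strip the friendly formatting (brackets, spaces, hyphens), keeping
    only the digits -- which must still include the country dialling prefix."""
    digits = re.sub(r"\D", "", no)
    if len(digits) < 11:  # loose sanity check; assumes prefix + 10-digit number
        raise ValueError(f"{no!r} appears to be missing its country prefix")
    return digits

print(normalize_phone("1-(900) 555-9999"))  # -> 19005559999
```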
+
+### Syntax
+Valid syntaxes are as follows:
+* **twilio**://**{AccountSID}**:**{AuthToken}**@**{FromPhoneNo}**/**{PhoneNo}**
+* **twilio**://**{AccountSID}**:**{AuthToken}**@**{FromPhoneNo}**/**{PhoneNo1}**/**{PhoneNo2}**/**{PhoneNoN}**
+
+If no _ToPhoneNo_ is specified, then the _FromPhoneNo_ will be messaged instead; hence the following is a valid URL:
+* **twilio**://**{AccountSID}**:**{AuthToken}**@**{FromPhoneNo}**/
+
+[Short Codes](https://www.twilio.com/docs/glossary/what-is-a-short-code) are also supported but require at least 1 Target PhoneNo:
+* **twilio**://**{AccountSID}**:**{AuthToken}**@**{ShortCode}**/**{PhoneNo}**
+* **twilio**://**{AccountSID}**:**{AuthToken}**@**{ShortCode}**/**{PhoneNo1}**/**{PhoneNo2}**/**{PhoneNoN}**
+
+### Parameter Breakdown
+| Variable | Required | Description
+| --------------- | -------- | -----------
+| AccountSID | Yes | The _Account SID_ associated with your Twilio account. This is available to you via the Twilio Dashboard.
+| AuthToken | Yes | The _Auth Token_ associated with your Twilio account. This is available to you via the Twilio Dashboard.
+| FromPhoneNo | **\*No** | The [Active Phone Number](https://www.twilio.com/console/phone-numbers/incoming) associated with your Twilio account you wish the SMS message to come from. It must be a number registered with Twilio. As an alternative to the **FromPhoneNo**, you may provide a [ShortCode](https://www.twilio.com/docs/glossary/what-is-a-short-code) instead. The phone number MUST include the country code's dialling prefix as well when placed. This field is also very friendly and supports brackets, spaces and hyphens in the event you want to format the number in an easy-to-read fashion.
+| ShortCode | **\*No** | The ShortCode associated with your Twilio account you wish the SMS message to come from. It must be a number registered with Twilio. As an alternative to the **ShortCode**, you may provide a **FromPhoneNo** instead.
+| PhoneNo | **\*No** | A phone number MUST include the country code's dialling prefix as well when placed. This field is also very friendly and supports brackets, spaces and hyphens in the event you want to format the number in an easy-to-read fashion.
**Note:** If you're using a _ShortCode_, then at least one _PhoneNo_ MUST be defined.
+
+**Note:** This notification service does not use the title field; only the _body_ is passed along.
+
+#### Example
+Send a Twilio Notification as an SMS:
+```bash
+# Assuming our {AccountSID} is AC735c307c62944b5a
+# Assuming our {AuthToken} is e29dfbcebf390dee9
+# Assuming our {FromPhoneNo} is +1-900-555-9999
+# Assuming our {PhoneNo} - is in the US somewhere making our country code +1
+#                        - identifies as 800-555-1223
+apprise -vv -t "Test Message Title" -b "Test Message Body" \
+   twilio://AC735c307c62944b5a:e29dfbcebf390dee9@19005559999/18005551223
+
+# the following would also have worked (spaces, brackets, dashes are
+# accepted in a phone no field), though the URL must then be quoted so
+# the shell treats it as a single argument:
+apprise -vv -t "Test Message Title" -b "Test Message Body" \
+   "twilio://AC735c307c62944b5a:e29dfbcebf390dee9@1-(900) 555-9999/1-(800) 555-1223"
+
+```
diff --git a/content/en/docs/Integrations/.Notifications/Notify_twist.md b/content/en/docs/Integrations/.Notifications/Notify_twist.md
new file mode 100644
index 00000000..7a783cba
--- /dev/null
+++ b/content/en/docs/Integrations/.Notifications/Notify_twist.md
@@ -0,0 +1,45 @@
+## Twist Notifications
+* **Source**: https://twist.com
+* **Icon Support**: No
+* **Message Format**: Text
+* **Message Limit**: 1000 Characters per Message
+
+[Sign in](https://twist.com/login) or [create an account](https://twist.com/signup) with the [Twist service](https://twist.com) if you don't already have one.
+
+The main thing with the Twist service is you always authenticate with an **{email}** and a **{password}**. Apprise can work with Twist just knowing these two values as well.
+
+### Syntax
+Valid authentication syntaxes are as follows:
+* **twist**://**{password}**:**{email}**
+* **twist**://**{email}**/**{password}**
+
+**Note:** If no channel is specified then by default the **#General** channel is messaged.
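The two equivalent authentication forms can be confusing at first glance; this hypothetical Python sketch (not Apprise's actual parser) shows how the same credentials fall out of either URL:

```python
from urllib.parse import urlparse

def twist_credentials(url: str):
    """Return (email, password) from either documented URL form."""
    p = urlparse(url)
    if p.password is not None:
        # twist://{password}:{email} -- the password comes first, and the
        # email's domain lands in the URL's host portion
        return f"{p.password}@{p.hostname}", p.username
    # twist://{email}/{password} -- the password is the first path segment
    return f"{p.username}@{p.hostname}", p.path.strip("/").split("/")[0]

print(twist_credentials("twist://abc123:test@example.com"))
print(twist_credentials("twist://test@example.com/abc123"))
```

Both calls resolve to the same pair, which is why either ordering works on the command line.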
+
+You can also message one or more channels too:
+* **twist**://**{password}**:**{email}**/#**{channel}**
+* **twist**://**{email}**/**{password}**/#**{channel}**
+* **twist**://**{password}**:**{email}**/#**{channel1}**/#**{channel2}**
+* **twist**://**{email}**/**{password}**/#**{channel1}**/#**{channel2}**
+
+Twist always associates your account with a *default team*. Apprise determines this for you and by default notifies the channels you specify from within it. However, since it is possible to have your login/password associated with more than one **Team**, you can use the colon (:) as a delimiter to explicitly identify which team/channel you wish to message.
+
+* **twist**://**{password}**:**{email}**/#**{team}**:**{channel}**
+* **twist**://**{email}**/**{password}**/#**{team}**:**{channel}**
+
+### Parameter Breakdown
+| Variable | Required | Description
+| ----------- | -------- | -----------
+| email | Yes | This is the email address you associated with your Twist account when you signed up.
+| password | Yes | This is the password you set when you signed up with Twist.
+| channel | No | This is the channel you wish to notify. If you don't specify one then the *#General* channel will be used by default from within your default team. You can optionally use a colon (:) placed in front of the channel name to force the message to a specific team (if you are part of more than one).
+
+#### Example
+Send a Twist notification to the channel #general associated with our default team.
+```bash
+# Assume:
+# - our {email} is test@example.com
+# - our {password} is abc123
+# - The {channel} is #general
+apprise -vv -t "Test Message Title" -b "Test Message Body" \
+   twist://abc123:test@example.com/#general
+```
diff --git a/content/en/docs/Integrations/.Notifications/Notify_twitter.md b/content/en/docs/Integrations/.Notifications/Notify_twitter.md
new file mode 100644
index 00000000..675b1d99
--- /dev/null
+++ b/content/en/docs/Integrations/.Notifications/Notify_twitter.md
@@ -0,0 +1,52 @@
+## Twitter Notifications
+* **Source**: https://twitter.com/
+* **Icon Support**: No
+* **Message Format**: Text
+* **Message Limit**: 10000 Characters per message if a private DM; otherwise public tweets are limited to 280 characters.
+
+### Account Setup
+Twitter Direct Messages are slightly more complicated than some of the other notification services, so here is a quick breakdown of what you need to know and do in order to send Notifications through it using this tool:
+
+1. First off, you need to generate a Twitter App from [developer.twitter.com](https://developer.twitter.com/en/apps). It's through a Twitter App we will be able to send our DMs.
+2. Once you create the app, you'll need to **generate the Access Tokens**. This is done from the "*Keys and Access Tokens*" Tab.
+
+You should now have 4 Tokens to work with at this point on this same page.
+* A Consumer Key
+* A Consumer Secret
+* An Access Token
+* An Access Token Secret
+
+From here you're ready to go. You can post public tweets or simply create DMs through the use of the mode= variable. By default Direct Messaging (DM) is used.
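The two character budgets above (280 for a public tweet, 10000 for a DM) are worth checking before you send; the helper below is purely illustrative and not part of Apprise:

```python
# Character budgets taken from the limits described above
LIMITS = {"tweet": 280, "dm": 10000}

def fits(body: str, mode: str = "dm") -> bool:
    """Return True if the message body fits the budget for the given mode;
    the default mode mirrors the plugin's default of Direct Messaging."""
    return len(body) <= LIMITS[mode]

print(fits("x" * 300, mode="tweet"))  # -> False: too long for a public tweet
print(fits("x" * 300))                # -> True: well within the DM budget
```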
+
+### Syntax
+Valid syntaxes are as follows:
+* **twitter**://**{ConsumerKey}**/**{ConsumerSecret}**/**{AccessToken}**/**{AccessSecret}**
+* **twitter**://**{ScreenName}**@**{ConsumerKey}**/**{ConsumerSecret}**/**{AccessToken}**/**{AccessSecret}**
+* **twitter**://**{ConsumerKey}**/**{ConsumerSecret}**/**{AccessToken}**/**{AccessSecret}**/**{ScreenName1}**/**{ScreenName2}**/**{ScreenNameN}**
+
+**Note** If no ScreenName is specified, then by default the Direct Message is sent to your own account.
+
+A Public tweet can be referenced like so:
+* **twitter**://**{ConsumerKey}**/**{ConsumerSecret}**/**{AccessToken}**/**{AccessSecret}**?**mode=tweet**
+
+### Parameter Breakdown
+| Variable | Required | Description
+| ----------- | -------- | -----------
+| ScreenName | No | The UserID of your account such as *l2gnux* (if your id is @l2gnux). If no ScreenName is specified, the Direct Message is sent to your own account.
+| ConsumerKey | Yes | The Consumer Key
+| ConsumerSecret | Yes | The Consumer Secret Key
+| AccessToken | Yes | The Access Token; you would have had to generate this one from your Twitter App Configuration.
+| AccessSecret | Yes | The Access Secret; you would have had to generate this one from your Twitter App Configuration.
+| Mode | No | This is the Twitter mode you want to operate in. Possible values are **dm** (for Private Direct Messages) and **tweet** to make a public post. By default this is set to **dm**
By default this is set to **dm**.
+
+#### Example
+Send a Twitter DM to @testaccount:
+```bash
+# Assuming our {ConsumerKey} is T1JJ3T3L2
+# Assuming our {ConsumerSecret} is A1BRTD4JD
+# Assuming our {AccessToken} is TIiajkdnlazkcOXrIdevi7F
+# Assuming our {AccessSecret} is FDVJaj4jcl8chG3
+# our user is @testaccount
+apprise -vv -t "Test Message Title" -b "Test Message Body" \
+   twitter://testaccount@T1JJ3T3L2/A1BRTD4JD/TIiajkdnlazkcOXrIdevi7F/FDVJaj4jcl8chG3
+```
diff --git a/content/en/docs/Integrations/.Notifications/Notify_windows.md b/content/en/docs/Integrations/.Notifications/Notify_windows.md
new file mode 100644
index 00000000..149a41f1
--- /dev/null
+++ b/content/en/docs/Integrations/.Notifications/Notify_windows.md
@@ -0,0 +1,28 @@
+## Microsoft Windows Notifications
+* **Source**: n/a
+* **Icon Support**: Yes
+* **Message Format**: Text
+* **Message Limit**: 250 Characters per message
+
+Display notifications right on your Microsoft Windows desktop. This only works if you're sending the notification to the same Windows system you're currently accessing. Hence this notification cannot be sent from one PC to another.
+
+You may have to install a dependency on your Windows system to get this to work. Simply run:
+```bash
+# windows:// minimum requirements
+pip install pywin32
+```
+
+### Syntax
+There are currently no options you can specify for this kind of notification, so it's really easy to reference:
+* **windows**://
+
+### Parameter Breakdown
+There are no parameters at this time.
+
+#### Example
+Assuming we're on a Windows computer, we can send a Windows Notification to ourselves:
+```bash
+# Send ourselves a windows notification
+apprise -vv -t "Test Message Title" -b "Test Message Body" \
+   windows://
+```
\ No newline at end of file
diff --git a/content/en/docs/Integrations/.Notifications/Notify_wxteams.md b/content/en/docs/Integrations/.Notifications/Notify_wxteams.md
new file mode 100644
index 00000000..a00133b1
--- /dev/null
+++ b/content/en/docs/Integrations/.Notifications/Notify_wxteams.md
@@ -0,0 +1,46 @@
+## Webex Teams Notifications
+* **Source**: https://teams.webex.com
+* **Icon Support**: No
+* **Message Format**: Markdown
+* **Message Limit**: 1000 Characters per message
+
+### Account Setup
+To use this plugin, you need to first access https://teams.webex.com and make yourself an account if you don't already have one. You'll want to create at least one 'space' before getting the 'incoming webhook'.
+
+Next you'll need to install the 'Incoming webhook' plugin found under the 'other' category here: https://apphub.webex.com/integrations/
+
+These links may not always work as time goes by and websites always change, but at the time of creating this plugin [this was a direct link to it](https://apphub.webex.com/integrations/incoming-webhooks-cisco-systems).
+
+If you're logged in, you'll be able to click on the 'Connect' button. From there you'll need to accept the permissions it will ask of you. Give the webhook a name such as 'apprise'.
+
+When you're done, you will receive a URL that looks something like this:
+```
+https://api.ciscospark.com/v1/webhooks/incoming/\
+   Y3lzY29zcGkyazovL3VzL1dFQkhPT0sajkkzYWU4fTMtMGE4Yy00
+```
+
+The last part of the URL is all you need to be interested in. Think of this URL as:
+* `https://api.ciscospark.com/v1/webhooks/incoming/{token}`
+
+So as you can see, we are left with a single token. This is what you need to build your Apprise URL with.
In the above (simplified) example, the token is:
+* **Token**: `Y3lzY29zcGkyazovL3VzL1dFQkhPT0sajkkzYWU4fTMtMGE4Yy00`
+
+**Note:** Apprise supports this URL _as-is_ (_as of v0.7.7_); you no longer need to parse the URL any further. However, there is slightly less overhead (internally) if you do.
+
+### Syntax
+Valid syntax is as follows:
+* `https://api.ciscospark.com/v1/webhooks/incoming/{token}`
+* `wxteams://{token}/`
+
+### Parameter Breakdown
+| Variable | Required | Description
+| ----------- | -------- | -----------
+| token | Yes | The token provided to you after creating an *incoming-webhook*
+
+#### Example
+Send a Webex Teams notification:
+```bash
+# Assuming our {token} is T1JJ3T3L2DEFK543
+apprise -vv -t "Test Message Title" -b "Test Message Body" \
+   wxteams://T1JJ3T3L2DEFK543/
+```
diff --git a/content/en/docs/Integrations/.Notifications/Notify_xbmc.md b/content/en/docs/Integrations/.Notifications/Notify_xbmc.md
new file mode 100644
index 00000000..7e875ee2
--- /dev/null
+++ b/content/en/docs/Integrations/.Notifications/Notify_xbmc.md
@@ -0,0 +1,38 @@
+## XBMC Notifications
+* **Source**: http://kodi.tv/
+* **Icon Support**: Yes
+* **Message Format**: Text
+* **Message Limit**: 250 Characters per message
+
+**Note:** XBMC is a legacy product and has been replaced by [[KODI|Notify_kodi]]. However, for systems that can't be updated (such as a jailbroken Apple TV 2), you can use this protocol.
+
+### Syntax
+Valid syntaxes are as follows:
+* **xbmc**://**{hostname}**
+* **xbmc**://**{hostname}**:**{port}**
+* **xbmc**://**{userid}**:**{password}**@**{hostname}**:**{port}**
+
+
+### Parameter Breakdown
+| Variable | Required | Description
+| ----------- | -------- | -----------
+| hostname | Yes | The server XBMC is listening on.
+| port | No | The port XBMC is listening on. By default the port is **8080**.
+| userid | No | The account login to your XBMC server.
+| password | No | The password associated with your XBMC Server.
+
+#### Example
+Send an XBMC notification to our server listening on port 8080:
+```bash
+# Assuming our {hostname} is xbmc.server.local
+apprise -vv -t "Test Message Title" -b "Test Message Body" \
+   xbmc://xbmc.server.local
+
+# You may have a password and user protecting your xbmc server; so the
+# following is another way to hit your xbmc server:
+# Assuming our {hostname} is xbmc.server.local
+# Assuming our {userid} is xbmc
+# Assuming our {password} is xbmc
+apprise -vv -t "Test Message Title" -b "Test Message Body" \
+   xbmc://xbmc:xbmc@xbmc.server.local
+```
\ No newline at end of file
diff --git a/content/en/docs/Integrations/.Notifications/Notify_xmpp.md b/content/en/docs/Integrations/.Notifications/Notify_xmpp.md
new file mode 100644
index 00000000..c550a66c
--- /dev/null
+++ b/content/en/docs/Integrations/.Notifications/Notify_xmpp.md
@@ -0,0 +1,41 @@
+## XMPP Notifications
+* **Source**: https://xmpp.org/
+* **Icon Support**: No
+* **Message Format**: Text
+* **Message Limit**: 32768 Characters per message
+
+XMPP Support requires **sleekxmpp** to work:
+```bash
+pip install sleekxmpp
+```
+
+### Syntax
+Valid syntaxes are as follows:
+* `xmpp://{user}:{password}@{hostname}`
+* `xmpps://{user}:{password}@{hostname}`
+* `xmpp://{user}:{password}@{hostname}/{jid}`
+* `xmpp://{user}:{password}@{hostname}/{jid1}/{jid2}/{jidN}`
+
+Secure connections should be referenced using **xmpps://** whereas insecure connections should be referenced via **xmpp://**.
+
+### Parameter Breakdown
+| Variable | Required | Description
+| ----------- | -------- | -----------
+| hostname | Yes | The server XMPP is listening on.
+| port | No | The port the XMPP server is listening on. By default the port is **5222** for **xmpp://** and **5223** for the secure **xmpps://** references.
+| userid | No | The account login used to build the JID with if one isn't specified.
+| password | Yes | The password associated with the XMPP Server.
+| jid | No | The JID account to associate/authenticate with the XMPP Server. This is automatically detected/built from the {userid} and {hostname} if it isn't specified.
+| xep | No | The XEP specifications to include. By default **xep_0030** (Service Discovery) and **xep_0199** (XMPP Ping) are used if nothing is specified.
+| to | No | The JID to notify
+
+#### Example
+Send an XMPP notification to our server listening on port 5222:
+```bash
+# Assuming the xmpp {hostname} is localhost
+# Assuming the jid is user@localhost
+# - constructed using {hostname} and {userid}
+# Assuming the xmpp {password} is abc123
+apprise -vv -t "Test Message Title" -b "Test Message Body" \
+   xmpp://user:abc123@localhost
+```
diff --git a/content/en/docs/Integrations/.Notifications/Notify_zulip.md b/content/en/docs/Integrations/.Notifications/Notify_zulip.md
new file mode 100644
index 00000000..144bbbc0
--- /dev/null
+++ b/content/en/docs/Integrations/.Notifications/Notify_zulip.md
@@ -0,0 +1,58 @@
+## Zulip Notifications
+* **Source**: https://zulipchat.com/
+* **Icon Support**: No
+* **Message Format**: Text
+* **Message Limit**: 10000 Characters per message
+
+### Account Setup
+To use this plugin, you must have a Zulip Chat bot defined; see [here for more details](https://zulipchat.com/help/add-a-bot-or-integration). At the time of writing this plugin, the instructions were:
+1. From your desktop, click on the gear icon in the upper right corner.
+2. Select Settings.
+3. On the left, click Your bots.
+4. Click Add a new bot.
+5. Fill out the fields, and click Create bot.
+
+If you know your organization **{ID}** (as it's part of your zulipchat.com URL), then you can also access your bot information by visiting: `https://ID.zulipchat.com/#settings/your-bots`
+
+Upon creating a bot successfully, you'll be able to access its API Token.
+
+### Syntax
+Valid syntaxes are as follows:
+* **zulip**://**{botname}**@**{organization}**/**{token}**/
+* **zulip**://**{botname}**@**{organization}**/**{token}**/**{stream}**
+* **zulip**://**{botname}**@**{organization}**/**{token}**/**{stream1}**/**{stream2}**/**{streamN}**
+* **zulip**://**{botname}**@**{organization}**/**{token}**/**{email}**
+* **zulip**://**{botname}**@**{organization}**/**{token}**/**{email1}**/**{email2}**/**{emailN}**
+
+**Note**: If neither a **{stream}** nor an **{email}** is specified, then by default the stream **general** is notified.
+
+You can also mix and match the entries above:
+* **zulip**://**{botname}**@**{organization}**/**{token}**/**{stream1}**/**{email1}**/
+
+### Parameter Breakdown
+| Variable | Required | Description
+| ----------- | -------- | -----------
+| organization| Yes | The organization you created your webhook under. The trailing part of the organization reading `.zulipchat.com` is not required here; however, it is gracefully handled if specified.
+| token | Yes | The API token provided to you after creating a *bot*
+| botname | Yes | The botname associated with the API Key. The `-bot` portion of the bot name is not required; however, it is gracefully handled if specified.
+| email | No | An email belonging to one of the users added to your organization; that user receives the notification as a private message.
+| stream | No | A stream to notify.
+
+#### Example
+Send a Zulip notification to the default #general stream:
+```bash
+# Assuming our {organization} is apprise
+# Assuming our {token} is T1JJ3T3L2
+# Assuming our {botname} is goober
+apprise -vv -t "Test Message Title" -b "Test Message Body" \
+   zulip://goober@apprise/T1JJ3T3L2
+```
+
+Send a Zulip notification to the #support stream:
+```bash
+# Assuming our {organization} is apprise
+# Assuming our {token} is T1JJ3T3L2
+# Assuming our {botname} is goober
+# Assuming our {stream} is #support
+apprise -vv -t "Test Message Title" -b "Test Message Body" \
+   zulip://goober@apprise/T1JJ3T3L2/support
+```
diff --git a/content/en/docs/Integrations/.Notifications/Sponsors.md b/content/en/docs/Integrations/.Notifications/Sponsors.md
new file mode 100644
index 00000000..ae6deb5e
--- /dev/null
+++ b/content/en/docs/Integrations/.Notifications/Sponsors.md
@@ -0,0 +1,12 @@
+# :heart: Sponsorship and Donations
+
+This project was created and is maintained on the few evenings and weekends I can allocate to it. I pay for many of the notification services Apprise supports so that I can provide support for them and keep everything in working order. Since Apprise supports more than 70 services, these fees can really add up.
+* :gem: [Sponsorship](https://github.com/sponsors/caronc) goes a long way in helping me cover costs. I've got some great stickers I can ship to you (no matter where you are) for those who help out here! [See here](https://github.com/sponsors/caronc) for more details on this and other cool perks.
+* :coffee: Even just [buying me a coffee](https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=MHANV39UZNQ5E) helps; your contribution is so appreciated and it truly does help support the product!
+
+Thank you to all of those who have already thrown a few dollars my way; your support has been truly appreciated!
+
+***
+
+# :medal_military: Kudos and Honorary Mentions
+Just a shout out to :medal_military: @keppo070 for his extra donation help!
It's very much appreciated!
\ No newline at end of file
diff --git a/content/en/docs/Integrations/.Notifications/Troubleshooting.md b/content/en/docs/Integrations/.Notifications/Troubleshooting.md
new file mode 100644
index 00000000..a09aaebb
--- /dev/null
+++ b/content/en/docs/Integrations/.Notifications/Troubleshooting.md
@@ -0,0 +1,186 @@
+
+## Table of Contents
+
+
+ * [General Troubleshooting](#general-troubleshooting)
+ * [Tag Matching Issues](#tag-matching-issues)
+ * [Too Much Data and Overflow Directive](#too-much-data-and-overflow-directive)
+ * [Special Characters and URL Conflicts](#special-characters-and-url-conflicts)
+ * [Formatting Issues](#formatting-issues)
+ * [Scripting Multi-Line Input/Output with CLI](#scripting-multi-line-inputoutput-with-cli)
+
+
+## General Troubleshooting
+
+The best thing you can do when troubleshooting problems with your notification is to work it out using the _apprise_ command line tool. You can get a better picture of what is going on with the notification you're troubleshooting by simply specifying **-v**; the more v's you specify, the more verbose the output gets:
+```bash
+# In the below example, I am trying to figure out why my mailto:// line
+# isn't working:
+apprise -vvv -t "test title" -b "test body" \
+   mailto://user:password@gmail.com
+```
+The output can help you pinpoint what is wrong with your URL.
+
+If the output appears cryptic, or you feel that you've exhausted all avenues, don't be afraid to [open a ticket and ask here](https://github.com/caronc/apprise/issues). It greatly helps if you share the output received from your debug response. It might be just a simple tweak to your URL that is needed; otherwise we might have a genuine bug we need to solve.
+
+Please feel free to join us on [Discord](https://discord.gg/MMPeN2D); it's not a big community, but it's growing slowly. You may just find an answer there after asking.
+
+Just be cautious, as the debugging information can potentially expose personal information (such as your password and/or private access tokens) to the screen.
Please remember to erase this or swap it with some random characters before posting publicly.
+
+## Tag Matching Issues
+
+If you tagged your URLs, they're not going to be notified unless you explicitly reference them with **--tag=** (or **-g**). You can always check to see what URLs have been loaded using the `all` tag directive paired with **--dry-run**:
+```bash
+# This simply lists all of the tags found in the apprise.txt file
+# You don't even need to specify the --config if you're reading files
+# from their default locations:
+apprise --dry-run --tag=all \
+   --config=/my/path/to/my/config/apprise.txt
+
+# Without a --tag specified, you'll only match URLs that have
+# no tag associated with them:
+apprise --dry-run \
+   --config=/my/path/to/my/config/apprise.txt
+
+# Otherwise, --dry-run can help you track what notifications are triggered
+# depending on what services you're targeting (without actually performing
+# any action):
+apprise --dry-run --tag=devops \
+   --config=/my/path/to/my/config/apprise.txt
+```
+
+## Too Much Data and Overflow Directive
+
+Out of the box, Apprise passes the full message (and title) you provide right along to the notification source(s). Some sources can handle a large surplus of data while others might not. These limitations are documented (*to the best of my knowledge*) on each [service's corresponding wiki page](https://github.com/caronc/apprise/wiki#notification-services).
+
+However, if you don't want to be bothered with upstream restrictions, Apprise has a somewhat _non-elegant_ way of handling these kinds of situations that you can leverage.
You simply need to tack on the **overflow** parameter somewhere in your Apprise URL; for example:
+* `schema://path/?overflow=split`
+* `schema://path/?overflow=truncate`
+* `schema://path/?overflow=upstream`
+* `schema://path/?other=options&more=settings&overflow=split`
+
+The possible **overflow=** options are defined as:
+
+| Variable | Description
+| ----------- | ----------- |
+| **split** | This will break the message body into as many smaller chunks as it takes to ensure the full delivery of what you wanted to notify with. The smaller chunk sizes are based on the restrictions imposed by the notification service itself.<br/><br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;For example, _Twitter_ restricts public tweets to 280 characters. If your Apprise/Twitter URL was updated to look like this: `twitter:///?overflow=split`, a message of, say, 1000 characters would be broken up (and sent) as 4 smaller messages (280 + 280 + 280 + 160).
+| **truncate** | This just ensures that regardless of how much content you're passing along to a remote notification service, the contents will never exceed the restrictions set by the service itself.<br/><br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Take our _Twitter_ example again (public tweets are restricted to 280 characters). If your Apprise/Twitter URL was updated to look like this: `twitter:///?overflow=truncate`, only the first 280 characters of a 1000 character message would be sent; the rest would simply be _truncated_ and ignored.
+| **upstream** | Simply let the upstream notification service handle all of the data passed to it (large or small). Apprise will not mangle/change its content in any way.<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**Note**: This is the default configuration option used when the `overflow=` directive is not set.
+
+Please note that the **overflow=** option isn't a perfect solution:
+* It can fail for services like Telegram which can take content in the format of _HTML_ (in addition to _Markdown_). If you're using _HTML_, then there is a very strong possibility that both the `overflow=split` and/or `overflow=truncate` option could cut your message in the middle of an un-closed HTML tag. Telegram doesn't fare too well with this and in the past (at the time of writing this wiki entry) would error and not display the data.
+* It doesn't elegantly split/truncate messages at the end of a word (near the message limits). It just makes a cut right at the notification service's hard limit itself.
+* The `overflow=split` can work against you. Consider a situation where you accidentally send thousands of log entries to yourself via an SMS notification service. Be prepared to get hundreds of text messages to re-construct all of the data you asked it to deliver! This may or may not be what you wanted to happen; in this case, perhaps `overflow=truncate` is a better choice. Some services might even incur extra costs on you if you exceed a certain message threshold. The point is, just be open minded when you choose to enable the _split_ option with notification services that have very small message size limits. The good news is that each supported notification service on the [Apprise Wiki](https://github.com/caronc/apprise/wiki) identifies what its hard limit is set to.
+
+So while the overflow _switch_ is a viable solution for most notification services, consider that it may not work perfectly for all.
+
+## Special Characters and URL Conflicts
+
+Apprise is built around URLs. Unfortunately, URLs have reserved characters that act as delimiters to help distinguish one argument/setting from another.
For example, in a URL, the **&**, **/**, and **%** all have extremely different meanings, and if they also reside in your password or user-name, they can cause quite a troubleshooting mess as to why your notifications aren't working.
+
+There is a workaround: you can replace these characters with the special **%XX** character-set (URL encoded) values. These encoded characters won't cause the URL to be mis-interpreted, allowing you to send notifications at will.
+
+Below is a chart of special characters and the values you should use in their place:
+
+| Character | URL Encoded | Description
+| ----------- | -------- | -----------
+| **%** | **%25** | The percent sign itself is the starting value for defining the %XX character sets.
+| **&** | **%26** | The ampersand sign is how a URL knows to stop reading the current variable and move onto the next. If this existed within a password or username, the URL would only be read 'up' to this character. You'll need to escape it if you make use of it.
+| **#** | **%23** | The hash tag (or pound symbol, as it's sometimes referred to) can be used in URLs as an anchor.
+| **?** | **%3F** | The question mark divides a URL path from the arguments you pass into it. You should ideally escape this if it resides in your password or is intended to be one of the variables you pass into your URL string.
+| _(a space)_ | **%20** | While most URLs will work with the space, it's a good idea to escape it so that it can be clearly read from the URL.
+| **/** | **%2F** | The slash is the most commonly used delimiter that exists in a URL and helps define a path and/or location.
+| **@** | **%40** | The at symbol is what divides the user and/or password from the hostname in a URL. If your username and/or password contains an '@' symbol, it can cause the URL parser to get confused.
+| **+** | **%2B** | By default an addition/plus symbol **(+)** is interpreted as a _space_ when specified in the URL.
It must be escaped if you actually want the character to be represented as a **+**.
+| **,** | **%2C** | A comma only needs to be escaped in _extremely rare circumstances_ where one exists at the very end of a specific URL that has been chained with others using a comma. [See PR#104](https://github.com/caronc/apprise/pull/104) for more details as to why you _may_ need this.
+| **:** | **%3A** | A colon will never need to be escaped unless it is found as part of a user/pass combination. Hence in a URL like `http://user:pass@host`, you can see that a colon is already used to separate the _username_ from the _password_. Thus if your _{user}_ actually has a colon in it, it can confuse the parser into treating what follows as a password (and not the remainder of your username). This is a very rare case, as most systems don't allow a colon in a username field.
+
+## Formatting Issues
+
+If your upstream server is not correctly interpreting the information you're passing it, it could be a simple tweak to Apprise you need to make to help it along.
+
+The thing with Apprise is that it doesn't know what you're feeding it (the format the text is in); so by default it just passes exactly what you hand it right along to the upstream service. Since Email operates using HTML formatting (by default), if you feed it raw text, it may not interpret the new lines correctly (because HTML ignores these characters); you can solve this problem in one of 3 ways:
+
+1. Change your email URL to read this instead (adding the `format=text` parameter)
+   * `mailtos://example.com?user=username&pass=password&to=myspy@example.com&format=text`
+1. For developers, your call to `notify()` should include the `body_format` argument:
+   ```python
+   # one more include to keep your code clean
+   from apprise import NotifyFormat
+
+   apobj.notify(
+       body=message,
+       title='My Notification Title',
+       body_format=NotifyFormat.TEXT,
+   )
+   ```
+1.
For developers, you can actually make a global setting out of the `body_format` so you don't have to keep setting it every time you call `notify()` (in case you intend to call this throughout your code in several locations):
+   ```python
+   import apprise
+   from apprise import NotifyFormat
+   from apprise import AppriseAsset
+
+   # Create your Apprise Asset
+   asset = apprise.AppriseAsset(body_format=apprise.NotifyFormat.TEXT)
+
+   # Create your Apprise object (pass in the asset):
+   apobj = apprise.Apprise(asset=asset)
+
+   # Add your objects (like you're already doing)
+   apobj.add('mailtos://example.com?user=username&pass=password&to=myspy@example.com')
+
+   # And your multi-line message
+   message = """
+   This message will self-destruct in 10 seconds...
+
+   Or not... (... yeah it probably won't at all)
+
+   Chris
+   """
+
+   # The big difference here is that now all calls to notify already have the
+   # body_format set to TEXT; Apprise knows everything you're feeding it will
+   # always be in this format. You can still specify body_format here in the
+   # future and over-ride it if you ever need to, but your notify call stays
+   # simple like you had it (and the multi-line message will work this time):
+   apobj.notify(
+       body=message,
+       title='My Notification Title',
+   )
+   ```
+**What it boils down to is:**
+* Developers can use the `body_format` argument, which tells Apprise what the **INPUT source** is. If Apprise knows this, it can make special accommodations for the services that are expecting another format. By default the `body_format` is `None` and the data fed to Apprise is not modified at all (it's just passed right through to the upstream provider).
+* End users can modify their URL to specify a `format=` which can be either `text`, `markdown`, or `html`, and which sets the **OUTPUT source**. Notification plugins can use this information to adapt to the data they're being fed and behave differently to suit your situation.
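On a related scripting note: if you build your Apprise URLs in Python and your username or password contains any of the reserved characters covered in the Special Characters section, the standard library can compute the escaped form for you. This is just a sketch using `urllib.parse.quote` (standard Python, not part of Apprise), with made-up credentials:

```python
from urllib.parse import quote

# Hypothetical credentials containing reserved characters:
user = "my@user"        # '@' would otherwise split the URL early
password = "abc/123%"   # '/' and '%' are reserved as well

# safe="" ensures '/' is escaped too (quote() leaves it alone by default):
url = "mailtos://{}:{}@example.com".format(
    quote(user, safe=""), quote(password, safe=""))

print(url)  # mailtos://my%40user:abc%2F123%25@example.com
```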
+
+## Scripting Multi-Line Input/Output with CLI
+If you're using the `apprise` tool from the command line, you may be trying to script it to send multiple lines. To accomplish this, there are a number of tweaks you can apply with `bash`, `sh`, or `ksh`:
+```bash
+# Send ourselves a DBus related multi-line notification using `stdin` and
+# the `cat` tool:
+cat << _EOF | apprise -vv -t "Multi-line STDIN Redirect Example" dbus://
+Line 1 of output
+Line 2 of output
+Line 3 of output
+_EOF
+
+# Another way is to just redirect the contents of a file straight back
+# into apprise:
+cat ~/notes.txt | apprise -vv -t "Multi-line cat STDIN Redirect Example 2" \
+   mailto://user:pass@hotmail.com
+
+# You can also pass content from a multi-line variable you
+# declared:
+MULTILINE_VAR="
+This variable has been defined
+with multiple lines in it."
+
+# Now send our variable straight into apprise:
+apprise -vv -t "Multi-line Variable Example" -b "$MULTILINE_VAR" \
+   gotify://localhost
+
+# Note: to preserve the new lines, be sure to wrap your
+# variable in quotes (like the example above does).
+
+```
\ No newline at end of file
diff --git a/content/en/docs/Integrations/.Notifications/URLBasics.md b/content/en/docs/Integrations/.Notifications/URLBasics.md
new file mode 100644
index 00000000..d585992e
--- /dev/null
+++ b/content/en/docs/Integrations/.Notifications/URLBasics.md
@@ -0,0 +1,27 @@
+# Apprise URL Basics
+
+## The Apprise URL
+
+Apprise URLs are the blueprints used to let the application know where to relay your notification(s). They follow a simple convention:
+* `service://configuration/?parameters`
+
+### Service
+The `service://` you specify determines which Apprise plugin will get loaded. For example, an Email address uses the service id of `mailto://` (and `mailtos://` for secure emails).
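Putting the three pieces of the convention together, the made-up URL `mailtos://user:password@example.com?format=text` breaks down as:

```
mailtos://                 -> service (the secure email plugin)
user:password@example.com  -> configuration
?format=text               -> parameters
```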
+
+[[Click here|Home#notification-services]] to see the full list of services supported by Apprise.
+
+### Configuration
+Each service requires its own set of configuration (depending on what it is). All configuration is found right after the `service://` declaration. Read up on the notification service you're trying to set up to learn what goes here. This can contain anything from _API Keys_, _passwords_, _hostnames_, etc.
+
+## Parameters
+These are completely optional to use, but sometimes they grant you more abilities.
+Additional parameters always start after the first question mark (**?**) defined in the Apprise URL. Here is where you can override some global system settings in addition to treating it as an alternative to _Core Configuration_ options.
+
+### Global Parameter Breakdown
+| Variable | Description
+| ----------- | -----------
+| overflow | This parameter can be set to either `split`, `truncate`, or `upstream`. This determines how Apprise delivers the message you pass it. By default this is set to `upstream`.<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;:information_source: `upstream`: Do nothing at all; pass the message to the service exactly as it was provided.<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;:information_source: `truncate`: Ensure that the message will fit within the service's documented upstream message limit. If more information was passed than the defined limit allows, the overflow is truncated.<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;:information_source: `split`: Similar to truncate, except if the message doesn't fit within the service's documented upstream message limit, it is split into smaller chunks and they are all delivered sequentially thereafter.
+| format | This parameter can be set to either `text`, `html`, or `markdown`. Some services support the ability to post content by several different means. The default varies (it can be any of the 3 mentioned, depending on which service you choose). You can optionally force this setting to stray from the defaults if you wish. If the service doesn't support different types of transmission formats, then this field is ignored.
+| verify | External requests made to secure locations (such as through the use of `https`) will have certificates associated with them. By default, Apprise will verify that these certificates are valid; if they are not, then no notification will be sent. On some occasions, a user might not have a certificate authority to verify the key against, or they trust the source; in this case you will want to set this flag to `no`. By default it is set to `yes`.
+| cto | This stands for Socket Connect Timeout. This is the number of seconds the underlying _Requests_ library will wait for your client to establish a connection to a remote machine (corresponding to the _connect()_ call on the socket). The default value is 4.0 seconds.
+| rto | This stands for Socket Read Timeout. This is the number of seconds the client will wait for the server to send a response. The default value is 4.0 seconds.
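The `split` and `truncate` overflow behaviours described above can be sketched in a few lines of Python (illustrative only; this is not Apprise's actual implementation):

```python
def truncate(body, limit):
    """overflow=truncate: keep only the first `limit` characters."""
    return body[:limit]

def split(body, limit):
    """overflow=split: break the body into `limit`-sized chunks."""
    return [body[i:i + limit] for i in range(0, len(body), limit)]

# A 1000 character message against a 280 character service limit:
message = "x" * 1000
print(len(truncate(message, 280)))                    # 280
print([len(chunk) for chunk in split(message, 280)])  # [280, 280, 280, 160]
```

This mirrors the Twitter example used elsewhere in these pages: a 1000 character message against a 280 character limit yields one truncated message, or four sequential chunks.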
\ No newline at end of file diff --git a/content/en/docs/Integrations/.Notifications/_index.md b/content/en/docs/Integrations/.Notifications/_index.md new file mode 100644 index 00000000..4e29e82c --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/_index.md @@ -0,0 +1,11 @@ +--- +title: "Apprise Guide" +linkTitle: "Apprise Guide" +description: >- + The Apprise Guide provides an overview of setting up notification channels and the specifics of each channel type. +--- + + + +The Developer Guide provides a step by step guide to building services and code snippets for specific scenarios. + diff --git a/content/en/docs/Integrations/.Notifications/config.md b/content/en/docs/Integrations/.Notifications/config.md new file mode 100644 index 00000000..f7759a73 --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/config.md @@ -0,0 +1,145 @@ +## :mega: Apprise Configuration +Configuration allows you to identify all of your notification services in one or more secure spots. + +There are 2 supported formats: +- **[[TEXT|Config_text]]**: Super basic and easy to use. +- **[[YAML|Config_yaml]]**: A wee bit more complicated (when comparing to TEXT) but offers you much more flexibility. + +## Configuration URLs +Whether loading your Apprise configuration from the command line or the Python framework, they also work in the form of URLs too. This allows for URL location endpoints to extend further then what is currently already supported in the future. + +Today, the following configuration URLs are supported: + +| URL | Description | +| --------------- | ----------- | +| **file://** | Reads the configuration from a file accessible locally from whence the directory you're standing in when using Apprise. You can specify an absolute path such as `file:///etc/config/apprise.yaml` or a relative one such as `file://var/config.txt`.

      The file extension associated with your configuration file plays a role in how it is interpreted. Configuration files ending with **.yml** and **.yaml** are assumed to be [YAML based](/caronc/apprise/wiki/Config_yaml) while anything else is assumed to be [TEXT based](/caronc/apprise/wiki/Config_text).

      **Note:** The `file://` is assumed to be the default schema used in your configuration URL (even if you didn't specify it). Hence Apprise will also just accept the path as-is such as `/absolute/path/to/apprise.cfg` or `relative/path/to/config.yaml`. | +| **http://** and **https://** | Retrieves your configuration from a web server (via the HTTP protocol). An example of this might be: `http://localhost/apprise/` or `https://example.com/apprise/config`.

      The server response plays a key role in how Apprise interprets the data content. The **Content-Type** residing in the _Response Header_ must contain one of the following:

      [YAML based](/caronc/apprise/wiki/Config_yaml):
      - **text/yaml**
      - **text/x-yaml**
      - **application/yaml**
      - **application/x-yaml**

      [TEXT based](/caronc/apprise/wiki/Config_text):
      - **text/plain**
      - **text/html**

      **Note:** Apprise always makes a **POST** to the server(s) in question. All content returned should be encoded as **UTF-8**. + +### Configuration Format Override +You can always override the Apprise configuration detection process (whether it is YAML or TEXT formatted) by simply adding `?format=text` or `?format=yaml` to the end of your configuration URL. This forces the polled configuration to be interpreted in a specific way. For example: +* `file:///etc/apprise/caronc.cfg?format=yaml` : This forces what would have otherwise been interpreted as a TEXT file (because of the extension) to be interpreted as a YAML one. +* `http://localhost/my/apprise/config?format=text`: Force the processing of the web response to be a TEXT based configuration. + +### Apprise API +There is an Apprise API built for hosting your configuration in the cloud; [check it out here](https://github.com/caronc/apprise-api). This is a great cloud solution for centralizing your configuration on your network so it is accessible from anywhere. + +## CLI +To get started you can check out this [[dedicated wiki page on the CLI|CLI_Usage]]. +The following options work really well with the command line: +* **--config** (**-c**): so you can manually load configuration files and process the notification URLs from there. You only need to provide this option if you don't have a configuration file already set up in the default search paths (explained below). +* **--tag** (**-g**): so you can filter which services you notify by the labels you assigned to them.
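Multiple `--tag` entries are OR'ed together, while comma-separated values inside a single entry are AND'ed. These matching rules can be illustrated with a small self-contained sketch (a simplification for illustration only, not Apprise's actual implementation):

```python
def matches(entry_tags, tag_args):
    """Return True if a service tagged with entry_tags would be
    notified given the --tag arguments supplied on the CLI.

    Separate --tag arguments are OR'ed together; comma/space
    separated values inside one argument are AND'ed."""
    if not tag_args:
        # No tags specified: every service is notified
        return True
    for arg in tag_args:
        required = {t for t in arg.replace(",", " ").split() if t}
        if required and required.issubset(entry_tags):
            return True
    return False

# A service tagged 'tv' and 'kitchen':
tags = {"tv", "kitchen"}

print(matches(tags, ["tv"]))                   # True  (single tag matches)
print(matches(tags, ["tv, basement"]))         # False (AND requires both)
print(matches(tags, ["basement", "kitchen"]))  # True  (OR across arguments)
```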
+ +If the Apprise CLI tool is executed without any notification URLs or Configuration based ones specified, then the following local files are tested to see if they exist and can be processed: +* **~/.apprise** +* **~/.apprise.yml** +* **~/.config/apprise** +* **~/.config/apprise.yml** + +Microsoft Windows users can find the configuration paths at: +* **%APPDATA%/Apprise/apprise** +* **%APPDATA%/Apprise/apprise.yml** +* **%LOCALAPPDATA%/Apprise/apprise** +* **%LOCALAPPDATA%/Apprise/apprise.yml** + +**Note:** The configuration locations identified above are ignored if a **--config** (**-c**) argument is specified. + +### CLI Examples: +Here are some simple examples: +```bash +# The following will only send notifications to services that have the +# `tv` tag associated with them. +apprise -vvv -b "Notify only Kodi's in house" --tag=tv +``` + +You can also get your configuration from a web server: +```bash +# website +apprise -vvv --config=https://myserver/my/apprise/config -b "notify everything" + +# you can specify as many --config (-c) lines as you want to add more +# and more notification services to be handled: +apprise -vvv --config=https://myserver/my/apprise/config \ + --config=/a/path/on/the/local/pc -b "notify everything" + +# Remember to tag everything because the most powerful feature is to +# load all of your services but only trigger the specific ones you're +# interested in notifying: +apprise -vvv --config=https://myserver/my/apprise/config \ + --config=/a/path/on/the/local/pc \ + -b "notify urls tagged with my-admin-team only" \ + --tag=my-admin-team +``` + +## Developers +For developers, there is a new object called **AppriseConfig()** which works very similarly to the **AppriseAsset()** object. It's just another object that can be easily consumed by the Apprise() instance.
+ +Up until now, you would add URLs to Apprise like so: +```python +from apprise import Apprise + +# Our object +a = Apprise() + +# Add our services (associate a tag with each) +a.add('mailto://user:pass@hotmail.com', tag='email') +a.add('gnome://', tag='desktop') + +# This command actually won't notify anything because tags were associated +# with our URLs +a.notify("A message!", title="An Optional Title") + +# This however will notify all of them. It uses the special keyword 'all' +# which disregards any tag names set. +a.notify("A message!", title="An Optional Title", tag="all") + +# To notify specific URLs that were loaded, you can match them by their +# tag; the below would only access our mailto:// entry: +a.notify("A message!", title="An Optional Title", tag="email") +``` + +Well this is how little your code has to change with configuration: +```python +from apprise import Apprise, AppriseConfig + +# Our object +a = Apprise() + +# Create an AppriseConfig() object +config = AppriseConfig() + +# Similar to the Apprise() we add our configuration paths + +# Add a configuration file by its local path +config.add('/local/path/on/your/server/config') + +# Same as the above, except it's a good idea to get in the +# habit of locating local files with the file:// prefix. +config.add('file://~/.apprise') + +# URLs work too: http:// and https:// +config.add('http://localhost/my/apprise/config/url/path') +config.add('http://example.com/config') + +# --- +# Our new config object can be simply added into our apprise +# instance as though it were another notification service: +a.add(config) + +# Send off all of our notifications +a.notify("A message!", title="An Optional Title") + +# filter our notifications by those associated with the +# devops tag: +a.notify("A message!", tag="devops") +``` + +## :label: Tagging from the CLI: +Tagging (with **--tag=** or **-g**) allows you to notify only the entries from the configuration you defined that you want to.
You could define hundreds of entries and, through tagging, just notify a few (or none at all). + +```bash +# assuming you got your configuration in place; tagging works like so: +apprise -b "has TagA" --tag=TagA +apprise -b "has TagA OR TagB" --tag=TagA --tag=TagB + +# Each item grouped within the same --tag value is AND'ed +apprise -b "has TagA AND TagB" --tag="TagA, TagB" +apprise -b "has (TagA AND TagB) OR TagC" --tag="TagA, TagB" --tag=TagC +``` \ No newline at end of file diff --git a/content/en/docs/Integrations/.Notifications/config_text.md b/content/en/docs/Integrations/.Notifications/config_text.md new file mode 100644 index 00000000..ac6f9943 --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/config_text.md @@ -0,0 +1,80 @@ +## Text Based Apprise Configuration +The TEXT based configuration files are pretty straightforward and very easy to work with. You just provide a list of your notification URLs like so: +```apache +# Use pound/hashtag (#) characters to comment lines +# Here is an example of a very basic entry (without tagging): +mailto://someone:theirpassword@gmail.com +slack://token_a/token_b/token_c +``` + +Tagging is a very feature-rich aspect of Apprise, and you can easily associate tags with your URLs by just placing them before the URL you define. If you want to specify more than one tag, just separate them with a space and/or comma. +```apache +# Use pound/hashtag (#) characters to comment lines +# The syntax is `tag(s)=URL` or just `URL` on each line +# +# Here is an example of a very basic entry (without tagging): +mailto://someone:theirpassword@gmail.com + +# Now here is an example of tag associations to another URL +# The equal sign (=) delimits the tag from the actual URL: +desktop=gnome:// + +# If you have more than one tag you'd like to associate with it, +# simply use a comma (,) and/or space to delimit your tags.
+# The same rules apply afterwards; just use an equal sign (=) +# to mark the end of your tagging definitions and start your +# notification service URL: +tv,kitchen=kodi://myuser:mypass@kitchen.hostname +tv,basement=kodi://myuser:mypass@basement.hostname +``` + +## Expanding Configuration Sources +The TEXT based configuration also supports the keyword `include` which allows you to pull more configuration down from other locations. For example: +```apache +# Perhaps this is your default configuration that is always read +# stored in ~/.config/apprise (or ~/.apprise) + +# The following could import all of the configuration located on your +# Apprise API: +include http://localhost:8080/get/apprise +``` + +From there you can easily use the CLI tool from the command line while managing your configuration remotely: +```bash +# automatically reads our above configuration +# Which further imports our additional configuration entries: +apprise -vv -t "my title" -b "my message body" +``` + +You can freely mix/match include statements and Apprise URLs as well, for example: +```apache +# Our config file located in ~/.config/apprise (or ~/.apprise) + +# Our imports +include http://localhost:8080/get/apprise + +# A relative config file import (relative to 'this' configuration file) +include more_configuration.cfg + +# Absolute path inclusion works well too: +include /etc/apprise/cfg + +# you can still include your other URLs here too +mailto://someone:theirpassword@gmail.com + +# as always, it's recommended you tag everything and then just +# use the --tag (or -g) switch to access the entries. This +# is especially important if you're going to start storing your +# configuration elsewhere too! +devops=slack://tokenA/tokenB/TokenC +``` + +All loaded configuration files can contain the `include` keyword as well, but by default `include` recursion only happens one level deep.
If you want to allow more files to be included, you need to specify `--recursion-depth` (`-R`) and set it to the number of recursive levels you will allow the include to occur for. By default this is set to 1 with the `apprise` tool. + +**Note:** For security reasons, an `http://` configuration source can NOT `include` a `file://` source. + +## Web Hosted TEXT Configuration +Apprise can retrieve configuration files from over a network as well using the HTTP protocol. +For HTTP requests, the **Content-Type** HTTP Header (_which defines Mime Type_) is very important. Apprise will parse remote network hosted configuration files as TEXT so long as you're using one of the following **Content-Type** entries: +- `text/plain` +- `text/html` \ No newline at end of file diff --git a/content/en/docs/Integrations/.Notifications/config_yaml.md b/content/en/docs/Integrations/.Notifications/config_yaml.md new file mode 100644 index 00000000..0182ef10 --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/config_yaml.md @@ -0,0 +1,169 @@ +## YAML Based Apprise Configuration +YAML support offers a much more advanced feature set than what is provided by the TEXT format. Apprise expects configuration files to be found with the extension of `.yml` or `.yaml`. + +Here is a configuration example in its absolute simplest form: +```yaml +# +# Minimal Configuration Example +# + +# Define your URLs +urls: + # Either one-line each entry like this: + - json://localhost + - xml://localhost + + # Or add a colon to the end of the URL where you can optionally provide + # override entries. One of the most likely entries to be used here + # is the tag entry. This extends the global tag (if defined) + # above + - windows://: + # 'tag' is a special keyword that allows you to associate tags with your + # services: + - tag: desktop +``` + +To expand on **tags**, you can also identify a _global entry_ that will be applied _to ALL of the subsequent URL entries defined in the YAML file_.
For example:
```yaml +# +# Global Tag Configuration Example +# + +# Define our version +version: 1 + +# Our global tags to associate with all of the URLs we define +tag: admin, devops + +# Define your URLs (Mandatory!) +urls: + - xml://localhost + - json://localhost + - kodi://user:pass@myserver +``` + +You can override the AppriseAsset object too, using the special keyword **asset**, if you know the attributes you want to update. +```yaml +# +# Asset Override Configuration Example +# + +# Define our version +version: 1 + +# Define an Asset object if you wish (Optional) +asset: + app_id: NuxRef + app_desc: NuxRef Notification + app_url: http://nuxref.com + +# Define your URLs (Mandatory!) +urls: + - mailto://bill:pass@protomail.com +``` + +YAML configuration gets more powerful when you want to utilize a URL more than once. Here is a more complicated example: +```yaml +# if no version is specified then version 1 is presumed. Thus this is a +# completely optional field. It's a good idea to just add this line because it +# will help with future ambiguity (if it ever occurs). +version: 1 + +# Define an Asset object if you wish (Optional) +asset: + app_id: AppriseTest + app_desc: Apprise Test Notifications + app_url: http://nuxref.com + +# Optionally define some global tags to associate with ALL of your +# urls below. +tag: admin, devops + +# Define your URLs +urls: + # One-liner (no colon at the end); just the url as you'd expect it: + - json://localhost + + # A colon grants you customization; the below associates another tag + # with our URL. This means it will have admin, devops and customer: + - xml://localhost: + - tag: customer + + # Replication Example # + # The more elements you specify under a URL the more times the URL will + # get replicated and used.
Hence this entry actually could be considered + # 2 URLs being called with just the destination email address changed: + - mailto://george:password@gmail.com: + - to: jason@hotmail.com + - to: fred@live.com + + # Again... to reiterate, the above mailto:// would actually fire two (2) + # separate emails each with a different destination address specified. + # Be careful when defining your arguments and differentiating between + # when to use the dash (-) and when not to. Each time you do, you will + # cause another instance to be created. + + # Defining more than one element in a multi-set is easy; it looks like this: + - mailto://jackson:abc123@hotmail.com: + - to: jeff@gmail.com + tag: jeff, customer + + - to: chris@yahoo.com + tag: chris, customer +``` + +## Expanding Configuration Sources +The YAML based configuration also supports the keyword `include` which allows you to pull more configuration down from other locations. For example: +```yaml +# Perhaps this is your default configuration that is always read +# stored in ~/.config/apprise.yml (or ~/.apprise.yml) +include: + # The following could import all of the configuration located on your + # Apprise API: + - http://localhost:8080/get/apprise +``` + +From there you can easily use the CLI tool from the command line while managing your configuration remotely: +```bash +# automatically reads our above configuration +# Which further imports our additional configuration entries: +apprise -vv -t "my title" -b "my message body" +``` + +You can freely mix/match include statements and Apprise URLs as well, for example: +```yaml +# Our config file located in ~/.config/apprise.yml (or ~/.apprise.yml) + +# Our imports +include: + # web based include (use https:// too if you like) + - http://localhost:8080/get/apprise + # A relative config file import (relative to 'this' configuration file) + - more_configuration.cfg + # Absolute path inclusion works well too: + - /etc/apprise/cfg + +# you can still include your other
URLs here too if you want: +# Define your URLs +urls: + - json://localhost + + # It's recommended you tag everything and then just + # use the --tag (or -g) switch to access the entries. This + # is especially important if you're going to start storing your + # configuration elsewhere too! + - slack://tokenA/tokenB/TokenC: + - tag: devops +``` + +All loaded configuration files can contain the `include` keyword as well, but by default `include` recursion only happens one level deep. If you want to allow more files to be included, you need to specify `--recursion-depth` (`-R`) and set it to the number of recursive levels you will allow the include to occur for. By default this is set to 1 with the `apprise` tool. + +**Note:** For security reasons, an `http://` configuration source can NOT `include` a `file://` source. + +## Web Hosted YAML Configuration +Apprise can retrieve configuration files from over a network as well using the HTTP protocol. +For HTTP requests, the **Content-Type** HTTP Header (_which defines Mime Type_) is very important.
Apprise will parse remote network hosted configuration files as YAML so long as you're using one of the following **Content-Type** entries: +* `text/yaml` +* `text/x-yaml` +* `application/yaml` +* `application/x-yaml` \ No newline at end of file diff --git a/content/en/docs/Integrations/.Notifications/showcase.md b/content/en/docs/Integrations/.Notifications/showcase.md new file mode 100644 index 00000000..f7ad3bae --- /dev/null +++ b/content/en/docs/Integrations/.Notifications/showcase.md @@ -0,0 +1,26 @@ +## Coverage +Apprise has been mentioned in: +- [Self Hosted Podcast 48](https://selfhosted.show/48) - Jul 2nd, 2021 +- [Self Hosted Podcast 27](https://selfhosted.show/27) - Sep 11, 2020 +- [Home Assistant Podcast 58](https://hasspodcast.io/ha058/) - Oct 31, 2019 +- [Python Bytes Podcast Episode 138](https://pythonbytes.fm/episodes/show/138/will-pyoxidizer-weld-shut-one-of-python-s-major-gaps) - Jul 4, 2019 + +## Integrations +- [Home Assistant](https://www.home-assistant.io/): Open source home automation that puts local control and privacy first. +- [Uptime Kuma](https://github.com/louislam/uptime-kuma): A popular self-hosted monitoring tool like _[Uptime Robot](https://uptimerobot.com/)_. +- [Apprise-GA](https://github.com/cstuder/apprise-ga): Apprise [GitHub Action](https://github.com/features/actions) integration. Also available in the [GitHub Marketplace](https://github.com/marketplace/actions/apprise-notification). +- [Bazarr](https://www.bazarr.media/): A companion application to [Sonarr](https://sonarr.tv) and [Radarr](https://radarr.video). It manages and downloads subtitles based on your requirements. You define your preferences by TV show or movie and Bazarr takes care of everything for you. +- [Mealie](https://github.com/hay-kot/mealie): Mealie is a self hosted recipe manager and meal planner. +- [Ouroboros](https://github.com/pyouroboros/ouroboros): Automatically update running docker containers with newest available image. 
+- [Mailrise](https://github.com/YoRyan/mailrise): Listens for emails and relays them through Apprise. +- [Apprise-Skill](https://github.com/domcross/apprise-skill): A component of [Mycroft](https://mycroft.ai/) voice assistant platform allowing it to work with the Apprise library. +- [Traktarr](https://github.com/l3uddz/traktarr): Script to add new series & movies to [Sonarr](https://sonarr.tv)/[Radarr](https://radarr.video) based on [Trakt](https://trakt.tv) lists. +- [Healthchecks](https://healthchecks.io): A Cron Monitoring Tool written in Python & Django. +- [ChangeDetection](https://github.com/dgtlmoon/changedetection.io): A tool that monitors websites you provide it so that it can alert you when the page changes (and/or is updated). +- [binance-trade-bot](https://github.com/edeng23/binance-trade-bot): A platform that monitors bitcoin currency value and can automatically move your coins around (while notifying you when this is done) to maximize your profit. + +## Public Acknowledgements +- [Bertelsmann's Developer Blog](https://developers.bertelsmann.com/en/blog/articles/apprise-your-push-messaging-musketeer-one-for-all-messenger-services) - Jan 28, 2022 +- [syslog-ng Blog](https://www.syslog-ng.com/community/b/blog/posts/first-steps-of-sending-alerts-to-discord-and-others-from-syslog-ng-http-and-apprise) - Apr 20, 2021 +- [Kumulos Newsletter](https://www.kumulos.com/2019/11/26/kumulos-features-update-fall-19/) - Nov 26, 2019 +- [Python Weekly - Issue 382](https://newsletry.com/Home/Python%20Weekly/b6a9876d-7f17-40f1-c7e5-08d686ed8528) - Jan 24, 2019 \ No newline at end of file diff --git a/content/en/docs/Integrations/Database_Guide/Databases/_index.md b/content/en/docs/Integrations/Database_Guide/Databases/_index.md new file mode 100644 index 00000000..18c31bbb --- /dev/null +++ b/content/en/docs/Integrations/Database_Guide/Databases/_index.md @@ -0,0 +1,10 @@ +--- +title: "Supported Databases" +linkTitle: "Supported Databases" +description: >- + This 
section provides a detailed overview of the supported databases, their setup parameters, as well as connection details. +--- + + + + diff --git a/content/en/docs/Integrations/Database_Guide/Databases/ascend.md b/content/en/docs/Integrations/Database_Guide/Databases/ascend.md new file mode 100644 index 00000000..ada5ba28 --- /dev/null +++ b/content/en/docs/Integrations/Database_Guide/Databases/ascend.md @@ -0,0 +1,17 @@ +--- +title: Ascend.io +hide_title: true +sidebar_position: 10 +version: 1 +weight: 2 +--- + +## Ascend.io + +The recommended connector library for Ascend.io is [impyla](https://github.com/cloudera/impyla). + +The expected connection string is formatted as follows: + +``` +ascend://{username}:{password}@{hostname}:{port}/{database}?auth_mechanism=PLAIN;use_ssl=true +``` diff --git a/content/en/docs/Integrations/Database_Guide/Databases/athena.md b/content/en/docs/Integrations/Database_Guide/Databases/athena.md new file mode 100644 index 00000000..feabad30 --- /dev/null +++ b/content/en/docs/Integrations/Database_Guide/Databases/athena.md @@ -0,0 +1,34 @@ +--- +title: Amazon Athena +hide_title: true +sidebar_position: 4 +version: 1 +--- + +## AWS Athena + +### PyAthenaJDBC + +[PyAthenaJDBC](https://pypi.org/project/PyAthenaJDBC/) is a Python DB API 2.0 compliant wrapper for the +[Amazon Athena JDBC driver](https://docs.aws.amazon.com/athena/latest/ug/connect-with-jdbc.html). + +The connection string for Amazon Athena is as follows: + +``` +awsathena+jdbc://{aws_access_key_id}:{aws_secret_access_key}@athena.{region_name}.amazonaws.com/{schema_name}?s3_staging_dir={s3_staging_dir}&... +``` + +Note that you'll need to escape & encode when forming the connection string like so: + +``` +s3://... -> s3%3A//...
+``` + +### PyAthena + +You can also use [PyAthena library](https://pypi.org/project/PyAthena/) (no Java required) with the +following connection string: + +``` +awsathena+rest://{aws_access_key_id}:{aws_secret_access_key}@athena.{region_name}.amazonaws.com/{schema_name}?s3_staging_dir={s3_staging_dir}&... +``` diff --git a/content/en/docs/Integrations/Database_Guide/Databases/bigquery.md b/content/en/docs/Integrations/Database_Guide/Databases/bigquery.md new file mode 100644 index 00000000..d186d9f8 --- /dev/null +++ b/content/en/docs/Integrations/Database_Guide/Databases/bigquery.md @@ -0,0 +1,87 @@ +--- +title: Google BigQuery +hide_title: true +sidebar_position: 20 +version: 1 +--- + +## Google BigQuery + +The recommended connector library for BigQuery is +[pybigquery](https://github.com/mxmzdlv/pybigquery). + +### Install BigQuery Driver + +Follow the steps [here](/docs/databases/docker-add-drivers) about how to +install new database drivers when setting up {{< param replacables.brand_name >}} locally via docker-compose. + +``` +echo "pybigquery" >> ./docker/requirements-local.txt +``` + +### Connecting to BigQuery + +When adding a new BigQuery connection in {{< param replacables.brand_name >}}, you'll need to add the GCP Service Account +credentials file (as a JSON). + +1. Create your Service Account via the Google Cloud Platform control panel, provide it access to the + appropriate BigQuery datasets, and download the JSON configuration file for the service account. +2. In {{< param replacables.brand_name >}} you can either upload that JSON or add the JSON blob in the following format (this should be the content of your credential JSON file): + +``` +{ + "type": "service_account", + "project_id": "...", + "private_key_id": "...", + "private_key": "...", + "client_email": "...", + "client_id": "...", + "auth_uri": "...", + "token_uri": "...", + "auth_provider_x509_cert_url": "...", + "client_x509_cert_url": "..." + } +``` + + + +3. 
Additionally, you can connect via a SQLAlchemy URI instead + + The connection string for BigQuery looks like: + + ``` + bigquery://{project_id} + ``` + + Go to the **Advanced** tab and add a JSON blob to the **Secure Extra** field in the database configuration form with + the following format: + + ``` + { + "credentials_info": <contents of the credentials JSON file> + } + ``` + + The resulting file should have this structure: + + ``` + { + "credentials_info": { + "type": "service_account", + "project_id": "...", + "private_key_id": "...", + "private_key": "...", + "client_email": "...", + "client_id": "...", + "auth_uri": "...", + "token_uri": "...", + "auth_provider_x509_cert_url": "...", + "client_x509_cert_url": "..." + } + } + ``` + +You should then be able to connect to your BigQuery datasets. + +To be able to upload CSV or Excel files to BigQuery in {{< param replacables.brand_name >}}, you'll need to also add the +[pandas_gbq](https://github.com/pydata/pandas-gbq) library. diff --git a/content/en/docs/Integrations/Database_Guide/Databases/clickhouse.md b/content/en/docs/Integrations/Database_Guide/Databases/clickhouse.md new file mode 100644 index 00000000..95ddd63a --- /dev/null +++ b/content/en/docs/Integrations/Database_Guide/Databases/clickhouse.md @@ -0,0 +1,44 @@ +--- +title: Clickhouse +hide_title: true +sidebar_position: 15 +version: 1 +--- + +## Clickhouse + +To use Clickhouse with {{< param replacables.brand_name >}} you will need to add the following Python libraries: + +``` +clickhouse-driver==0.2.0 +clickhouse-sqlalchemy==0.1.6 +``` + +If running {{< param replacables.brand_name >}} using Docker Compose, add the following to your `./docker/requirements-local.txt` file: + +``` +clickhouse-driver>=0.2.0 +clickhouse-sqlalchemy>=0.1.6 +``` + +The recommended connector library for Clickhouse is +[sqlalchemy-clickhouse](https://github.com/cloudflare/sqlalchemy-clickhouse).
+ +The expected connection string is formatted as follows: + +``` +clickhouse+native://{username}:{password}@{hostname}:{port}/{database}[?options…] +``` + +Here's a concrete example of a real connection string: + +``` +clickhouse+native://demo:demo@github.demo.trial.altinity.cloud/default?secure=true +``` + +If you're using Clickhouse locally on your computer, you can get away with using a native protocol URL that +uses the default user without a password (and doesn't encrypt the connection): + +``` +clickhouse+native://localhost/default +``` diff --git a/content/en/docs/Integrations/Database_Guide/Databases/cockroachdb.md b/content/en/docs/Integrations/Database_Guide/Databases/cockroachdb.md new file mode 100644 index 00000000..5ecb9a61 --- /dev/null +++ b/content/en/docs/Integrations/Database_Guide/Databases/cockroachdb.md @@ -0,0 +1,17 @@ +--- +title: CockroachDB +hide_title: true +sidebar_position: 16 +version: 1 +--- + +## CockroachDB + +The recommended connector library for CockroachDB is +[sqlalchemy-cockroachdb](https://github.com/cockroachdb/sqlalchemy-cockroachdb). + +The expected connection string is formatted as follows: + +``` +cockroachdb://root@{hostname}:{port}/{database}?sslmode=disable +``` diff --git a/content/en/docs/Integrations/Database_Guide/Databases/cratedb.md b/content/en/docs/Integrations/Database_Guide/Databases/cratedb.md new file mode 100644 index 00000000..09a1b6d5 --- /dev/null +++ b/content/en/docs/Integrations/Database_Guide/Databases/cratedb.md @@ -0,0 +1,24 @@ +--- +title: CrateDB +hide_title: true +sidebar_position: 36 +version: 1 +--- + +## CrateDB + +The recommended connector library for CrateDB is +[crate](https://pypi.org/project/crate/). +You need to install the extras as well for this library.
+We recommend adding something like the following +text to your requirements file: + +``` +crate[sqlalchemy]==0.26.0 +``` + +The expected connection string is formatted as follows: + +``` +crate://crate@127.0.0.1:4200 +``` diff --git a/content/en/docs/Integrations/Database_Guide/Databases/databricks.md b/content/en/docs/Integrations/Database_Guide/Databases/databricks.md new file mode 100644 index 00000000..9477c939 --- /dev/null +++ b/content/en/docs/Integrations/Database_Guide/Databases/databricks.md @@ -0,0 +1,67 @@ +--- +title: Databricks +hide_title: true +sidebar_position: 37 +version: 1 +--- + +## Databricks + +To connect to Databricks, first install [databricks-dbapi](https://pypi.org/project/databricks-dbapi/) with the optional SQLAlchemy dependencies: + +```bash +pip install databricks-dbapi[sqlalchemy] +``` + +There are two ways to connect to Databricks: using a Hive connector or an ODBC connector. Both ways work similarly, but only ODBC can be used to connect to [SQL endpoints](https://docs.databricks.com/sql/admin/sql-endpoints.html). + +### Hive + +To use the Hive connector you need the following information from your cluster: + +- Server hostname +- Port +- HTTP path + +These can be found under "Configuration" -> "Advanced Options" -> "JDBC/ODBC". + +You also need an access token from "Settings" -> "User Settings" -> "Access Tokens". + +Once you have all this information, add a database of type "Databricks (Hive)" in {{< param replacables.brand_name >}}, and use the following SQLAlchemy URI: + +``` +databricks+pyhive://token:{access token}@{server hostname}:{port}/{database name} +``` + +You also need to add the following configuration to "Other" -> "Engine Parameters", with your HTTP path: + +``` +{"connect_args": {"http_path": "sql/protocolv1/o/****"}} +``` + +### ODBC + +For ODBC you first need to install the [ODBC drivers for your platform](https://databricks.com/spark/odbc-drivers-download). 
+ +For a regular connection use this as the SQLAlchemy URI: + +``` +databricks+pyodbc://token:{access token}@{server hostname}:{port}/{database name} +``` + +And for the connection arguments: + +``` +{"connect_args": {"http_path": "sql/protocolv1/o/****", "driver_path": "/path/to/odbc/driver"}} +``` + +The driver path should be: + +- `/Library/simba/spark/lib/libsparkodbc_sbu.dylib` (Mac OS) +- `/opt/simba/spark/lib/64/libsparkodbc_sb64.so` (Linux) + +For a connection to a SQL endpoint you need to use the HTTP path from the endpoint: + +``` +{"connect_args": {"http_path": "/sql/1.0/endpoints/****", "driver_path": "/path/to/odbc/driver"}} +``` diff --git a/content/en/docs/Integrations/Database_Guide/Databases/dremio.md b/content/en/docs/Integrations/Database_Guide/Databases/dremio.md new file mode 100644 index 00000000..530fbc38 --- /dev/null +++ b/content/en/docs/Integrations/Database_Guide/Databases/dremio.md @@ -0,0 +1,26 @@ +--- +title: Dremio +hide_title: true +sidebar_position: 17 +version: 1 +--- + +## Dremio + +The recommended connector library for Dremio is +[sqlalchemy_dremio](https://pypi.org/project/sqlalchemy-dremio/). + +The expected connection string for ODBC (Default port is 31010) is formatted as follows: + +``` +dremio://{username}:{password}@{host}:{port}/{database_name}/dremio?SSL=1 +``` + +The expected connection string for Arrow Flight (Dremio 4.9.1+. Default port is 32010) is formatted as follows: + +``` +dremio+flight://{username}:{password}@{host}:{port}/dremio +``` + +This [blog post by Dremio](https://www.dremio.com/tutorials/dremio-apache-Feris/) has some +additional helpful instructions on connecting {{< param replacables.brand_name >}} to Dremio. 
diff --git a/content/en/docs/Integrations/Database_Guide/Databases/drill.md b/content/en/docs/Integrations/Database_Guide/Databases/drill.md
new file mode 100644
index 00000000..303eb55c
--- /dev/null
+++ b/content/en/docs/Integrations/Database_Guide/Databases/drill.md
@@ -0,0 +1,47 @@
+---
+title: Apache Drill
+hide_title: true
+sidebar_position: 6
+version: 1
+---
+
+## Apache Drill
+
+### SQLAlchemy
+
+The recommended way to connect to Apache Drill is through SQLAlchemy, using the
+[sqlalchemy-drill](https://github.com/JohnOmernik/sqlalchemy-drill) package.
+
+Once that is installed, you can connect to Drill in two ways: via the REST interface or via JDBC.
+If you are connecting via JDBC, you must have the Drill JDBC Driver installed.
+
+The basic connection string for Drill looks like this:
+
+```
+drill+sadrill://{username}:{password}@{host}:{port}/{storage_plugin}?use_ssl=True
+```
+
+To connect to Drill running in embedded mode on your local machine, you can use the following
+connection string:
+
+```
+drill+sadrill://localhost:8047/dfs?use_ssl=False
+```
+
+### JDBC
+
+Connecting to Drill through JDBC is more complicated, and we recommend following
+[this tutorial](https://drill.apache.org/docs/using-the-jdbc-driver/).
+
+The connection string looks like:
+
+```
+drill+jdbc://{username}:{password}@{host}:{port}
+```
+
+### ODBC
+
+We recommend reading the
+[Apache Drill documentation](https://drill.apache.org/docs/installing-the-driver-on-linux/) and
+the [GitHub README](https://github.com/JohnOmernik/sqlalchemy-drill#usage-with-odbc) to learn how to
+work with Drill through ODBC.
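The embedded-mode connection string above talks to Drill's web/REST port 8047. As a rough sketch of what the REST dialect does under the hood, a SQL query can also be posted directly to Drill's `/query.json` endpoint; the helper name and the sample query below are illustrative, not part of any library:

```python
import json
from urllib.request import Request

def build_drill_query(sql, host="localhost", port=8047, use_ssl=False):
    # Drill's REST API accepts SQL as a JSON document POSTed to /query.json;
    # port 8047 matches the embedded-mode example above.
    scheme = "https" if use_ssl else "http"
    payload = json.dumps({"queryType": "SQL", "query": sql}).encode("utf-8")
    return Request(
        f"{scheme}://{host}:{port}/query.json",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

req = build_drill_query("SELECT * FROM cp.`employee.json` LIMIT 5")
print(req.full_url)  # http://localhost:8047/query.json
```

Sending the request (e.g. with `urllib.request.urlopen`) returns the rows as JSON; the SQLAlchemy dialect is still the recommended path for {{< param replacables.brand_name >}} itself.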
diff --git a/content/en/docs/Integrations/Database_Guide/Databases/druid.md b/content/en/docs/Integrations/Database_Guide/Databases/druid.md
new file mode 100644
index 00000000..9bd5b4ff
--- /dev/null
+++ b/content/en/docs/Integrations/Database_Guide/Databases/druid.md
@@ -0,0 +1,42 @@
+---
+title: Apache Druid
+hide_title: true
+sidebar_position: 7
+version: 1
+---
+
+## Apache Druid
+
+Use the SQLAlchemy / DBAPI connector made available in the
+[pydruid library](https://pythonhosted.org/pydruid/).
+
+The connection string looks like:
+
+```
+druid://{username}:{password}@{host}:{port}/druid/v2/sql
+```
+
+### Customizing Druid Connection
+
+When adding a connection to Druid, you can customize the connection in a few different ways in the
+**Add Database** form.
+
+**Custom Certificate**
+
+You can add certificates in the **Root Certificate** field when configuring the new database
+connection to Druid.
+
+When using a custom certificate, pydruid will automatically use the https scheme.
+
+**Disable SSL Verification**
+
+To disable SSL verification, add the following to the **Extras** field:
+
+```
+{
+    "engine_params": {
+        "connect_args": {"scheme": "https", "ssl_verify_cert": false}
+    }
+}
+```
+
diff --git a/content/en/docs/Integrations/Database_Guide/Databases/elasticsearch.md b/content/en/docs/Integrations/Database_Guide/Databases/elasticsearch.md
new file mode 100644
index 00000000..7c30312d
--- /dev/null
+++ b/content/en/docs/Integrations/Database_Guide/Databases/elasticsearch.md
@@ -0,0 +1,68 @@
+---
+title: Elasticsearch
+hide_title: true
+sidebar_position: 18
+version: 1
+---
+
+## Elasticsearch
+
+The recommended connector library for Elasticsearch is
+[elasticsearch-dbapi](https://github.com/preset-io/elasticsearch-dbapi).
+
+The connection string for Elasticsearch looks like this:
+
+```
+elasticsearch+http://{user}:{password}@{host}:9200/
+```
+
+**Using HTTPS**
+
+```
+elasticsearch+https://{user}:{password}@{host}:9200/
+```
+
+Elasticsearch has a default limit of 10,000 rows; you can either increase this limit on your cluster or
+set {{< param replacables.brand_name >}}'s row limit in the configuration:
+
+```
+ROW_LIMIT = 10000
+```
+
+You can query multiple indices in SQL Lab, for example:
+
+```
+SELECT timestamp, agent FROM "logstash"
+```
+
+However, to use visualizations across multiple indices, you need to create an alias index on your cluster:
+
+```
+POST /_aliases
+{
+  "actions" : [
+    { "add" : { "index" : "logstash-**", "alias" : "logstash_all" } }
+  ]
+}
+```
+
+Then register your table with the alias name `logstash_all`.
+
+**Time zone**
+
+By default, {{< param replacables.brand_name >}} uses the UTC time zone for Elasticsearch queries. If you need to specify a time zone,
+please edit your Database and enter the settings of your specified time zone under Other > ENGINE PARAMETERS:
+
+```
+{
+    "connect_args": {
+        "time_zone": "Asia/Shanghai"
+    }
+}
+```
+
+Another time zone issue to note: before Elasticsearch 7.8, converting a string into a `DATETIME` object
+requires the `CAST` function, which does not support the `time_zone` setting, so upgrading to
+Elasticsearch 7.8 or later is recommended. From 7.8 onward, you can use the `DATETIME_PARSE` function,
+which does support the `time_zone` setting. To enable it, enter your Elasticsearch version number under
+Other > VERSION, and {{< param replacables.brand_name >}} will use `DATETIME_PARSE` for the conversion.
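One easy mistake with the ENGINE PARAMETERS field above is entering Python-style literals instead of JSON: the value must be valid JSON, with double-quoted keys and lowercase `true`/`false`. A quick standard-library sanity check (the `create_engine` forwarding shown in the comment is a rough sketch of how such parameters are typically consumed, not a documented API of this platform):

```python
import json

# The value entered under Other > ENGINE PARAMETERS must be valid JSON:
# double-quoted strings and lowercase true/false (not Python's True/False).
raw = '{"connect_args": {"time_zone": "Asia/Shanghai"}}'

params = json.loads(raw)  # raises json.JSONDecodeError if the JSON is malformed
print(params["connect_args"]["time_zone"])  # Asia/Shanghai

# The parsed mapping is ultimately forwarded to SQLAlchemy, roughly as:
#   create_engine(url, **params)
```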
diff --git a/content/en/docs/Integrations/Database_Guide/Databases/exasol.md b/content/en/docs/Integrations/Database_Guide/Databases/exasol.md new file mode 100644 index 00000000..de896b28 --- /dev/null +++ b/content/en/docs/Integrations/Database_Guide/Databases/exasol.md @@ -0,0 +1,17 @@ +--- +title: Exasol +hide_title: true +sidebar_position: 19 +version: 1 +--- + +## Exasol + +The recommended connector library for Exasol is +[sqlalchemy-exasol](https://github.com/exasol/sqlalchemy-exasol). + +The connection string for Exasol looks like this: + +``` +exa+pyodbc://{username}:{password}@{hostname}:{port}/my_schema?CONNECTIONLCALL=en_US.UTF-8&driver=EXAODBC +``` diff --git a/content/en/docs/Integrations/Database_Guide/Databases/firebird.md b/content/en/docs/Integrations/Database_Guide/Databases/firebird.md new file mode 100644 index 00000000..6de5f001 --- /dev/null +++ b/content/en/docs/Integrations/Database_Guide/Databases/firebird.md @@ -0,0 +1,23 @@ +--- +title: Firebird +hide_title: true +sidebar_position: 38 +version: 1 +--- + +## Firebird + +The recommended connector library for Firebird is [sqlalchemy-firebird](https://pypi.org/project/sqlalchemy-firebird/). +{{< param replacables.brand_name >}} has been tested on `sqlalchemy-firebird>=0.7.0, <0.8`. 
+ +The recommended connection string is: + +``` +firebird+fdb://{username}:{password}@{host}:{port}//{path_to_db_file} +``` + +Here's a connection string example of {{< param replacables.brand_name >}} connecting to a local Firebird database: + +``` +firebird+fdb://SYSDBA:masterkey@192.168.86.38:3050//Library/Frameworks/Firebird.framework/Versions/A/Resources/examples/empbuild/employee.fdb +``` diff --git a/content/en/docs/Integrations/Database_Guide/Databases/firebolt.md b/content/en/docs/Integrations/Database_Guide/Databases/firebolt.md new file mode 100644 index 00000000..f0a74f5a --- /dev/null +++ b/content/en/docs/Integrations/Database_Guide/Databases/firebolt.md @@ -0,0 +1,27 @@ +--- +title: Firebolt +hide_title: true +sidebar_position: 39 +version: 1 +--- + +## Firebolt + +The recommended connector library for Firebolt is [firebolt-sqlalchemy](https://pypi.org/project/firebolt-sqlalchemy/). +{{< param replacables.brand_name >}} has been tested on `firebolt-sqlalchemy>=0.0.1`. + +The recommended connection string is: + +``` +firebolt://{username}:{password}@{database} +or +firebolt://{username}:{password}@{database}/{engine_name} +``` + +Here's a connection string example of {{< param replacables.brand_name >}} connecting to a Firebolt database: + +``` +firebolt://email@domain:password@sample_database +or +firebolt://email@domain:password@sample_database/sample_engine +``` diff --git a/content/en/docs/Integrations/Database_Guide/Databases/google-sheets.md b/content/en/docs/Integrations/Database_Guide/Databases/google-sheets.md new file mode 100644 index 00000000..c251732e --- /dev/null +++ b/content/en/docs/Integrations/Database_Guide/Databases/google-sheets.md @@ -0,0 +1,13 @@ +--- +title: Google Sheets +hide_title: true +sidebar_position: 21 +version: 1 +--- + +## Google Sheets + +Google Sheets has a very limited +[SQL API](https://developers.google.com/chart/interactive/docs/querylanguage). 
The recommended +connector library for Google Sheets is [shillelagh](https://github.com/betodealmeida/shillelagh). + diff --git a/content/en/docs/Integrations/Database_Guide/Databases/hana.md b/content/en/docs/Integrations/Database_Guide/Databases/hana.md new file mode 100644 index 00000000..bd434d35 --- /dev/null +++ b/content/en/docs/Integrations/Database_Guide/Databases/hana.md @@ -0,0 +1,16 @@ +--- +title: Hana +hide_title: true +sidebar_position: 22 +version: 1 +--- + +## Hana + +The recommended connector library is [sqlalchemy-hana](https://github.com/SAP/sqlalchemy-hana). + +The connection string is formatted as follows: + +``` +hana://{username}:{password}@{host}:{port} +``` diff --git a/content/en/docs/Integrations/Database_Guide/Databases/hive.md b/content/en/docs/Integrations/Database_Guide/Databases/hive.md new file mode 100644 index 00000000..6c80a7ac --- /dev/null +++ b/content/en/docs/Integrations/Database_Guide/Databases/hive.md @@ -0,0 +1,16 @@ +--- +title: Apache Hive +hide_title: true +sidebar_position: 8 +version: 1 +--- + +## Apache Hive + +The [pyhive](https://pypi.org/project/PyHive/) library is the recommended way to connect to Hive through SQLAlchemy. + +The expected connection string is formatted as follows: + +``` +hive://hive@{hostname}:{port}/{database} +``` diff --git a/content/en/docs/Integrations/Database_Guide/Databases/hologres.md b/content/en/docs/Integrations/Database_Guide/Databases/hologres.md new file mode 100644 index 00000000..bf575354 --- /dev/null +++ b/content/en/docs/Integrations/Database_Guide/Databases/hologres.md @@ -0,0 +1,24 @@ +--- +title: Hologres +hide_title: true +sidebar_position: 33 +version: 1 +--- + +## Hologres + +Hologres is a real-time interactive analytics service developed by Alibaba Cloud. It is fully compatible with PostgreSQL 11 and integrates seamlessly with the big data ecosystem. + +Hologres sample connection parameters: + +- **User Name**: The AccessKey ID of your Alibaba Cloud account. 
+- **Password**: The AccessKey secret of your Alibaba Cloud account.
+- **Database Host**: The public endpoint of the Hologres instance.
+- **Database Name**: The name of the Hologres database.
+- **Port**: The port number of the Hologres instance.
+
+The connection string looks like:
+
+```
+postgresql+psycopg2://{username}:{password}@{host}:{port}/{database}
+```
diff --git a/content/en/docs/Integrations/Database_Guide/Databases/ibm-db2.md b/content/en/docs/Integrations/Database_Guide/Databases/ibm-db2.md
new file mode 100644
index 00000000..f044c237
--- /dev/null
+++ b/content/en/docs/Integrations/Database_Guide/Databases/ibm-db2.md
@@ -0,0 +1,23 @@
+---
+title: IBM DB2
+hide_title: true
+sidebar_position: 23
+version: 1
+---
+
+## IBM DB2
+
+The [IBM_DB_SA](https://github.com/ibmdb/python-ibmdbsa/tree/master/ibm_db_sa) library provides a
+Python / SQLAlchemy interface to IBM Data Servers.
+
+Here's the recommended connection string:
+
+```
+db2+ibm_db://{username}:{password}@{hostname}:{port}/{database}
+```
+
+There are two DB2 dialect versions implemented in SQLAlchemy. If you are connecting to a DB2 version without `LIMIT [n]` syntax, the recommended connection string to be able to use SQL Lab is:
+
+```
+ibm_db_sa://{username}:{password}@{hostname}:{port}/{database}
+```
diff --git a/content/en/docs/Integrations/Database_Guide/Databases/impala.md b/content/en/docs/Integrations/Database_Guide/Databases/impala.md
new file mode 100644
index 00000000..c2e76ee9
--- /dev/null
+++ b/content/en/docs/Integrations/Database_Guide/Databases/impala.md
@@ -0,0 +1,16 @@
+---
+title: Apache Impala
+hide_title: true
+sidebar_position: 9
+version: 1
+---
+
+## Apache Impala
+
+The recommended connector library for Apache Impala is [impyla](https://github.com/cloudera/impyla).
+
+The expected connection string is formatted as follows:
+
+```
+impala://{hostname}:{port}/{database}
+```
diff --git a/content/en/docs/Integrations/Database_Guide/Databases/kylin.md b/content/en/docs/Integrations/Database_Guide/Databases/kylin.md
new file mode 100644
index 00000000..7cbf35c5
--- /dev/null
+++ b/content/en/docs/Integrations/Database_Guide/Databases/kylin.md
@@ -0,0 +1,17 @@
+---
+title: Apache Kylin
+hide_title: true
+sidebar_position: 11
+version: 1
+---
+
+## Apache Kylin
+
+The recommended connector library for Apache Kylin is
+[kylinpy](https://github.com/Kyligence/kylinpy).
+
+The expected connection string is formatted as follows:
+
+```
+kylin://{username}:{password}@{hostname}:{port}/{project}?{param1}={value1}&{param2}={value2}
+```
diff --git a/content/en/docs/Integrations/Database_Guide/Databases/mysql.md b/content/en/docs/Integrations/Database_Guide/Databases/mysql.md
new file mode 100644
index 00000000..e7843215
--- /dev/null
+++ b/content/en/docs/Integrations/Database_Guide/Databases/mysql.md
@@ -0,0 +1,29 @@
+---
+title: MySQL
+hide_title: true
+sidebar_position: 25
+version: 1
+---
+
+## MySQL
+
+The recommended connector library for MySQL is [mysqlclient](https://pypi.org/project/mysqlclient/).
+
+Here's the connection string:
+
+```
+mysql://{username}:{password}@{host}/{database}
+```
+
+Host:
+
+- For Localhost or Docker running Linux: `localhost` or `127.0.0.1`
+- For On Prem: IP address or Host name
+- For Docker running in OSX: `docker.for.mac.host.internal`
+
+Port: `3306` by default
+
+One problem with `mysqlclient` is that it will fail to connect to newer MySQL databases using `caching_sha2_password` for authentication, since the plugin is not included in the client.
In this case, you should use [mysql-connector-python](https://pypi.org/project/mysql-connector-python/) instead:
+
+```
+mysql+mysqlconnector://{username}:{password}@{host}/{database}
+```
diff --git a/content/en/docs/Integrations/Database_Guide/Databases/netezza.md b/content/en/docs/Integrations/Database_Guide/Databases/netezza.md
new file mode 100644
index 00000000..9da4ba48
--- /dev/null
+++ b/content/en/docs/Integrations/Database_Guide/Databases/netezza.md
@@ -0,0 +1,17 @@
+---
+title: IBM Netezza Performance Server
+hide_title: true
+sidebar_position: 24
+version: 1
+---
+
+## IBM Netezza Performance Server
+
+The [nzalchemy](https://pypi.org/project/nzalchemy/) library provides a
+Python / SQLAlchemy interface to IBM Netezza Performance Server (aka Netezza).
+
+Here's the recommended connection string:
+
+```
+netezza+nzpy://{username}:{password}@{hostname}:{port}/{database}
+```
diff --git a/content/en/docs/Integrations/Database_Guide/Databases/oracle.md b/content/en/docs/Integrations/Database_Guide/Databases/oracle.md
new file mode 100644
index 00000000..d41f54aa
--- /dev/null
+++ b/content/en/docs/Integrations/Database_Guide/Databases/oracle.md
@@ -0,0 +1,17 @@
+---
+title: Oracle
+hide_title: true
+sidebar_position: 26
+version: 1
+---
+
+## Oracle
+
+The recommended connector library is
+[cx_Oracle](https://cx-oracle.readthedocs.io/en/latest/user_guide/installation.html).
+
+The connection string is formatted as follows:
+
+```
+oracle://{username}:{password}@{hostname}:{port}
+```
diff --git a/content/en/docs/Integrations/Database_Guide/Databases/pinot.md b/content/en/docs/Integrations/Database_Guide/Databases/pinot.md
new file mode 100644
index 00000000..8d5b8c20
--- /dev/null
+++ b/content/en/docs/Integrations/Database_Guide/Databases/pinot.md
@@ -0,0 +1,16 @@
+---
+title: Apache Pinot
+hide_title: true
+sidebar_position: 12
+version: 1
+---
+
+## Apache Pinot
+
+The recommended connector library for Apache Pinot is [pinotdb](https://pypi.org/project/pinotdb/).
+
+The expected connection string is formatted as follows:
+
+```
+pinot+http://{broker_host}:{broker_port}/query?controller=http://{controller_host}:{controller_port}/
+```
diff --git a/content/en/docs/Integrations/Database_Guide/Databases/postgres.md b/content/en/docs/Integrations/Database_Guide/Databases/postgres.md
new file mode 100644
index 00000000..cc7f062c
--- /dev/null
+++ b/content/en/docs/Integrations/Database_Guide/Databases/postgres.md
@@ -0,0 +1,42 @@
+---
+title: Postgres
+hide_title: true
+sidebar_position: 27
+version: 1
+---
+
+## Postgres
+
+Note that, if you're using docker-compose, the Postgres connector library [psycopg2](https://www.psycopg.org/docs/)
+comes out of the box with {{< param replacables.brand_name >}}.
+
+Postgres sample connection parameters:
+
+- **User Name**: UserName
+- **Password**: DBPassword
+- **Database Host**:
+  - For Localhost: localhost or 127.0.0.1
+  - For On Prem: IP address or Host name
+  - For AWS: Endpoint
+- **Database Name**: Database Name
+- **Port**: default 5432
+
+The connection string looks like:
+
+```
+postgresql://{username}:{password}@{host}:{port}/{database}
+```
+
+You can require SSL by adding `?sslmode=require` at the end:
+
+```
+postgresql://{username}:{password}@{host}:{port}/{database}?sslmode=require
+```
+
+You can read about the other SSL modes that Postgres supports in
+[Table 31-1 from this documentation](https://www.postgresql.org/docs/9.1/libpq-ssl.html).
+
+More information about PostgreSQL connection options can be found in the
+[SQLAlchemy docs](https://docs.sqlalchemy.org/en/13/dialects/postgresql.html#module-sqlalchemy.dialects.postgresql.psycopg2)
+and the
+[PostgreSQL docs](https://www.postgresql.org/docs/9.1/libpq-connect.html#LIBPQ-PQCONNECTDBPARAMS).
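If you already have a connection string and want to enforce SSL programmatically, the `sslmode` query parameter can be appended (or overwritten) with the standard library. The helper below is a sketch of that manipulation, not part of any connector API:

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def with_sslmode(uri, mode="require"):
    # Append (or overwrite) the sslmode query parameter on an existing URI,
    # preserving any other query parameters already present.
    parts = urlparse(uri)
    query = dict(parse_qsl(parts.query))
    query["sslmode"] = mode
    return urlunparse(parts._replace(query=urlencode(query)))

print(with_sslmode("postgresql://scott:tiger@db.example.com:5432/sales"))
# postgresql://scott:tiger@db.example.com:5432/sales?sslmode=require
```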
diff --git a/content/en/docs/Integrations/Database_Guide/Databases/presto.md b/content/en/docs/Integrations/Database_Guide/Databases/presto.md new file mode 100644 index 00000000..f3e882ca --- /dev/null +++ b/content/en/docs/Integrations/Database_Guide/Databases/presto.md @@ -0,0 +1,37 @@ +--- +title: Presto +hide_title: true +sidebar_position: 28 +version: 1 +--- + +## Presto + +The [pyhive](https://pypi.org/project/PyHive/) library is the recommended way to connect to Presto through SQLAlchemy. + +The expected connection string is formatted as follows: + +``` +presto://{hostname}:{port}/{database} +``` + +You can pass in a username and password as well: + +``` +presto://{username}:{password}@{hostname}:{port}/{database} +``` + +Here is an example connection string with values: + +``` +presto://datascientist:securepassword@presto.example.com:8080/hive +``` + +By default {{< param replacables.brand_name >}} assumes the most recent version of Presto is being used when querying the +datasource. If you’re using an older version of Presto, you can configure it in the extra parameter: + +``` +{ + "version": "0.123" +} +``` diff --git a/content/en/docs/Integrations/Database_Guide/Databases/redshift.md b/content/en/docs/Integrations/Database_Guide/Databases/redshift.md new file mode 100644 index 00000000..32997ade --- /dev/null +++ b/content/en/docs/Integrations/Database_Guide/Databases/redshift.md @@ -0,0 +1,25 @@ +--- +title: Amazon Redshift +hide_title: true +sidebar_position: 5 +version: 1 +--- + +## AWS Redshift + +The [sqlalchemy-redshift](https://pypi.org/project/sqlalchemy-redshift/) library is the recommended +way to connect to Redshift through SQLAlchemy. 
+
+You'll need the following setting values to form the connection string:
+
+- **User Name**: userName
+- **Password**: DBPassword
+- **Database Host**: AWS Endpoint
+- **Database Name**: Database Name
+- **Port**: default 5439
+
+Here's what the connection string looks like:
+
+```
+redshift+psycopg2://{username}:{password}@{aws_endpoint}:5439/{database_name}
+```
diff --git a/content/en/docs/Integrations/Database_Guide/Databases/rockset.md b/content/en/docs/Integrations/Database_Guide/Databases/rockset.md
new file mode 100644
index 00000000..2bb78a0f
--- /dev/null
+++ b/content/en/docs/Integrations/Database_Guide/Databases/rockset.md
@@ -0,0 +1,16 @@
+---
+title: Rockset
+hide_title: true
+sidebar_position: 35
+version: 1
+---
+
+## Rockset
+
+The connection string for Rockset is:
+
+```
+rockset://apikey:{your-apikey}@api.rs2.usw2.rockset.com/
+```
+
+For more complete instructions, we recommend the [Rockset documentation](https://docs.rockset.com/apache-Feris/).
diff --git a/content/en/docs/Integrations/Database_Guide/Databases/snowflake.md b/content/en/docs/Integrations/Database_Guide/Databases/snowflake.md
new file mode 100644
index 00000000..cb9ff749
--- /dev/null
+++ b/content/en/docs/Integrations/Database_Guide/Databases/snowflake.md
@@ -0,0 +1,31 @@
+---
+title: Snowflake
+hide_title: true
+sidebar_position: 29
+version: 1
+---
+
+## Snowflake
+
+The recommended connector library for Snowflake is
+[snowflake-sqlalchemy](https://pypi.org/project/snowflake-sqlalchemy/1.2.4/)<=1.2.4.
+
+The connection string for Snowflake looks like this:
+
+```
+snowflake://{user}:{password}@{account}.{region}/{database}?role={role}&warehouse={warehouse}
+```
+
+The schema is not necessary in the connection string, as it is defined per table/query. The role and
+warehouse can be omitted if defaults are defined for the user, e.g.:
+
+```
+snowflake://{user}:{password}@{account}.{region}/{database}
+```
+
+Make sure the user has privileges to access and use all required
+databases/schemas/tables/views/warehouses, as the Snowflake SQLAlchemy engine does not test for
+user/role rights during engine creation by default. However, when pressing the "Test Connection"
+button in the Create or Edit Database dialog, user/role credentials are validated by passing
+`"validate_default_parameters": True` to the `connect()` method during engine creation. If the user/role
+is not authorized to access the database, an error is recorded in the {{< param replacables.brand_name >}} logs.
diff --git a/content/en/docs/Integrations/Database_Guide/Databases/solr.md b/content/en/docs/Integrations/Database_Guide/Databases/solr.md
new file mode 100644
index 00000000..05084d55
--- /dev/null
+++ b/content/en/docs/Integrations/Database_Guide/Databases/solr.md
@@ -0,0 +1,17 @@
+---
+title: Apache Solr
+hide_title: true
+sidebar_position: 13
+version: 1
+---
+
+## Apache Solr
+
+The [sqlalchemy-solr](https://pypi.org/project/sqlalchemy-solr/) library provides a
+Python / SQLAlchemy interface to Apache Solr.
+
+The connection string for Solr looks like this:
+
+```
+solr://{username}:{password}@{host}:{port}/{server_path}/{collection}[/?use_ssl=true|false]
+```
diff --git a/content/en/docs/Integrations/Database_Guide/Databases/spark-sql.md b/content/en/docs/Integrations/Database_Guide/Databases/spark-sql.md
new file mode 100644
index 00000000..02127bd5
--- /dev/null
+++ b/content/en/docs/Integrations/Database_Guide/Databases/spark-sql.md
@@ -0,0 +1,16 @@
+---
+title: Apache Spark SQL
+hide_title: true
+sidebar_position: 14
+version: 1
+---
+
+## Apache Spark SQL
+
+The recommended connector library for Apache Spark SQL is [pyhive](https://pypi.org/project/PyHive/).
+
+The expected connection string is formatted as follows:
+
+```
+hive://hive@{hostname}:{port}/{database}
+```
diff --git a/content/en/docs/Integrations/Database_Guide/Databases/sql-server.md b/content/en/docs/Integrations/Database_Guide/Databases/sql-server.md
new file mode 100644
index 00000000..f9ceb4c7
--- /dev/null
+++ b/content/en/docs/Integrations/Database_Guide/Databases/sql-server.md
@@ -0,0 +1,16 @@
+---
+title: Microsoft SQL Server
+hide_title: true
+sidebar_position: 30
+version: 1
+---
+
+## SQL Server
+
+The recommended connector library for SQL Server is [pymssql](https://github.com/pymssql/pymssql).
+
+The connection string for SQL Server looks like this:
+
+```
+mssql+pymssql://{username}:{password}@{host}:{port}/{database}?Encrypt=yes
+```
diff --git a/content/en/docs/Integrations/Database_Guide/Databases/teradata.md b/content/en/docs/Integrations/Database_Guide/Databases/teradata.md
new file mode 100644
index 00000000..2f765a21
--- /dev/null
+++ b/content/en/docs/Integrations/Database_Guide/Databases/teradata.md
@@ -0,0 +1,36 @@
+---
+title: Teradata
+hide_title: true
+sidebar_position: 31
+version: 1
+---
+
+## Teradata
+
+The recommended connector library is
+[teradatasqlalchemy](https://pypi.org/project/teradatasqlalchemy/).
+
+The connection string for Teradata looks like this:
+
+```
+teradata://{user}:{password}@{host}
+```
+
+## ODBC Driver
+
+There's also an older connector named
+[sqlalchemy-teradata](https://github.com/Teradata/sqlalchemy-teradata) that
+requires the installation of ODBC drivers. The Teradata ODBC drivers are available
+here: https://downloads.teradata.com/download/connectivity/odbc-driver/linux
+
+Here are the required environment variables:
+
+```
+export ODBCINI=/.../teradata/client/ODBC_64/odbc.ini
+export ODBCINST=/.../teradata/client/ODBC_64/odbcinst.ini
+```
+
+We recommend using teradatasqlalchemy because it does not require ODBC
+drivers and is more regularly updated.
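When the older ODBC connector is used, a common failure mode is that the two environment variables above are not visible to the process that creates the engine. A small sketch of a pre-flight check (the helper is our own, not part of either Teradata library):

```python
import os

# The older ODBC-based connector needs these set before the engine is created.
REQUIRED_VARS = ("ODBCINI", "ODBCINST")

def missing_odbc_vars(environ=os.environ):
    """Return the names of the Teradata ODBC variables that are not yet set."""
    return [name for name in REQUIRED_VARS if name not in environ]

missing = missing_odbc_vars()
if missing:
    print("Set before connecting:", ", ".join(missing))
```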
diff --git a/content/en/docs/Integrations/Database_Guide/Databases/trino.md b/content/en/docs/Integrations/Database_Guide/Databases/trino.md
new file mode 100644
index 00000000..f009404e
--- /dev/null
+++ b/content/en/docs/Integrations/Database_Guide/Databases/trino.md
@@ -0,0 +1,27 @@
+---
+title: Trino
+hide_title: true
+sidebar_position: 34
+version: 1
+---
+
+## Trino
+
+Trino version 352 and higher is supported.
+
+The [sqlalchemy-trino](https://pypi.org/project/sqlalchemy-trino/) library is the recommended way to connect to Trino through SQLAlchemy.
+
+The expected connection string is formatted as follows:
+
+```
+trino://{username}:{password}@{hostname}:{port}/{catalog}
+```
+
+If you are running Trino with Docker on your local machine, please use the following connection URL:
+
+```
+trino://trino@host.docker.internal:8080
+```
+
+Reference:
+[Trino-Feris-Podcast](https://trino.io/episodes/12.html)
diff --git a/content/en/docs/Integrations/Database_Guide/Databases/vertica.md b/content/en/docs/Integrations/Database_Guide/Databases/vertica.md
new file mode 100644
index 00000000..e096c3bc
--- /dev/null
+++ b/content/en/docs/Integrations/Database_Guide/Databases/vertica.md
@@ -0,0 +1,31 @@
+---
+title: Vertica
+hide_title: true
+sidebar_position: 32
+version: 1
+---
+
+## Vertica
+
+The recommended connector library is
+[sqlalchemy-vertica-python](https://pypi.org/project/sqlalchemy-vertica-python/).
The +[Vertica](http://www.vertica.com/) connection parameters are: + +- **User Name:** UserName +- **Password:** DBPassword +- **Database Host:** + - For Localhost : localhost or 127.0.0.1 + - For On Prem : IP address or Host name + - For Cloud: IP Address or Host Name +- **Database Name:** Database Name +- **Port:** default 5433 + +The connection string is formatted as follows: + +``` +vertica+vertica_python://{username}:{password}@{host}/{database} +``` + +Other parameters: + +- Load Balancer - Backup Host diff --git a/content/en/docs/Integrations/Database_Guide/Databases/yugabytedb.md b/content/en/docs/Integrations/Database_Guide/Databases/yugabytedb.md new file mode 100644 index 00000000..f6b6443a --- /dev/null +++ b/content/en/docs/Integrations/Database_Guide/Databases/yugabytedb.md @@ -0,0 +1,20 @@ +--- +title: YugabyteDB +hide_title: true +sidebar_position: 38 +version: 1 +--- + +## YugabyteDB + +[YugabyteDB](https://www.yugabyte.com/) is a distributed SQL database built on top of PostgreSQL. + +Note that, if you're using docker-compose, the +Postgres connector library [psycopg2](https://www.psycopg.org/docs/) +comes out of the box with {{< param replacables.brand_name >}}. 
+ +The connection string looks like: + +``` +postgresql://{username}:{password}@{host}:{port}/{database} +``` diff --git a/content/en/docs/Integrations/Database_Guide/_category_.json b/content/en/docs/Integrations/Database_Guide/_category_.json new file mode 100644 index 00000000..de1c6401 --- /dev/null +++ b/content/en/docs/Integrations/Database_Guide/_category_.json @@ -0,0 +1,4 @@ +{ + "label": "Connecting to Databases", + "position": 3 +} diff --git a/content/en/docs/Integrations/Database_Guide/_index.md b/content/en/docs/Integrations/Database_Guide/_index.md new file mode 100644 index 00000000..e1b03a0c --- /dev/null +++ b/content/en/docs/Integrations/Database_Guide/_index.md @@ -0,0 +1,100 @@ +--- +title: "Database Guide" +linkTitle: "Database Guide" +description: >- + The Database Guide provides an overview of setting up databases and the specifics of each DB type. +--- + + +## Install Database Drivers + +{{< param replacables.brand_name >}} FX requires a Python DB-API database driver and a SQLAlchemy dialect to be installed for each datastore you want to connect to within the executor image. + +## Configuring Database Connections + +{{< param replacables.brand_name >}} can manage preset connection configurations. This enables a platform wide set up for both confidential as well as general access databases. + +{{< param replacables.brand_name >}} uses the SQL Alchemy Engine along with the URL template based approach to connection management. The connection configurations are maintained as secrets within the platform and are therefore not publicly accessible i.e. access is provided for administrators only. + + +## Retrieving DB Connections + +The following is how to retrieve a named connection. The following sample assumes that the connection identifier key is uploaded to the package as a secrets.json. 
+
+```python
+from fx_ef import context
+import sqlalchemy as db
+
+# The connection URL is resolved from the platform's secret store.
+db_url = context.secrets.get('my_connection')
+engine = db.create_engine(db_url)
+
+connection = engine.connect()
+metadata = db.MetaData()
+```
+
+In the above example, `db_url` is set up as a secret named `my_connection`.
+
+Depending on whether this is a service-, project-, or platform-level secret, there are different ways to set it up. For a service-level secret, the following is a sample `secrets.json` file for the package:
+
+```json
+{
+  "my_connection": "mysql://scott:tiger@localhost/test"
+}
+```
+
+* For Project scope, use the `secrets` tab of the Project Management UI.
+* For Platform scope secrets, use the `Vault UI` in the FX Manager Application.
+
+## Database Drivers
+
+The following table provides a guide on the Python libraries to be installed within the Executor docker image. For instructions on how to extend the Executor docker image, please check this page: /docs/extending_executor_image
+
+You can read more here about how to install new database drivers and libraries into your {{< param replacables.brand_name >}} FX executor image.
+
+Note that many other databases are supported, the main criteria being the existence of a functional SQLAlchemy dialect and Python driver. Searching for the keyword "sqlalchemy + (database name)" should help get you to the right place.
+
+If your database or data engine isn't on the list but a SQL interface exists, please file an issue so we can work on documenting and supporting it.
+
+A list of some of the recommended packages:
+
+| Database | PyPI package |
+| ------------------------------------------------------------ | ------------------------------------------------------------ |
+| [Amazon Athena](/docs/integrations/database_guide/databases/athena) | `pip install "PyAthenaJDBC>1.0.9"` , `pip install "PyAthena>1.2.0"` |
+| [Amazon Redshift](/docs/integrations/database_guide/databases/redshift) | `pip install sqlalchemy-redshift` |
+| [Apache Drill](/docs/integrations/database_guide/databases/drill) | `pip install sqlalchemy-drill` |
+| [Apache Druid](/docs/integrations/database_guide/databases/druid) | `pip install pydruid` |
+| [Apache Hive](/docs/integrations/database_guide/databases/hive) | `pip install pyhive` |
+| [Apache Impala](/docs/integrations/database_guide/databases/impala) | `pip install impyla` |
+| [Apache Kylin](/docs/integrations/database_guide/databases/kylin) | `pip install kylinpy` |
+| [Apache Pinot](/docs/integrations/database_guide/databases/pinot) | `pip install pinotdb` |
+| [Apache Solr](/docs/integrations/database_guide/databases/solr) | `pip install sqlalchemy-solr` |
+| [Apache Spark SQL](/docs/integrations/database_guide/databases/spark-sql) | `pip install pyhive` |
+| [Ascend.io](/docs/integrations/database_guide/databases/ascend) | `pip install impyla` |
+| [Azure MS SQL](/docs/integrations/database_guide/databases/sql-server) | `pip install pymssql` |
+| [Big Query](/docs/integrations/database_guide/databases/bigquery) | `pip install pybigquery` |
+| [ClickHouse](/docs/integrations/database_guide/databases/clickhouse) | `pip install clickhouse-driver==0.2.0 && pip install clickhouse-sqlalchemy==0.1.6` |
+| [CockroachDB](/docs/integrations/database_guide/databases/cockroachdb) | `pip install cockroachdb` |
+| [Dremio](/docs/integrations/database_guide/databases/dremio) | `pip install sqlalchemy_dremio` |
+| [Elasticsearch](/docs/integrations/database_guide/databases/elasticsearch) | `pip install elasticsearch-dbapi` |
+| 
[Exasol](/docs/integrations/database_guide/databases/exasol) | `pip install sqlalchemy-exasol` |
+| [Google Sheets](/docs/integrations/database_guide/databases/google-sheets) | `pip install shillelagh[gsheetsapi]` |
+| [Firebolt](/docs/integrations/database_guide/databases/firebolt) | `pip install firebolt-sqlalchemy` |
+| [Hologres](/docs/integrations/database_guide/databases/hologres) | `pip install psycopg2` |
+| [IBM Db2](/docs/integrations/database_guide/databases/ibm-db2) | `pip install ibm_db_sa` |
+| [IBM Netezza Performance Server](/docs/integrations/database_guide/databases/netezza) | `pip install nzalchemy` |
+| [MySQL](/docs/integrations/database_guide/databases/mysql) | `pip install mysqlclient` |
+| [Oracle](/docs/integrations/database_guide/databases/oracle) | `pip install cx_Oracle` |
+| [PostgreSQL](/docs/integrations/database_guide/databases/postgres) | `pip install psycopg2` |
+| [Trino](/docs/integrations/database_guide/databases/trino) | `pip install sqlalchemy-trino` |
+| [Presto](/docs/integrations/database_guide/databases/presto) | `pip install pyhive` |
+| [SAP Hana](/docs/integrations/database_guide/databases/hana) | `pip install hdbcli sqlalchemy-hana` or `pip install apache-Feris[hana]` |
+| [Snowflake](/docs/integrations/database_guide/databases/snowflake) | `pip install snowflake-sqlalchemy` |
+| SQLite | No additional library needed |
+| [SQL Server](/docs/integrations/database_guide/databases/sql-server) | `pip install pymssql` |
+| [Teradata](/docs/integrations/database_guide/databases/teradata) | `pip install teradatasqlalchemy` |
+| [Vertica](/docs/integrations/database_guide/databases/vertica) | `pip install sqlalchemy-vertica-python` |
+| [Yugabyte](/docs/integrations/database_guide/databases/yugabytedb) | `pip install psycopg2` |
+
+------
\ No newline at end of file
diff --git a/content/en/docs/Integrations/Database_Guide/mv.sh b/content/en/docs/Integrations/Database_Guide/mv.sh
new file mode 100755
index 00000000..242e8399
--- 
/dev/null
+++ b/content/en/docs/Integrations/Database_Guide/mv.sh
@@ -0,0 +1,4 @@
+for f in *.mdx; do
+    mv -- "$f" "${f%.mdx}.md"
+done
+
diff --git a/content/en/docs/Integrations/Edge_Adapters.md b/content/en/docs/Integrations/Edge_Adapters.md
new file mode 100644
index 00000000..ba259b3f
--- /dev/null
+++ b/content/en/docs/Integrations/Edge_Adapters.md
@@ -0,0 +1,226 @@
+---
+title: "Edge Adapters"
+linkTitle: "Edge Adapters"
+tags: [quickstart, events, adaptors]
+categories: ["Knowledge Base"]
+weight: 210
+description: >-
+  A Guide to Integrations Using the {{< param replacables.brand_name >}} Edge Adapter.
+---
+
+The Event Source Adapter enables easy integration of external event streams with Ferris.
+
+The role of an Event Source Adapter is to receive events from external streams, convert them into Cloud Events and push them to the ferris.events Kafka Topic. The Cloud Events that are generated contain an indicator of the source, one or more specific event types (depending on the type of source and the use case) and the content of the source event in the payload of the output Cloud Event.
+
+
+## Example Event Source Adapters
+
+The following are a few examples of source adapters:
+
+Generic Webhook Adapter: Exposes a webhook endpoint outside the cluster which may be used to submit events as webhook requests. The generic adapter may source multiple event types and does not filter the content. It may be used, for example, to simultaneously accept AWS EventBridge CloudEvents and GitHub Webhooks. It is the role of a package to filter or split events as suits the use case.
+
+Twitter Adapter: Streams tweets matching configured hashtags and converts them to Cloud Events.
+
+IBM MQ Adapter
+
+Kafka Adapter: Sources data from JSON streams within Kafka and converts them to Cloud Events.
+
+Azure MessageBus Adapter
+
+Amazon SQS Adapter
+
+MQTT Adapter
+
+Redis Queue Adapter
+
+Further source and sink connectors include:
+
+* ActiveMQ Source
+* Amazon CloudWatch Logs Source
+* Amazon CloudWatch Metrics Sink
+* Amazon DynamoDB Sink
+* Amazon Kinesis Source
+* Amazon Redshift Sink
+* Amazon SQS Source
+* Amazon S3 Sink
+* AWS Lambda Sink
+* Azure Blob Storage Sink
+* Azure Cognitive Search Sink
+* Azure Cosmos DB Sink
+* Azure Data Lake Storage Gen2 Sink
+* Azure Event Hubs Source
+* Azure Functions Sink
+* Azure Service Bus Source
+* Azure Synapse Analytics Sink
+* Databricks Delta Lake Sink
+* Datadog Metrics Sink
+* Datagen Source (development and testing)
+* Elasticsearch Service Sink
+* GitHub Source
+* Google BigQuery Sink
+* Google Cloud BigTable Sink
+* Google Cloud Functions Sink
+* Google Cloud Spanner Sink
+* Google Cloud Storage Sink
+* Google Pub/Sub Source
+* HTTP Sink
+* IBM MQ Source
+* Microsoft SQL Server CDC Source (Debezium)
+* Microsoft SQL Server Sink (JDBC)
+* Microsoft SQL Server Source (JDBC)
+* MongoDB Atlas Sink
+* MongoDB Atlas Source
+* MQTT Sink
+* MQTT Source
+* MySQL CDC Source (Debezium)
+* MySQL Sink (JDBC)
+* MySQL Source (JDBC)
+* Oracle Database Sink
+* Oracle Database Source
+* PagerDuty Sink
+* PostgreSQL CDC Source (Debezium)
+* PostgreSQL Sink (JDBC)
+* PostgreSQL Source (JDBC)
+* RabbitMQ Sink
+* RabbitMQ Source Connector
+* Redis Sink
+* Salesforce Bulk API Source
+* Salesforce CDC Source
+* Salesforce Platform Event Sink
+* Salesforce Platform Event Source
+* Salesforce PushTopic Source
+* Salesforce SObject Sink
+* ServiceNow Sink
+* ServiceNow Source
+* SFTP Sink
+* SFTP Source
+* Snowflake Sink
+* Solace Sink
+* Splunk Sink
+* Zendesk Source
+
+
+## Generic Webhook Adapter
+
+The Edge Adapter exposes a single endpoint for Webhooks. The webhook may be used for a large number of incoming integrations. Some examples are provided below.
+
+> To see the API please visit webhook.edge.YOURDOMAIN.COM/ui
+> _Example webhook.edge.ferris.ai/ui_
+
+In order to use the endpoint you must first generate a token to be used when submitting to it. To generate a token please follow the instructions in the Secrets section.
+
+{{< blocks/youtube color="white" video="https://www.youtube.com/embed/vJDATHEaeK8">}}
+
+
+## How it Works
+
+The {{< param replacables.brand_name >}} Edge Adapter is an edge service which is exposed to services outside the network for incoming integrations with external services. It exposes a single token-protected endpoint which accepts a JSON payload within a POST request.
+
+The payload encapsulated within the POST is forwarded to the ferris.events topic with the data encapsulated in the Cloud Events 'data' section. The event type is 'ferris.events.webhook.incoming'.
+
+The platform may host any number of packages which then process the webhooks by parsing the data section.
+
+The {{< param replacables.brand_name >}} Edge Adapter is one of the few services exposed to the Internet.
+
+**Ferris Edge Adapter**
+
_Diagram: external event sources (AWS EventBridge, AWS S3, GitHub webhooks, Twilio event streams, 80+ AWS services) feed the Ferris Edge Adapter's JSON-to-CloudEvent converter, which publishes to the Kafka events topic for consumption by packages (Package A, Package B) and any webhook service (Slack, etc.)._
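The wrapping step performed by the adapter can be sketched in a few lines. This is a hypothetical, simplified stand-in for the converter: the `source` value and helper name are illustrative, only the event type `ferris.events.webhook.incoming` comes from this document, and attribute names follow the CloudEvents 1.0 convention.

```python
import json
import uuid
from datetime import datetime, timezone

def wrap_webhook_payload(payload: dict) -> dict:
    """Wrap an incoming webhook JSON payload in a CloudEvents-style envelope.

    Sketch only: the real adapter also handles token validation and
    Kafka delivery; field values other than 'type' are illustrative.
    """
    return {
        "specversion": "1.0",
        "id": str(uuid.uuid4()),
        "source": "/edge/webhook",                    # indicator of the source
        "type": "ferris.events.webhook.incoming",     # event type used by the platform
        "time": datetime.now(timezone.utc).isoformat(),
        "datacontenttype": "application/json",
        "data": payload,                              # original payload, unfiltered
    }

event = wrap_webhook_payload({"action": "push", "repo": "example/repo"})
print(json.dumps(event, indent=2))
```

Downstream packages then route on `type` and parse the `data` section.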
+
+
+## Integrations
+
+The following sections document details on some of the possible integrations.
+
+
+### AWS EventBridge
+
+{{< blocks/youtube color="white" video="https://www.loom.com/embed/7ec1542c166f4dbb941510f4dbf5c2f0">}}
+
+### AWS S3
+
+A pre-requisite is to ensure that EventBridge is sending events to Ferris. Please see the AWS EventBridge section on how to set it up.
+
+Create a bucket and switch to the Properties tab of the UI.
+
+{{< blocks/screenshot color="white" image="/streamzero/images/developer_guide/image-20220305125600033.png">}}
+
+Scroll to the bottom and turn on EventBridge notifications by clicking on the Edit button below the section Amazon EventBridge.
+
+{{< blocks/screenshot color="white" image="/streamzero/images/developer_guide/image-20220305125800254.png">}}
+
+### GitHub Integration
+
+To be notified on changes to a GitHub repo please follow the steps below.
+
+Click on the 'Settings' icon for the repo.
+
+{{< blocks/screenshot color="white" image="/streamzero/images/developer_guide/image-20220305123206740.png">}}
+
+Select the Webhooks menu on the left of the 'Settings' page. Then click on the 'Add webhook' button.
+
+{{< blocks/screenshot color="white" image="/streamzero/images/developer_guide/image-20220305123425675.png">}}
+
+Add the URL of your edge adapter endpoint and ensure the content type is application/json. Finally, add the API token generated in the {{< param replacables.brand_name >}} Management UI. Further down the page you may select which event types should be sent. If unsure, keep the default settings.
+
+{{< blocks/screenshot color="white" image="/streamzero/images/developer_guide/image-20220305123919206.png">}}
+
+Test your integration by pushing an update to the repository.
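For a quick smoke test without GitHub, you can POST a JSON payload to the edge endpoint yourself. The sketch below only assembles the request pieces; the URL, token header name and token value are illustrative assumptions, not the documented API.

```python
import json

def build_webhook_request(payload: dict, api_token: str) -> dict:
    """Assemble the pieces of a webhook POST to the edge adapter.

    Hypothetical helper: endpoint URL and token header name are
    illustrative; substitute your own deployment's values.
    """
    return {
        "url": "https://webhook.edge.example.com/",   # your edge adapter endpoint
        "headers": {
            "Content-Type": "application/json",       # must be application/json
            "X-Api-Token": api_token,                  # token from the Management UI
        },
        "body": json.dumps(payload),
    }

req = build_webhook_request({"ref": "refs/heads/main"}, api_token="YOUR_TOKEN")
print(req["headers"]["Content-Type"])
```

Sending the assembled request with any HTTP client should produce a 'ferris.events.webhook.incoming' event on the platform.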
\ No newline at end of file
diff --git a/content/en/docs/Integrations/Notifications_And_Messaging/_index.md b/content/en/docs/Integrations/Notifications_And_Messaging/_index.md
new file mode 100644
index 00000000..705a55f3
--- /dev/null
+++ b/content/en/docs/Integrations/Notifications_And_Messaging/_index.md
@@ -0,0 +1,230 @@
+---
+title: "Notifications and Messaging"
+linkTitle: "Notifications and Messaging"
+description: >-
+  How to integrate notifications with the {{< param replacables.brand_name >}} Platform.
+---
+
+{{< param replacables.brand_name >}} provides you access to over 40 notification services such as Slack, Email and Telegram.
+
+{{< param replacables.brand_name >}} FX uses the Apprise Python libraries as an engine for notification dispatch. The power of Apprise gives you access to over 40 notification services. A complete list is provided in a table at the end of the document.
+
+In order to send notifications from your package you need to create and emit a pre-defined event type.
+
+## How to send notifications from your package
+
+To send notifications, emit a 'ferris.notifications.apprise.notification' event from your package.
+
+You can do it like so:
+
+```python
+from fx_ef import context
+
+# Please note that the value for url_template is the name used within the config
+# for a specific URL template.
+# Please see the configuration sample below on how to configure templates.
+
+data = {
+    "url_template": "slack_1",
+    "body": "This is the content",
+    "title": "This is the subject"
+}
+
+event_type = "ferris.notifications.apprise.notification"
+context.events.send(event_type, data)
+```
+
+
+
+## How does it Work?
+
+There are 2 approaches to implementing notifications support.
+
+* Implementation within a {{< param replacables.brand_name >}} Service
+* Implementation in an Exit Gateway
+
+The 2nd option is used in platforms which are behind a firewall and therefore require the gateway to be outside the firewall for accessing external services. In these cases the adapter runs as a separate container.
+
+Irrespective of the infrastructure implementation, the service internal API (as illustrated above) does not change.
+
+The following documentation refers to Option 1.
+
+
+
+## Pre-Requisites
+
+In order to send notifications:
+
+* The Apprise libraries must be present in the Executor image.
+
+* You must have the Apprise Notifications Package installed and running. You can find the code base further below in this document.
+
+* You must upload a secrets.json file for the Apprise Notifications Package. Please note that you should maintain a separate copy of the configs, since this file contains credentials and will not be displayed in your configuration manager.
+
+* A sample configuration file is provided below. Please use the table based on the Apprise documentation to understand the URL template structure.
+
+* Once the Apprise Notifications Package is installed along with the configurations, you must link the package to be triggered by the 'ferris.notifications.apprise.notification' event.
+
+
+
+
+
+## The {{< param replacables.brand_name >}} Apprise Package
+
+The following is code for an {{< param replacables.brand_name >}} executor package that sends Apprise-based notifications.
+
+To send a notification from within your python application, just do the following:
+
+```python
+import apprise
+from fx_ef import context
+
+# Get the incoming parameters
+url_template_name = context.params.get('url_template')
+
+# Create an Apprise instance
+apobj = apprise.Apprise()
+
+# Set up the Apprise object with URLs
+# The URL is retrieved from the uploaded secrets.json
+apobj.add(context.secrets.get(url_template_name))
+
+try:
+    apobj.notify(
+        body=context.params.get('body'),
+        title=context.params.get('title'),
+    )
+except Exception as ex:
+    print(ex)
+```
+
+
+
+## Configuration
+
+The following is a sample configuration which is uploaded as a secrets.json file for the {{< param replacables.brand_name >}} Apprise Package.
+
+The configuration consists of a set of named URL templates, each based on the Apprise URL schema as shown in the sections further in this document.
+
+While you are free to name URL templates as you wish, it is preferred to prefix them with an indication of the underlying service being used to send notifications.
+
+```json
+{
+    "slack_1": "slack://TokenA/TokenB/TokenC/",
+    "slack_2": "slack://TokenA/TokenB/TokenC/Channel",
+    "slack_3": "slack://botname@TokenA/TokenB/TokenC/Channel",
+    "slack_4": "slack://user@TokenA/TokenB/TokenC/Channel1/Channel2/ChannelN",
+    "telegram_1": "tgram://bottoken/ChatID",
+    "telegram_2": "tgram://bottoken/ChatID1/ChatID2/ChatIDN"
+}
+```
+
+The configurations must be added to a secrets.json file and uploaded as part of the apprise_package.
+
+The apprise package must be configured to be triggered by the 'ferris.notifications.apprise.notification' event.
+
+### Popular Notification Services
+
+The table below identifies the services this tool supports and some example service URLs you need to use in order to take advantage of them.
+
+Click on any of the services listed below to get more details on how you can configure Apprise to access them.
+ +| Notification Service | Service ID | Default Port | Example Syntax | +| -------------------- | ---------- | ------------ | -------------- | +| [Apprise API](https://github.com/caronc/apprise/wiki/Notify_apprise_api) | apprise:// or apprises:// | (TCP) 80 or 443 | apprise://hostname/Token| +| [AWS SES](https://github.com/caronc/apprise/wiki/Notify_ses) | ses:// | (TCP) 443 | ses://user@domain/AccessKeyID/AccessSecretKey/RegionName
      ses://user@domain/AccessKeyID/AccessSecretKey/RegionName/email1/email2/emailN| +| [Discord](https://github.com/caronc/apprise/wiki/Notify_discord) | discord:// | (TCP) 443 | discord://webhook_id/webhook_token
      discord://avatar@webhook_id/webhook_token| +| [Emby](https://github.com/caronc/apprise/wiki/Notify_emby) | emby:// or embys:// | (TCP) 8096 | emby://user@hostname/
      emby://user:password@hostname| +| [Enigma2](https://github.com/caronc/apprise/wiki/Notify_enigma2) | enigma2:// or enigma2s:// | (TCP) 80 or 443 | enigma2://hostname| +| [Faast](https://github.com/caronc/apprise/wiki/Notify_faast) | faast:// | (TCP) 443 | faast://authorizationtoken| +| [FCM](https://github.com/caronc/apprise/wiki/Notify_fcm) | fcm:// | (TCP) 443 | fcm://project@apikey/DEVICE_ID
      fcm://project@apikey/#TOPIC
      fcm://project@apikey/DEVICE_ID1/#topic1/#topic2/DEVICE_ID2/| +| [Flock](https://github.com/caronc/apprise/wiki/Notify_flock) | flock:// | (TCP) 443 | flock://token
      flock://botname@token
      flock://app_token/u:userid
      flock://app_token/g:channel_id
      flock://app_token/u:userid/g:channel_id| +| [Gitter](https://github.com/caronc/apprise/wiki/Notify_gitter) | gitter:// | (TCP) 443 | gitter://token/room
      gitter://token/room1/room2/roomN| +| [Google Chat](https://github.com/caronc/apprise/wiki/Notify_googlechat) | gchat:// | (TCP) 443 | gchat://workspace/key/token| +| [Gotify](https://github.com/caronc/apprise/wiki/Notify_gotify) | gotify:// or gotifys:// | (TCP) 80 or 443 | gotify://hostname/token
      gotifys://hostname/token?priority=high| +| [Growl](https://github.com/caronc/apprise/wiki/Notify_growl) | growl:// | (UDP) 23053 | growl://hostname
      growl://hostname:portno
      growl://password@hostname
      growl://password@hostname:port
      **Note**: you can also use the get parameter _version_ which can allow the growl request to behave using the older v1.x protocol. An example would look like: growl://hostname?version=1| +| [Home Assistant](https://github.com/caronc/apprise/wiki/Notify_homeassistant) | hassio:// or hassios:// | (TCP) 8123 or 443 | hassio://hostname/accesstoken
      hassio://user@hostname/accesstoken
      hassio://user:password@hostname:port/accesstoken
      hassio://hostname/optional/path/accesstoken| +| [IFTTT](https://github.com/caronc/apprise/wiki/Notify_ifttt) | ifttt:// | (TCP) 443 | ifttt://webhooksID/Event
      ifttt://webhooksID/Event1/Event2/EventN
      ifttt://webhooksID/Event1/?+Key=Value
      ifttt://webhooksID/Event1/?-Key=value1| +| [Join](https://github.com/caronc/apprise/wiki/Notify_join) | join:// | (TCP) 443 | join://apikey/device
      join://apikey/device1/device2/deviceN/
      join://apikey/group
      join://apikey/groupA/groupB/groupN
      join://apikey/DeviceA/groupA/groupN/DeviceN/| +| [KODI](https://github.com/caronc/apprise/wiki/Notify_kodi) | kodi:// or kodis:// | (TCP) 8080 or 443 | kodi://hostname
      kodi://user@hostname
      kodi://user:password@hostname:port| +| [Kumulos](https://github.com/caronc/apprise/wiki/Notify_kumulos) | kumulos:// | (TCP) 443 | kumulos://apikey/serverkey| +| [LaMetric Time](https://github.com/caronc/apprise/wiki/Notify_lametric) | lametric:// | (TCP) 443 | lametric://apikey@device_ipaddr
      lametric://apikey@hostname:port
      lametric://client_id@client_secret| +| [Mailgun](https://github.com/caronc/apprise/wiki/Notify_mailgun) | mailgun:// | (TCP) 443 | mailgun://user@hostname/apikey
      mailgun://user@hostname/apikey/email
      mailgun://user@hostname/apikey/email1/email2/emailN
      mailgun://user@hostname/apikey/?name="From%20User"| +| [Matrix](https://github.com/caronc/apprise/wiki/Notify_matrix) | matrix:// or matrixs:// | (TCP) 80 or 443 | matrix://hostname
      matrix://user@hostname
      matrixs://user:pass@hostname:port/#room_alias
      matrixs://user:pass@hostname:port/!room_id
      matrixs://user:pass@hostname:port/#room_alias/!room_id/#room2
      matrixs://token@hostname:port/?webhook=matrix
      matrix://user:token@hostname/?webhook=slack&format=markdown| +| [Mattermost](https://github.com/caronc/apprise/wiki/Notify_mattermost) | mmost:// or mmosts:// | (TCP) 8065 | mmost://hostname/authkey
      mmost://hostname:80/authkey
      mmost://user@hostname:80/authkey
      mmost://hostname/authkey?channel=channel
      mmosts://hostname/authkey
      mmosts://user@hostname/authkey
      | +| [Microsoft Teams](https://github.com/caronc/apprise/wiki/Notify_msteams) | msteams:// | (TCP) 443 | msteams://TokenA/TokenB/TokenC/| +| [MQTT](https://github.com/caronc/apprise/wiki/Notify_mqtt) | mqtt:// or mqtts:// | (TCP) 1883 or 8883 | mqtt://hostname/topic
      mqtt://user@hostname/topic
      mqtts://user:pass@hostname:9883/topic| +| [Nextcloud](https://github.com/caronc/apprise/wiki/Notify_nextcloud) | ncloud:// or nclouds:// | (TCP) 80 or 443 | ncloud://adminuser:pass@host/User
      nclouds://adminuser:pass@host/User1/User2/UserN| +| [NextcloudTalk](https://github.com/caronc/apprise/wiki/Notify_nextcloudtalk) | nctalk:// or nctalks:// | (TCP) 80 or 443 | nctalk://user:pass@host/RoomId
      nctalks://user:pass@host/RoomId1/RoomId2/RoomIdN| +| [Notica](https://github.com/caronc/apprise/wiki/Notify_notica) | notica:// | (TCP) 443 | notica://Token/| +| [Notifico](https://github.com/caronc/apprise/wiki/Notify_notifico) | notifico:// | (TCP) 443 | notifico://ProjectID/MessageHook/| +| [Office 365](https://github.com/caronc/apprise/wiki/Notify_office365) | o365:// | (TCP) 443 | o365://TenantID:AccountEmail/ClientID/ClientSecret
      o365://TenantID:AccountEmail/ClientID/ClientSecret/TargetEmail
      o365://TenantID:AccountEmail/ClientID/ClientSecret/TargetEmail1/TargetEmail2/TargetEmailN| +| [OneSignal](https://github.com/caronc/apprise/wiki/Notify_onesignal) | onesignal:// | (TCP) 443 | onesignal://AppID@APIKey/PlayerID
      onesignal://TemplateID:AppID@APIKey/UserID
      onesignal://AppID@APIKey/#IncludeSegment
      onesignal://AppID@APIKey/Email| +| [Opsgenie](https://github.com/caronc/apprise/wiki/Notify_opsgenie) | opsgenie:// | (TCP) 443 | opsgenie://APIKey
      opsgenie://APIKey/UserID
      opsgenie://APIKey/#Team
      opsgenie://APIKey/\*Schedule
      opsgenie://APIKey/^Escalation| +| [ParsePlatform](https://github.com/caronc/apprise/wiki/Notify_parseplatform) | parsep:// or parseps:// | (TCP) 80 or 443 | parsep://AppID:MasterKey@Hostname
      parseps://AppID:MasterKey@Hostname| +| [PopcornNotify](https://github.com/caronc/apprise/wiki/Notify_popcornnotify) | popcorn:// | (TCP) 443 | popcorn://ApiKey/ToPhoneNo
      popcorn://ApiKey/ToPhoneNo1/ToPhoneNo2/ToPhoneNoN/
      popcorn://ApiKey/ToEmail
      popcorn://ApiKey/ToEmail1/ToEmail2/ToEmailN/
      popcorn://ApiKey/ToPhoneNo1/ToEmail1/ToPhoneNoN/ToEmailN| +| [Prowl](https://github.com/caronc/apprise/wiki/Notify_prowl) | prowl:// | (TCP) 443 | prowl://apikey
      prowl://apikey/providerkey| +| [PushBullet](https://github.com/caronc/apprise/wiki/Notify_pushbullet) | pbul:// | (TCP) 443 | pbul://accesstoken
      pbul://accesstoken/#channel
      pbul://accesstoken/A_DEVICE_ID
      pbul://accesstoken/email@address.com
      pbul://accesstoken/#channel/#channel2/email@address.net/DEVICE| +| [Push (Techulus)](https://github.com/caronc/apprise/wiki/Notify_techulus) | push:// | (TCP) 443 | push://apikey/| +| [Pushed](https://github.com/caronc/apprise/wiki/Notify_pushed) | pushed:// | (TCP) 443 | pushed://appkey/appsecret/
      pushed://appkey/appsecret/#ChannelAlias
      pushed://appkey/appsecret/#ChannelAlias1/#ChannelAlias2/#ChannelAliasN
      pushed://appkey/appsecret/@UserPushedID
      pushed://appkey/appsecret/@UserPushedID1/@UserPushedID2/@UserPushedIDN| +| [Pushover](https://github.com/caronc/apprise/wiki/Notify_pushover) | pover:// | (TCP) 443 | pover://user@token
      pover://user@token/DEVICE
      pover://user@token/DEVICE1/DEVICE2/DEVICEN
      **Note**: you must specify both your user_id and token| +| [PushSafer](https://github.com/caronc/apprise/wiki/Notify_pushsafer) | psafer:// or psafers:// | (TCP) 80 or 443 | psafer://privatekey
      psafers://privatekey/DEVICE
      psafer://privatekey/DEVICE1/DEVICE2/DEVICEN| +| [Reddit](https://github.com/caronc/apprise/wiki/Notify_reddit) | reddit:// | (TCP) 443 | reddit://user:password@app_id/app_secret/subreddit
      reddit://user:password@app_id/app_secret/sub1/sub2/subN| +| [Rocket.Chat](https://github.com/caronc/apprise/wiki/Notify_rocketchat) | rocket:// or rockets:// | (TCP) 80 or 443 | rocket://user:password@hostname/RoomID/Channel
      rockets://user:password@hostname:443/#Channel1/#Channel1/RoomID
      rocket://user:password@hostname/#Channel
      rocket://webhook@hostname
      rockets://webhook@hostname/@User/#Channel| +| [Ryver](https://github.com/caronc/apprise/wiki/Notify_ryver) | ryver:// | (TCP) 443 | ryver://Organization/Token
      ryver://botname@Organization/Token| +| [SendGrid](https://github.com/caronc/apprise/wiki/Notify_sendgrid) | sendgrid:// | (TCP) 443 | sendgrid://APIToken:FromEmail/
      sendgrid://APIToken:FromEmail/ToEmail
      sendgrid://APIToken:FromEmail/ToEmail1/ToEmail2/ToEmailN/| +| [ServerChan](https://github.com/caronc/apprise/wiki/Notify_serverchan) | serverchan:// | (TCP) 443 | serverchan://token/| +| [SimplePush](https://github.com/caronc/apprise/wiki/Notify_simplepush) | spush:// | (TCP) 443 | spush://apikey
      spush://salt:password@apikey
      spush://apikey?event=Apprise| +| [Slack](https://github.com/caronc/apprise/wiki/Notify_slack) | slack:// | (TCP) 443 | slack://TokenA/TokenB/TokenC/
      slack://TokenA/TokenB/TokenC/Channel
      slack://botname@TokenA/TokenB/TokenC/Channel
      slack://user@TokenA/TokenB/TokenC/Channel1/Channel2/ChannelN| +| [SMTP2Go](https://github.com/caronc/apprise/wiki/Notify_smtp2go) | smtp2go:// | (TCP) 443 | smtp2go://user@hostname/apikey
      smtp2go://user@hostname/apikey/email
      smtp2go://user@hostname/apikey/email1/email2/emailN
      smtp2go://user@hostname/apikey/?name="From%20User"| +| [Streamlabs](https://github.com/caronc/apprise/wiki/Notify_streamlabs) | strmlabs:// | (TCP) 443 | strmlabs://AccessToken/
      strmlabs://AccessToken/?name=name&identifier=identifier&amount=0&currency=USD| +| [SparkPost](https://github.com/caronc/apprise/wiki/Notify_sparkpost) | sparkpost:// | (TCP) 443 | sparkpost://user@hostname/apikey<br/>
      sparkpost://user@hostname/apikey/email
      sparkpost://user@hostname/apikey/email1/email2/emailN
      sparkpost://user@hostname/apikey/?name="From%20User"| +| [Spontit](https://github.com/caronc/apprise/wiki/Notify_spontit) | spontit:// | (TCP) 443 | spontit://UserID@APIKey/
      spontit://UserID@APIKey/Channel
      spontit://UserID@APIKey/Channel1/Channel2/ChannelN| +| [Syslog](https://github.com/caronc/apprise/wiki/Notify_syslog) | syslog:// | (UDP) 514 (_if hostname specified_) | syslog://
      syslog://Facility
      syslog://hostname
      syslog://hostname/Facility| +| [Telegram](https://github.com/caronc/apprise/wiki/Notify_telegram) | tgram:// | (TCP) 443 | tgram://bottoken/ChatID
      tgram://bottoken/ChatID1/ChatID2/ChatIDN| +| [Twitter](https://github.com/caronc/apprise/wiki/Notify_twitter) | twitter:// | (TCP) 443 | twitter://CKey/CSecret/AKey/ASecret
      twitter://user@CKey/CSecret/AKey/ASecret
      twitter://CKey/CSecret/AKey/ASecret/User1/User2/UserN<br/>
      twitter://CKey/CSecret/AKey/ASecret?mode=tweet| +| [Twist](https://github.com/caronc/apprise/wiki/Notify_twist) | twist:// | (TCP) 443 | twist://password:login<br/>
      twist://password:login/#channel
      twist://password:login/#team:channel
      twist://password:login/#team:channel1/channel2/#team3:channel| +| [XMPP](https://github.com/caronc/apprise/wiki/Notify_xmpp) | xmpp:// or xmpps:// | (TCP) 5222 or 5223 | xmpp://user:password@hostname
      xmpps://user:password@hostname:port?jid=user@hostname/resource
      xmpps://user:password@hostname/target@myhost, target2@myhost/resource| +| [Webex Teams (Cisco)](https://github.com/caronc/apprise/wiki/Notify_wxteams) | wxteams:// | (TCP) 443 | wxteams://Token| +| [Zulip Chat](https://github.com/caronc/apprise/wiki/Notify_zulip) | zulip:// | (TCP) 443 | zulip://botname@Organization/Token
      zulip://botname@Organization/Token/Stream
      zulip://botname@Organization/Token/Email| + + +### SMS Notification Support +| Notification Service | Service ID | Default Port | Example Syntax | +| -------------------- | ---------- | ------------ | -------------- | +| [AWS SNS](https://github.com/caronc/apprise/wiki/Notify_sns) | sns:// | (TCP) 443 | sns://AccessKeyID/AccessSecretKey/RegionName/+PhoneNo
      sns://AccessKeyID/AccessSecretKey/RegionName/+PhoneNo1/+PhoneNo2/+PhoneNoN
      sns://AccessKeyID/AccessSecretKey/RegionName/Topic
      sns://AccessKeyID/AccessSecretKey/RegionName/Topic1/Topic2/TopicN +| [ClickSend](https://github.com/caronc/apprise/wiki/Notify_clicksend) | clicksend:// | (TCP) 443 | clicksend://user:pass@PhoneNo
      clicksend://user:pass@ToPhoneNo1/ToPhoneNo2/ToPhoneNoN +| [DAPNET](https://github.com/caronc/apprise/wiki/Notify_dapnet) | dapnet:// | (TCP) 80 | dapnet://user:pass@callsign
      dapnet://user:pass@callsign1/callsign2/callsignN +| [D7 Networks](https://github.com/caronc/apprise/wiki/Notify_d7networks) | d7sms:// | (TCP) 443 | d7sms://user:pass@PhoneNo
      d7sms://user:pass@ToPhoneNo1/ToPhoneNo2/ToPhoneNoN +| [DingTalk](https://github.com/caronc/apprise/wiki/Notify_dingtalk) | dingtalk:// | (TCP) 443 | dingtalk://token/
      dingtalk://token/ToPhoneNo
      dingtalk://token/ToPhoneNo1/ToPhoneNo2/ToPhoneNo1/ +| [Kavenegar](https://github.com/caronc/apprise/wiki/Notify_kavenegar) | kavenegar:// | (TCP) 443 | kavenegar://ApiKey/ToPhoneNo
      kavenegar://FromPhoneNo@ApiKey/ToPhoneNo
      kavenegar://ApiKey/ToPhoneNo1/ToPhoneNo2/ToPhoneNoN +| [MessageBird](https://github.com/caronc/apprise/wiki/Notify_messagebird) | msgbird:// | (TCP) 443 | msgbird://ApiKey/FromPhoneNo
      msgbird://ApiKey/FromPhoneNo/ToPhoneNo
      msgbird://ApiKey/FromPhoneNo/ToPhoneNo1/ToPhoneNo2/ToPhoneNoN/ +| [MSG91](https://github.com/caronc/apprise/wiki/Notify_msg91) | msg91:// | (TCP) 443 | msg91://AuthKey/ToPhoneNo
      msg91://SenderID@AuthKey/ToPhoneNo
      msg91://AuthKey/ToPhoneNo1/ToPhoneNo2/ToPhoneNoN/ +| [Nexmo](https://github.com/caronc/apprise/wiki/Notify_nexmo) | nexmo:// | (TCP) 443 | nexmo://ApiKey:ApiSecret@FromPhoneNo
      nexmo://ApiKey:ApiSecret@FromPhoneNo/ToPhoneNo
      nexmo://ApiKey:ApiSecret@FromPhoneNo/ToPhoneNo1/ToPhoneNo2/ToPhoneNoN/ +| [Sinch](https://github.com/caronc/apprise/wiki/Notify_sinch) | sinch:// | (TCP) 443 | sinch://ServicePlanId:ApiToken@FromPhoneNo
      sinch://ServicePlanId:ApiToken@FromPhoneNo/ToPhoneNo
      sinch://ServicePlanId:ApiToken@FromPhoneNo/ToPhoneNo1/ToPhoneNo2/ToPhoneNoN/
      sinch://ServicePlanId:ApiToken@ShortCode/ToPhoneNo
      sinch://ServicePlanId:ApiToken@ShortCode/ToPhoneNo1/ToPhoneNo2/ToPhoneNoN/ +| [Twilio](https://github.com/caronc/apprise/wiki/Notify_twilio) | twilio:// | (TCP) 443 | twilio://AccountSid:AuthToken@FromPhoneNo
      twilio://AccountSid:AuthToken@FromPhoneNo/ToPhoneNo
      twilio://AccountSid:AuthToken@FromPhoneNo/ToPhoneNo1/ToPhoneNo2/ToPhoneNoN/
      twilio://AccountSid:AuthToken@FromPhoneNo/ToPhoneNo?apikey=Key
      twilio://AccountSid:AuthToken@ShortCode/ToPhoneNo
      twilio://AccountSid:AuthToken@ShortCode/ToPhoneNo1/ToPhoneNo2/ToPhoneNoN/ + +## Desktop Notification Support +| Notification Service | Service ID | Default Port | Example Syntax | +| -------------------- | ---------- | ------------ | -------------- | +| [Linux DBus Notifications](https://github.com/caronc/apprise/wiki/Notify_dbus) | dbus://
      qt://
      glib://
      kde:// | n/a | dbus://
      qt://
      glib://
      kde:// +| [Linux Gnome Notifications](https://github.com/caronc/apprise/wiki/Notify_gnome) | gnome:// | n/a | gnome:// +| [MacOS X Notifications](https://github.com/caronc/apprise/wiki/Notify_macosx) | macosx:// | n/a | macosx:// +| [Windows Notifications](https://github.com/caronc/apprise/wiki/Notify_windows) | windows:// | n/a | windows:// + +### Email Support +| Service ID | Default Port | Example Syntax | +| ---------- | ------------ | -------------- | +| [mailto://](https://github.com/caronc/apprise/wiki/Notify_email) | (TCP) 25 | mailto://userid:pass@domain.com
      mailto://domain.com?user=userid&pass=password
      mailto://domain.com:2525?user=userid&pass=password
      mailto://user@gmail.com&pass=password
      mailto://mySendingUsername:mySendingPassword@example.com?to=receivingAddress@example.com
      mailto://userid:password@example.com?smtp=mail.example.com&from=noreply@example.com&name=no%20reply +| [mailtos://](https://github.com/caronc/apprise/wiki/Notify_email) | (TCP) 587 | mailtos://userid:pass@domain.com
      mailtos://domain.com?user=userid&pass=password
      mailtos://domain.com:465?user=userid&pass=password
      mailtos://user@hotmail.com&pass=password
      mailtos://mySendingUsername:mySendingPassword@example.com?to=receivingAddress@example.com
      mailtos://userid:password@example.com?smtp=mail.example.com&from=noreply@example.com&name=no%20reply
+
+Apprise has some email services built right into it (such as Yahoo, Fastmail, Hotmail, Gmail, etc.) that greatly simplify the mailto:// service. See more details [here](https://github.com/caronc/apprise/wiki/Notify_email).
+
+### Custom Notifications
+| Post Method | Service ID | Default Port | Example Syntax |
+| -------------------- | ---------- | ------------ | -------------- |
+| [Form](https://github.com/caronc/apprise/wiki/Notify_Custom_Form) | form:// or forms:// | (TCP) 80 or 443 | form://hostname<br/>
      form://user@hostname
      form://user:password@hostname:port
      form://hostname/a/path/to/post/to +| [JSON](https://github.com/caronc/apprise/wiki/Notify_Custom_JSON) | json:// or jsons:// | (TCP) 80 or 443 | json://hostname
      json://user@hostname
      json://user:password@hostname:port
      json://hostname/a/path/to/post/to +| [XML](https://github.com/caronc/apprise/wiki/Notify_Custom_XML) | xml:// or xmls:// | (TCP) 80 or 443 | xml://hostname
      xml://user@hostname
      xml://user:password@hostname:port
      xml://hostname/a/path/to/post/to + diff --git a/content/en/docs/Integrations/Platform_Extensions.md b/content/en/docs/Integrations/Platform_Extensions.md new file mode 100644 index 00000000..86ba4c6b --- /dev/null +++ b/content/en/docs/Integrations/Platform_Extensions.md @@ -0,0 +1,34 @@ +--- +title: "Extending the Platform" +linkTitle: "Extending the Platform" +tags: [quickstart, connect, register] +categories: ["Knowledge Base"] +weight: 211 +description: >- + Extending the Platform. +--- + + +## Extending the Platform + +The platform may be extended at 3 logical points within the event life cycle. + + +1. **At Entry Point:** + * These extensions are responsible for injecting external event streams into the platform. Primarily they mediate between the external event stream and the internal CloudEvents based Kafka Topics. These run on separate containers within the platform. The following are typical examples. + * **Event Gateways**: These are the primary mechanism. To build event gateways we provide templates. Please check this document on extension. + + 2. **At Processing** + * These are extensions that operate on internal event streams or are required by services that are created on the platform. The following are the types thereof. + * **Configuration Adapters and UIs.** These are primarily used for connection setups and configurations which are applicable across the platform. Examples are the variety of connection setup UIs we provide. They are very easy to create. Use the following guide to build your own. + * **Python Libraries and Modules:** These are attached to the Executor. It primarily involves extending the Executor image with the required library. In order to add them to the platform use this guide. + * **Event Processing Packages**: These are services that modify event attributes, normally converting one type of event to another. These can be implemented as services within the platform.
Please see the following guide to see how they are used and some typical scenarios. + * **No Code Generators:** Generators combine UI-based configuration with templated code to allow a no-code approach to creating services. Please check this guide on how that works. + +3. **At Exit Point** + * These are primarily modules that interact with external systems but operate across the platform. They primarily operate on streams that originate from the platform and mediate with the outside. These run on separate containers within the platform. The following are typical implementations. + * **Protocol Adapters:** These connect and integrate between the internal Kafka Event Streams and external protocols, for example a webhook adapter or a Kafka to IBM MQ adapter. Their primary purpose is to offload activity from the platform which may cause bottlenecks or require long-running services. + * **Splitters and Filters:** These may operate on streams to split content or event information into derivative streams, or feed data into supporting infrastructure. The Elasticsearch and Splunk adapters are typical examples. In order to build these use the following guide and related templates. + + + diff --git a/content/en/docs/Integrations/_index.md b/content/en/docs/Integrations/_index.md new file mode 100644 index 00000000..07157d08 --- /dev/null +++ b/content/en/docs/Integrations/_index.md @@ -0,0 +1,7 @@ +--- +title: "Integrations Guide" +linkTitle: "Integrations Guide" +weight: 105 +description: >- + The Integrations Guide provides an overview of integrations with various services and infrastructure.
+--- \ No newline at end of file diff --git a/content/en/docs/Release_Notes/Release-Notes-1.0.2.md b/content/en/docs/Release_Notes/Release-Notes-1.0.2.md new file mode 100644 index 00000000..6591a5de --- /dev/null +++ b/content/en/docs/Release_Notes/Release-Notes-1.0.2.md @@ -0,0 +1,50 @@ +--- +title: "Release 1.0.2" +tags: ["releases"] +categories: ["Release Notes"] +linkTitle: +weight: 100 +description: >- + New features, improvements and fixes provided to you in the 2nd Quarter of the year 2021. + +--- + + + + +## New added_blue + +- [x] The new **Services Overview** is a useful page to monitor and manage all platform services. +- [x] The **Configuration Manager** is a central point to manage configurations for all deployed services. +- [x] The **Executor Framework** is a powerful new utility that enables creating, orchestrating, sequencing and running of pretty much any combination of jobs and services. +- [x] With the **Job Scheduler** we have added a newly embedded console for scheduling, running and monitoring jobs, without the need to embed a third-party job scheduler. +- [x] We have introduced a new **Workflow Manager** to support the building of processes, including four-eye reviews, approval chains and quality gates. +- [x] We have integrated our **Slack Support Channel** into the Control Center menu so that you can contact us directly and easily with your questions. +- [x] To get quicker access to the most frequently used services and components we've built a list of **Platform Services links** and placed them into the Control Center navigation. +- [x] We have added a new **File Storage** capability for creating and uploading files and managing buckets. +- [x] Introduced a new **User Interface** and **Navigation**. +- [x] Made important improvements to **Access Rights Management**.
+ +--- + +## Changed changed_yellow + +- no changes to report + +--- + +## Improved improved_green + +- [x] We have improved the Files Manager and in particular the **Files Onboarding Monitor** to embed user credentials and therefore make file uploading and management more user-specific. +- [x] We have taken further strides to streamline and simplify the **User Management**. Creating users and assigning them their own secured container (working space) is now fully automated - albeit not yet self-service. + +--- + +## Fixed fixed_red + +- no fixes to report + +--- + + + diff --git a/content/en/docs/Release_Notes/Release-Notes-1.0.3.md b/content/en/docs/Release_Notes/Release-Notes-1.0.3.md new file mode 100644 index 00000000..4ff0294f --- /dev/null +++ b/content/en/docs/Release_Notes/Release-Notes-1.0.3.md @@ -0,0 +1,52 @@ +--- +title: "Release 1.0.3" +tags: ["releases"] +categories: ["Release Notes"] +linkTitle: +weight: 100 +description: >- + New features, improvements and fixes provided to you in the 3rd Quarter of the year 2021. + +--- + + + + + + +## New added_blue + +- [x] We have introduced a **Tagging** capability, which enables users to tag almost any data element on the platform, making it easy and user friendly to navigate through the application and find items. +- [x] The **Checks Framework** greatly improves the data quality monitoring around any kind of data sources, data types and event types. +- [x] The **Topics & Consumer Monitor** listens in on Kafka and provides an up-to-the-second view of data sourcing and streaming. +- [x] New **Service Logs** can be used to track all services running within the network as well as their logs and events that occur between them. Each log or event has its own data stored within Elasticsearch and can be filtered by numerous fields. +- [x] **Jupyter Auto Deploy** makes the onboarding of new users even faster and self-serviced. +- [x] **Events** are stored and managed in the new Event Registry.
+ +--- + +## Changed changed_yellow + +- no changes to report + +--- + +## Improved improved_green + +- [x] We have improved elements of the **Security and Access Rights Management** in the areas of User and Access Management (Keycloak), Certificates Management as well as Authentication and Authorization (OpenID). Each role has its own set of permissions, which makes it possible to restrict users' access to specific areas of the platform based on their role. +- [x] Parametrised **Workflows** can be defined by users with simple JSON files by following a specific set of rules. Complex use cases are supported, such as chaining multiple actions one after another and having control over what happens in different scenarios. A built-in approval capability for each workflow action is supported and easy to implement by default. +- [x] The new **{{< param replacables.brand_name >}} Executor** has become even better and more powerful. Add any Python, SQL or JSON to execute any type of job. +- [x] The **Scheduler** is now fully embedded with the Executor. +- [x] **Files Upload** has improved and is integrated with the Workflow and Approvals functionality. Files have type-specific validation and are stored in different buckets. Bucket creation can be done within the same FAB module. +- [x] The **{{< param replacables.brand_name >}} Application Builder (FAB)** is now capable of auto-generating new pages (UI) on the fly. + +--- + +## Fixed fixed_red + +- no fixes to report + +--- + + + diff --git a/content/en/docs/Release_Notes/Release-Notes-1.0.4.md b/content/en/docs/Release_Notes/Release-Notes-1.0.4.md new file mode 100644 index 00000000..d39ab688 --- /dev/null +++ b/content/en/docs/Release_Notes/Release-Notes-1.0.4.md @@ -0,0 +1,46 @@ +--- +title: "Release 1.0.4" +tags: ["releases"] +categories: ["Release Notes"] +linkTitle: +weight: 100 +description: >- + New features, improvements and fixes provided to you in the 4th Quarter of the year 2021.
+ +--- + + + + + +## New added_blue + +- [x] We have collected our growing list of {{< param replacables.brand_name >}} APIs and documented them in a Service Inventory. +- [x] Introduced **Projects** as a new concept, enabling the grouping of like processes as well as their security-based segregation. +- [x] Integrated {{< param replacables.brand_name >}} with Voilà, turning Jupyter notebooks into standalone web applications. +- [x] Live logs are now available, providing real-time log data. + +--- + +## Changed changed_yellow + +- no changes to report + +--- + +## Improved improved_green + +- [x] The **self-service onboarding** has received further improvement. The onboarding flow as well as the corresponding documentation have been made even easier to follow. +- [x] The **FAB** ({{< param replacables.brand_name >}} Application Builder) has improved in the area of performance. +- [x] We have linked **Minio** to the FAB UI, API and Database. +- [x] The **Executor** framework is being continuously improved. +- [x] Tags have been implemented across all {{< param replacables.brand_name >}} components. + +--- + +## Fixed fixed_red + +- [x] A number of general bug fixes have been implemented. + +--- + diff --git a/content/en/docs/Release_Notes/Release-Notes-2.0.1.md b/content/en/docs/Release_Notes/Release-Notes-2.0.1.md new file mode 100644 index 00000000..b666182e --- /dev/null +++ b/content/en/docs/Release_Notes/Release-Notes-2.0.1.md @@ -0,0 +1,41 @@ +--- +title: "Release 2.0.1" +tags: ["releases"] +categories: ["Release Notes"] +linkTitle: +weight: 100 +description: >- + New features, improvements and fixes provided to you in the 1st Quarter of the year 2022. + + +--- + + + + + +## New added_blue + +- [x] We have introduced the **Simple {{< param replacables.brand_name >}} Dashboard** (Landing Page/Dashboard) developed in ReactJS to provide insights and analytics around typical platform-related metrics mostly related to Data Ops and detailed event handling.
It can be fine-tuned and tailored to customer-specific needs. The details can be found under the [Landing Page(Dashboard)](/docs/user-guide/landing_page/ "LandingPage") subcategory in the User Guide. +- [x] The first version of the **Open API REST Server - Generator** has been built, which can be used for generating standardised REST APIs from the OpenAPI specification. +- [x] Created the **Dashboard API**, which is used for feeding various charts on the **{{< param replacables.brand_name >}} Dashboard**, including statistics for Executions by status, trigger type, average time of executions, number of executions per package etc. +- [x] Introduction of **manifest.json** files which can be uploaded with a package and used to define the package execution entrypoint (name of the script that will be executed), order of scripts execution, schedule, tags, trigger event, etc. +- [x] Added **Execution Context** to the fx_ef package, which is accessible to any .py script at runtime and can be used for fetching configuration, secrets, parameters, information of the executing package and for manipulating the package state. + +--- + +## Changed changed_yellow + +- [x] **PostgreSQL wrapper** was added to the **{{< param replacables.brand_name >}} CLI** + +--- + +## Improved improved_green + +- [x] Overall performance of the UI was enhanced + +--- + +## Fixed fixed_red + +- [x] Synchronisation of Git Repositories that contain empty packages diff --git a/content/en/docs/Release_Notes/Release-Notes-2.0.2.md b/content/en/docs/Release_Notes/Release-Notes-2.0.2.md new file mode 100644 index 00000000..eabaa598 --- /dev/null +++ b/content/en/docs/Release_Notes/Release-Notes-2.0.2.md @@ -0,0 +1,35 @@ +--- +title: "Release 2.0.2" +tags: ["releases"] +categories: ["Release Notes"] +linkTitle: +weight: 100 +description: >- + New features, improvements and fixes provided to you in the 2nd Quarter of the year 2022.
+ + +--- + +## New added_blue + +- [x] Introduction of the **Secrets Management UI & API** that can be used for securely storing and accessing secrets on the Platform and Project level. + +--- + +## Changed changed_yellow + +- [x] Re-enabling of the Project/Package/Git Repository/Execution **deletion feature**. +- [x] **Version 2 of the ferris_cli package** published to the public PyPI repository. + +--- + +## Improved improved_green + +- [x] **Executor fx_ef package** adjusted for local development. +- [x] Various changes on the **{{< param replacables.brand_name >}} UI**. + +--- + +## Fixed fixed_red + +- [x] no fixes to report diff --git a/content/en/docs/Release_Notes/Release-Notes-2.3.2.md b/content/en/docs/Release_Notes/Release-Notes-2.3.2.md new file mode 100644 index 00000000..fc157b7f --- /dev/null +++ b/content/en/docs/Release_Notes/Release-Notes-2.3.2.md @@ -0,0 +1,33 @@ +--- +title: "Release 2.3.2" +tags: ["releases"] +categories: ["Release Notes"] +linkTitle: +weight: 100 +description: >- + New features, improvements and fixes provided to you in the 2nd Quarter of the year 2023. + +--- + +## New added_blue + +- [x] We introduced a new **Skip Execution** button to prevent scheduled executions in the Ferris Executor from running. With this button, users can skip executions until further notice. With the same button, users can unskip executions and thus bring them back into the planned execution cycle. +- [x] We added the ability of **dynamic prioritisation of executions** by forcing one-time executions on different priority lanes than the service’s default. + +--- + +## Changed changed_yellow + +- [x] _No changes at this time_ + +--- + +## Improved improved_green + +- [x] Improved various User Interface and functional components to provide for a better and more intuitive user experience.
+ +--- + +## Fixed fixed_red + +- [x] Various small system improvements and bug fixes \ No newline at end of file diff --git a/content/en/docs/Release_Notes/Release-Notes-2.3.3.md b/content/en/docs/Release_Notes/Release-Notes-2.3.3.md new file mode 100644 index 00000000..9a64d5d9 --- /dev/null +++ b/content/en/docs/Release_Notes/Release-Notes-2.3.3.md @@ -0,0 +1,37 @@ +--- +title: "Release 2.3.3" +tags: ["releases"] +categories: ["Release Notes"] +linkTitle: +weight: 100 +description: >- + New features, improvements and fixes provided to you in the 3rd Quarter of the year 2023. + +--- + +## New added_blue + +- [x] Added a **re-run with edited parameters** function, enabling the re-triggering of jobs and changing run-time parameters on the fly. +- [x] Added **Live Logs** for the Executor (FX) and Event Based Kubernetes (K8X), enabling viewing of real-time logs as the jobs run. +- [x] Added a **Trigger button** to trigger events and store schemas directly from within the UI. +- [x] Added an **Event Bridge** to send requests outside the Ferris cluster. + +--- + +## Changed changed_yellow + +- [x] _No changes at this time_ + +--- + +## Improved improved_green + +- [x] Allow editing of **trigger parameters** when viewing an execution and re-running with those parameters +- [x] Various edits and improvements of the **Ferris Documentation**. +- [x] Various changes on the **{{< param replacables.brand_name >}} UI**. + +--- + +## Fixed fixed_red + +- [x] Various small system improvements and bug fixes \ No newline at end of file diff --git a/content/en/docs/Release_Notes/_index.md b/content/en/docs/Release_Notes/_index.md new file mode 100644 index 00000000..60c4a2fb --- /dev/null +++ b/content/en/docs/Release_Notes/_index.md @@ -0,0 +1,9 @@ +--- +title: "Release Notes" +linkTitle: "Release Notes" +weight: 108 +description: > + Quarterly release notes, including new features, upgrades and fixes.
+ +--- + diff --git a/content/en/docs/Security/Permissions.md b/content/en/docs/Security/Permissions.md new file mode 100644 index 00000000..58806358 --- /dev/null +++ b/content/en/docs/Security/Permissions.md @@ -0,0 +1,25 @@ +--- +title: "Permissions" +linkTitle: "Permissions" +tags: [security, access rights, permissions] +categories: [Security] +weight: 103 +description: >- + Introduction to the {{< param replacables.brand_name >}} concept of Permissions. + +--- + +## Roles + +This is us - humans - using {{< param replacables.brand_name >}} on a day-to-day basis. And in this section each user is listed with the most important attributes defining name, e-mail, status and - most importantly - the associated roles. + +> To get to the Users page, navigate to: Security > List Roles + +{{< blocks/screenshot color="white" image="/streamzero/images/security/list_users_page.png">}} + +*Example: List of Users* + + \ No newline at end of file diff --git a/content/en/docs/Security/Statistics.md b/content/en/docs/Security/Statistics.md new file mode 100644 index 00000000..27ba4ce2 --- /dev/null +++ b/content/en/docs/Security/Statistics.md @@ -0,0 +1,21 @@ +--- +title: "Statistics" +linkTitle: "Statistics" +tags: [security, access rights] +categories: [Security] +weight: 104 +description: >- + Statistics lists the counts of successful as well as failed logins by user. + +--- + +## Users Statistics + +Here a Security Lead finds useful information on any user's successful as well as failed login attempts.
+ +Navigate to: Security > User's Statistics + +{{< blocks/screenshot color="white" image="/streamzero/images/security/users_statistics.png">}} + +*Example: Users Statistics > Login count* + diff --git a/content/en/docs/Security/_index.md b/content/en/docs/Security/_index.md new file mode 100644 index 00000000..83201db9 --- /dev/null +++ b/content/en/docs/Security/_index.md @@ -0,0 +1,56 @@ +--- +title: "Security" +linkTitle: "Security" +tags: [security, access rights] +weight: 106 +categories: [Security] +description: >- + Introduction and "how-to" guide to the {{< param replacables.brand_name >}} Security and Access Rights Management. + +--- + +## Concept + +{{< param replacables.brand_name >}} takes a multi-layered and integrative approach to security and access rights management, protecting systems, networks, users and data alike. + +While the security architecture of {{< param replacables.brand_name >}} stands alone and operates well in isolation, it is built to integrate with enterprise security systems based on LDAP and Active Directory. + +It supports Single Sign On (SSO) through open protocols such as Auth0 and SAML. + +This user guide focuses on the application-internal - user-controlled - aspects of the security functions. + + +## Approach + +{{< param replacables.brand_name >}} applies the established notion of _Users_, _Roles_ and _Permissions_ and links them to application elements such as _Menus_, _Views_ and _Pages_. + +This approach enables the breaking of the application into granular elements and organizing them into groups of like access control areas. The ultimate benefit is the implementation of user rights on a strict "need-to-know" basis. + + +## Security Components + +The following sections describe how the security components work and how to set them up for your purpose. + +> If you want to follow the instructions and examples, you first need to connect to your {{< param replacables.brand_name >}} demo instance.
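The Users → Roles → Permissions model described in the Approach section can be sketched in a few lines. This is an illustrative sketch only; the `Role`, `User` and `can_access` names below are hypothetical and not part of the platform API:

```python
# Illustrative sketch of the Users -> Roles -> Permissions model.
# The names here (Role, User, can_access) are hypothetical and do not
# reflect the platform's actual API.
from dataclasses import dataclass, field


@dataclass
class Role:
    name: str
    permissions: set = field(default_factory=set)  # e.g. {"can_show on Users"}


@dataclass
class User:
    name: str
    roles: list = field(default_factory=list)


def can_access(user: User, permission: str) -> bool:
    """A user holds a permission if any of their roles grants it."""
    return any(permission in role.permissions for role in user.roles)


viewer = Role("Viewer", {"can_show on Users"})
admin = Role("Admin", {"can_show on Users", "can_edit on Users"})

alice = User("alice", roles=[viewer])
bob = User("bob", roles=[viewer, admin])

print(can_access(alice, "can_edit on Users"))  # False - Viewer cannot edit
print(can_access(bob, "can_edit on Users"))    # True - granted via Admin role
```

Because permissions are attached to roles rather than to users directly, granting or revoking access is a matter of editing one role, which is what keeps the "need-to-know" setup manageable.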
+ +### Navigation + +The Security menu is found on the left-hand navigation of Ferris. Click on the Security menu to expand it and display all security-relevant menu items. + +{{< blocks/screenshot color="white" image="/streamzero/images/security/security_navigation.png">}} + +**{{< param replacables.brand_name >}} Security Navigation** + +- [**List Users:**]({{< ref "list_users.md" >}}) Set up individual users and assign one or more roles to them. If {{< param replacables.brand_name >}} is integrated with a company-own Single Sign-On, here is where all users can be viewed. Each user may be deactivated manually. +- [**List Roles:**]({{< ref "list_roles.md" >}}) Set up and maintain individual roles and assign them viewing, editing, executing and other rights pertinent to the character and scope of the role. Roles can be integrated with and inherited from the company Active Directory. + +**For Security Administrators only** +These menu items can only be seen and accessed with the prerequisite *Administrator* rights as granted in the *User* section. + +- [**Users Statistics:**]({{< ref "statistics.md" >}}) Useful graphical statistics displaying the login behavior of individual users, such as login count and failed logins. +- [**User Registrations:**]({{< ref "" >}}) Listing pending registration requests. +- [**Base Permissions:**]({{< ref "permissions.md" >}}) Listing the base permissions. +- [**Views / Menus:**]({{< ref "" >}}) Listing of all Menu and View (aka Pages, UI) items. +- [**Permissions on Views/Menus:**]({{< ref "" >}}) Listing of the assigned permissions of each Menu and View element. + +> NOTE that it is considered a good practice that security-related tasks are provided to only a few dedicated Security Leads within the organization.
For that purpose, setting up a dedicated **Security Lead** role is advised. diff --git a/content/en/docs/Security/list_roles.md b/content/en/docs/Security/list_roles.md new file mode 100644 index 00000000..e1e4b85d --- /dev/null +++ b/content/en/docs/Security/list_roles.md @@ -0,0 +1,26 @@ +--- +title: "Roles" +linkTitle: "Roles" +tags: [security, access rights, roles] +categories: [Security] +weight: 102 +description: >- + Introduction of the Roles concept, including the meaning and application of permissions. + +--- + +## Roles + +Roles represent a collection of permissions (or rights) defining what can be performed within {{< param replacables.brand_name >}}. Each role is tailored to a specific set of responsibilities and tasks within {{< param replacables.brand_name >}}. + +To get to the Roles page, navigate to: Security > List Roles + +{{< blocks/screenshot color="white" image="/streamzero/images/security/list_roles_page.png">}} +*Example: List of {{< param replacables.brand_name >}} Roles* + +Note that each of these capabilities depends on the Permissions given to your Role. Some roles may be given full rights (e.g. add, show, edit, delete), while others may only be given viewing rights (e.g. show). As a result, some users may only see the "Show Roles" magnifying glass icon. + + \ No newline at end of file diff --git a/content/en/docs/Security/list_users.md b/content/en/docs/Security/list_users.md new file mode 100644 index 00000000..ce5cf42e --- /dev/null +++ b/content/en/docs/Security/list_users.md @@ -0,0 +1,26 @@ +--- +title: "Users" +linkTitle: "Users" +tags: [security, access rights, users] +categories: [Security] +weight: 101 +description: >- + Understanding the meaning, role and setup of Users within {{< param replacables.brand_name >}}. + +--- + +## Users + +This is us - humans - using {{< param replacables.brand_name >}} on a day-to-day basis.
Each user is listed with the most important attributes defining name, e-mail, status and - most importantly - the associated roles. + +To get to the Users page, navigate to: Security > List Users + +{{< blocks/screenshot color="white" image="/streamzero/images/security/list_users_page.png">}} +*Example: List of {{< param replacables.brand_name >}} Users* + +In the security context, a user is assigned one or more roles, each containing a set of Permissions. Hence, there is a loose connection between Users and the Permissions they are given. + + \ No newline at end of file diff --git a/content/en/docs/Solutions/001 Prospect 360 - Finance.md b/content/en/docs/Solutions/001 Prospect 360 - Finance.md new file mode 100644 index 00000000..cbd1f79c --- /dev/null +++ b/content/en/docs/Solutions/001 Prospect 360 - Finance.md @@ -0,0 +1,103 @@ +--- +title: Prospecting 360 Degree +category: use case +industries: [Financial Service, Insurance] +owner: Tom Debus +tags: [kyc, knowyourcustomer, finance, financialservices, onboarding, prospecting, prospect360] +clients: [UBS, CS, BJB, VPB, Newcent] +--- + +## Prospect 360° + + +### Executive summary +For many strategic prospects, especially in banking and insurance, the preparation of possible offers and establishment of a real relationship either involves great effort or lacks structure and focus. The Prospect 360° use case augments traditional advisor intelligence with automation to improve this original dilemma. + + +### Problem statement +Hunting for new important clients is usually driven by referrals and the search for an “ideal event“ to introduce a product or service. Existing client relationships are usually screened manually and approached directly to request an introduction, prior to offering any services. Monitoring the market and a prospect’s connections can be cumbersome and is error-prone – either introductions are awkward or they do not focus on a specific and urgent need.
Hence, success and conversion rates are hard to plan. + + +### Target market / Industries +This use case is traditionally applicable to industries where the customer engagement and acquisition process is long and costs per customer are high: + +* Financial Services +* Insurance +* General Business Services + + + +### Solution +We introduce the idea of “soft onboarding”. Instead of selling hard to a new prospect, we start to engage them with tailored and relevant pieces of information or advice free of charge. We do, however, tempt this prospect to embrace little initial pieces of an onboarding-like process, extending the period we are allowed to profile the needs and preferences of the client and the related social graph. Turning a prospect into an interested party and then increasing the levels of engagement over a period of up to six months allows for a more natural and client-driven advisory experience that shifts from a “product push” towards a “client pull”. + +The solution included: + +- Integration of disparate news & event information sources (licensed & public origins) +- Provisioning of select RM & client data points to understand social graphs +- Word parsing of text-based inputs (e.g.
news articles and liquidity event streams) +- Onboarding & Sales Ontology matching +- Identification of possible liquidity events, new referral paths & sales topics of interest +- Aggregation of findings, reporting, notifications and organizational routing +- Ideally inclusion of reinforcement learning (via RM, client & assistant feedback loops) + + + +##### Example Use Case Agent Cascade +{{< blocks/screenshot color="white" image="/streamzero/images/solutions/uc_001_2021.png">}} + + + +### Stakeholders +- Relationship Management +- Sales +- Marketing + + +### Data elements, Assets and Deliverables + +As an Input from the client, the following items were used: + +- Sales organization setup (desks / books) + +- Client to Client / Client to Company graphs + +##### Capabilities utilized: + +- Unstructured Data +- Semantic Harmonization +- NLP +- Personalization + +##### Assets & Artefacts: + +- Financial Product Ontology +- Analytical CRM Models + +##### The deliverables included: + +- Sales & Onboarding Ontology +- Use case specific orchestration flow +- Integration with many info sources + + +### Impact and benefits +The strategic client team originally covered 200 prospects manually. Introducing Prospect 360° allowed them to double that number while reducing the time-to-close by 35% — from more than 12 to an average of about 7 months. + +The use-case implementation resulted in: + +**+8%** growth of corporate loan book + +**+22%** reduction in credit defaults + + + +### Testimonials + +> "I feel a lot more as a real advisor. I can be helpful and feel informed. And I still can make my own judgments of what is relevant for my personal relationship to existing clients and new referrals. I learn as the system learns." + +— Mr.
Pius Brändli, Managing Director, Credit Suisse + + + +### Tags / Keywords +#kyc, #knowyourcustomer, #finance, #financialservices, #onboarding, #prospecting, #prospect360 diff --git a/content/en/docs/Solutions/001 Prospect 360 - General.md b/content/en/docs/Solutions/001 Prospect 360 - General.md new file mode 100644 index 00000000..666589f7 --- /dev/null +++ b/content/en/docs/Solutions/001 Prospect 360 - General.md @@ -0,0 +1,80 @@ +--- +title: Prospecting 360 Degree - general +category: use case +industries: [General Services, B2B Services] +owner: Tom Debus +tags: [salesfunnel, salesautomation, newclients, prospecting, prospect360] +clients: [Wagner, Stark Gruppe] +--- + +## Prospect 360° + +### Executive summary +For many strategic prospects the preparation of possible offers and establishment of a real relationship either involves great effort or lacks structure and focus. The Prospect 360° use case augments traditional sales and marketing intelligence with automation to improve this original dilemma. + + +### Problem statement +Hunting for new important clients is usually driven by advertising, referrals and the search for an “ideal event“ to introduce a product or service. Existing client relationships are usually screened manually and approached directly to request an introduction, prior to offering any services. Monitoring the market and a prospect’s connections can be cumbersome and is error-prone – either introductions are awkward or they do not focus on a specific and urgent need. Hence, success and conversion rates are hard to plan. + + +### Target market / Industries +This use case is suitable for industries where the customer engagement and acquisition process is long and costs per customer are high. + + +### Solution +We introduce the idea of “soft onboarding”. Instead of selling hard to a new prospect, we start to engage them with tailored and relevant pieces of information or advice free of charge.
We do, however, tempt this prospect to embrace little initial pieces of an onboarding-like process, extending the period we are allowed to profile the needs and preferences of the client and the related social graph. Turning a prospect into an interested party and then increasing the levels of engagement over a period of up to six months allows for a more natural and client-driven sales experience that shifts from a “product push” towards a “client pull”. + +The solution included: +- Integration of disparate news & event information sources (licensed & public origins) +- Provisioning of select sales & client data points to understand social graphs +- Word parsing of text-based inputs (e.g. news articles) +- Onboarding & Sales Ontology matching +- Identification of new referral paths & sales topics of interest +- Aggregation of findings, reporting, notifications and organizational routing +- Ideally inclusion of reinforcement learning (via sales, client & assistant feedback loops) + + + +##### Example Use Case Agent Cascade +{{< blocks/screenshot color="white" image="/streamzero/images/solutions/uc_001_2021.png">}} + + + + +### Stakeholders +- Relationship Management +- Sales +- Marketing + + +### Data elements, Assets and Deliverables + +As an Input from the client, the following items were used: +- Sales organization setup (desks / books) +- Client to Client / Client to Company graphs + +##### Capabilities utilized: + +- Unstructured Data +- Semantic Harmonization +- NLP +- Personalization + +##### Assets & Artefacts: + +- Product Ontology +- Analytical CRM Models + +##### The deliverables included: + +- Sales & Onboarding Ontology +- Use case specific orchestration flow +- Integration with many info sources + + + + + + +### Tags / Keywords +#newclients #salesfunnel #sales #marketing #salesautomation #prospecting #prospect360 diff --git a/content/en/docs/Solutions/002 Idea to Trade - Finance.md b/content/en/docs/Solutions/002 Idea to Trade - Finance.md new
file mode 100644 index 00000000..bff7b7c7 --- /dev/null +++ b/content/en/docs/Solutions/002 Idea to Trade - Finance.md @@ -0,0 +1,87 @@ +--- +title: Idea to Trade / Next Best Product - Financial Services +category: use case +industries: [Financial Services, Insurance] +owner: Tom Debus +tags: [ideatotrade, nextbestproduct, salesadvice, financialservices, bank, insurance, investment] +clients: [ ] +--- + +## Idea to Trade / Next Best Product + +### Executive summary +To support advisors and clients with a “next best product” recommendation, a closed loop flow has been established +from Research / Chief Investment Officer to Relationship Managers and eventually to the client. +Evaluating which recommendations worked for RMs and Clients allowed for a learning loop informing Research & CIO +to improve selection & tailoring of investment themes. + +### Problem statement +The information flow from research or strategic asset allocation (CIO) to client advisors and eventually to clients rarely follows +a structured path. Instead the bank’s “house view” is communicated broadly to all front-office staff and portfolio managers. +They then use their direct client relationship to assess risk appetite and extract specific investment themes or +ideas from their client interaction. If these match, the resulting research / advice is forwarded to the client. +Whether product information leads to a trade / product sale often seems like a matter of luck. + +### Target market / Industries +The challenge of tailoring the offering to the customer profile is common across industries. +The industries that benefit most from this use case are: +- Banks +- Investment and finance firms +- Real estate brokers +- Tax and accounting firms +- Insurance companies + +### Solution +Starting with investment themes / product and occasion specific sales / investment opportunities, +the existing client’s portfolios and client to bank communication are screened for possible gap / fit. +Research or asset allocation can then focus their efforts on topics suggested by front-office staff and / or clients themselves. +By observing and identifying trade successes, best practices are understood and can be multiplied across other (similar) client scenarios. +Asset allocation and advisors work collaboratively as they both evaluate which information / proposals / investment ideas are +forwarded to clients (and then accepted or not) and which ones are kept back by advisors (and for what reasons). + +The solution included: +- Clustering & topic mapping of existing marketing material & client portfolio structures +- Optional inclusion of CRM notes and written advisor to client communication +- Sales Ontology setup & learning loop inclusion & topic matching +- Identification of individual investment themes / topics of interest +- Aggregation of findings, reporting, alerting & action recommendation + +### Stakeholders +- Chief Investment Officers (CIO) +- Client Advisors / Relationship Managers +- Product Managers + +### Data elements, Assets and Deliverables + +As an Input from the client, the following items were used: +- Sales organization setup (desks / books) +- Client to Client / Client to Company graphs + +Capabilities utilized: +- Unstructured Data +- Semantic Harmonization +- Natural Language Processing +- Personalization + +Assets & Artefacts: +- Financial Product Ontology +- Analytical CRM Models + +The deliverables included: +- Sales & Onboarding Ontology +- Use case specific orchestration flow +- Integration with many info sources + +### Impact and benefits
+Proposal / offer conversion rates were increased by 42% after an initial learning curve & algorithm calibration phase of 6 months +resulting in additional Assets under Management growth of 8% from targeted clients. + +The use-case implementation resulted in: + +**+18%** increased targeted product sales + +**+8%** share of wallet + +### Tags / Keywords +#ideatotrade #nextbestproduct #salesadvice #financialservices #bank #insurance #investment \ No newline at end of file diff --git a/content/en/docs/Solutions/002 Idea to Trade - General.md b/content/en/docs/Solutions/002 Idea to Trade - General.md new file mode 100644 index 00000000..d8685103 --- /dev/null +++ b/content/en/docs/Solutions/002 Idea to Trade - General.md @@ -0,0 +1,64 @@ +--- +title: Idea to Trade / Next Best Product - General +category: use case +industries: [crossindustry] +owner: Tom Debus +tags: [ideatotrade, nextbestproduct, salesadvice, crossindustry] +clients: [ ] +--- + +## Idea to Trade / Next Best Product recommendation + +### Executive summary +To support sales people and clients with a “next best product” recommendation, a closed loop flow has been established +from Marketing / Research to Sales and eventually to the client. +Evaluating which recommendations worked for Sales and Clients allowed for a learning loop informing Product Management +to improve selection & tailoring of the offerings. + +### Problem statement +The information flow from Development and Product Management to Sales and Marketing and eventually to clients rarely follows +a structured path. Instead the company’s “house view” is communicated broadly to all staff. +Sales then use their direct client relationship to assess customer needs and extract specific requirements or +ideas from their client interaction. If these match, the resulting research / advice is forwarded to the client. +Whether product information leads to a trade / product sale often seems like a matter of luck.
+ +### Target market / Industries +Any industry which has enough data about the customer to make a recommendation for the next action / product will greatly benefit from this use-case. + +### Solution +Starting with product review and occasion specific sales opportunities, the existing client’s portfolios and client communication are screened for possible gap / fit. Research / Product Management can then focus their efforts on topics suggested by Sales and Marketing staff and / or clients themselves. By observing and identifying trade successes, best practices are understood and can be multiplied across other (similar) client scenarios. Product Management and Sales work collaboratively as they both evaluate which information / proposals / products are presented to clients (and then accepted or not) and which ones are kept back (and for what reasons). + +The solution included: +- Clustering & topic mapping of existing marketing material & client portfolio structures +- Optional inclusion of CRM notes and written Sales to client communication +- Sales Ontology setup & learning loop inclusion & topic matching +- Identification of individual products / topics of interest +- Aggregation of findings, reporting, alerting & action recommendation + +### Stakeholders +- Top Management +- Sales and Marketing +- Product Managers + +### Data elements, Assets and Deliverables +As an Input from the client, the following items were used: +- Sales organization setup (desks / books) +- Client to Client / Client to Company graphs + +Capabilities utilized: +- Unstructured Data +- Semantic Harmonization +- NLP +- Personalization + +Assets & Artefacts: +- Product Ontology +- Analytical CRM Models + +The deliverables included: +- Sales & Onboarding Ontology +- Use case specific orchestration flow +- Integration with many info sources + +### Tags / Keywords +#ideatotrade #nextbestproduct #salesadvice #crossindustry diff --git "a/content/en/docs/Solutions/003 Churn Analysis and Alerting
\342\200\223 Finance.md" "b/content/en/docs/Solutions/003 Churn Analysis and Alerting \342\200\223 Finance.md" new file mode 100644 index 00000000..544ee174 --- /dev/null +++ "b/content/en/docs/Solutions/003 Churn Analysis and Alerting \342\200\223 Finance.md" @@ -0,0 +1,68 @@ +--- +title: Churn Analysis and Alerting – Financial Services +category: use case +industries: [Financial Services, Insurance] +owner: Tom Debus +tags: [churn, churnanalysis, financialservices, insurance, bank, retention, clientretention, customerretention] +clients: [UBS] +--- + +## Churn Analysis and Alerting + +### Executive summary +In any industry, but in Financial Services in particular, screening your existing client population for signs of dissatisfaction and pending attrition can involve a broad range of analysis. Usually the focus is given to transaction pattern analysis. And while this may prove helpful, it can be misleading in smaller banks with limited comparative data. We thus integrate a broader variety of less obvious indicators and include an advisor-based reinforcement loop to train the models for a bank’s specific churn footprint. + +### Problem statement +When clients close their accounts or cancel their mandates, it usually does not come as a surprise to the Relationship Manager (RM). But for obvious reasons, the RM tries to work against the loss of a client with similar if not the same tools, processes and attitudes that led to the client being dissatisfied. This is not to say that the RM is the sole reason for churn. But often clients do not voice their issues, or not sufficiently, and simply quit the relationship. Searching, becoming aware and then listening for softer and indirect signs is at the heart of this use case. + +### Target market / Industries +The described use case can be efficiently applied to any industry that provides services to a large number of clients and has a large number of transactions.
+The following industries benefit most from this use case: +- Financial services +- Insurance + +### Solution +Using historical data, client communication and structured interviews with client advisors, we create a bank-specific churn ontology that is then used to screen existing clients on an ongoing basis. Creating an interactive reinforcement loop with new churn cases, this classification, predictor and indicator approach is ever more fine-tuned to specific segments, desks and service categories. As direct and ongoing Key Performance Indicators (KPIs), churn ratios and client satisfaction are measured alongside Assets under Management (AuM), profitability and trade volumes for the clients classified as “endangered”. +Usually a gradual improvement can be monitored within 3-6 months from the start of the use case. + +The solution included: +- Initial typical churn cause analysis based on historical data (client positions and transactions) +- Ideally inclusion of CRM notes and written advisor to client communication (prior to churn) +- Sales & Churn Ontology setup & subsequent ontology matching +- Identification of likely bank churn footprint & learning / improvement loops +- Aggregation of findings, reporting, alerting & action notifications + +### Stakeholders +- Chief Operations +- Client advisory, Relationship and Sales management + +### Data elements, Assets and Deliverables +As an Input from the client, the following items were used: +- Historic data about churned clients +- Client portfolios - positions / transactions +- Ideally pre-leave client-advisor communications + +Assets & Artefacts: +- Client Behavioral Model +- Churn Prediction +- Action Monitoring + +The deliverables included: +- Sales & Churn Indicator Ontology +- Use case specific orchestration flow + +### Impact and benefits +Lowered churn rates for distinct client segments by 16% after 6 months.
Increased AuM / trades for clients “turned around” by about 25% within 6 months after “re-win”. + +The use-case implementation resulted in: + ++8% clients saved prior to loss of relationship + ++24% reduction of customer asset outflows + +### Testimonials +> "Changing the attitude with which we deal with churn, from feeling like a failure to working a structured process, made all the difference. Turning around a dissatisfied client is now something transparent and achievable." +> — Mr. Roland Giger, Head of Client Book Development, UBS + +### Tags / Keywords +#churn #churnanalysis #financialservices #insurance #bank #retention #clientretention #customerretention diff --git "a/content/en/docs/Solutions/003 Churn Analysis and Alerting \342\200\223 General.md" "b/content/en/docs/Solutions/003 Churn Analysis and Alerting \342\200\223 General.md" new file mode 100644 index 00000000..a9fa347b --- /dev/null +++ "b/content/en/docs/Solutions/003 Churn Analysis and Alerting \342\200\223 General.md" @@ -0,0 +1,57 @@ +--- +title: Churn Analysis and Alerting – General +category: use case +industries: [Retail, Entertainment, Mass Media] +owner: Tom Debus +tags: [churn, churnanalysis, retail, entertainment, massmedia, retention, clientretention, customerretention] +clients: [ ] +--- + +## Churn Analysis and Alerting + +### Executive summary +Screening your existing client population for signs of dissatisfaction and pending attrition can involve a broad range of analysis. Usually the focus is given to transaction pattern analysis. And while this may prove helpful, it can be misleading in smaller companies with limited comparative data. We thus integrate a broader variety of less obvious indicators and include an advisor-based reinforcement loop to train the models for a company's specific churn footprint. + +### Problem statement +When clients close their accounts or cancel their subscriptions, it usually does not come as a surprise to the sales management.
But for obvious reasons, the sales manager tries to work against the loss of a client with similar if not the same tools, processes and attitudes that led to the client being dissatisfied. This is not to say that the sales manager is the sole reason for churn. But often clients do not voice their issues, or not sufficiently, and simply quit the relationship. Searching, becoming aware and then listening for softer and indirect signs is at the heart of this use case. + +### Target market / Industries +The described use case can be efficiently applied to any industry that provides services to a large number of clients and has a large number of transactions. +The following industries benefit most from this use case: +- Retail +- Entertainment +- Mass media + +### Solution +Using historical data, client communication and structured interviews with sales people, we create a company-specific churn ontology that is then used to screen existing clients on an ongoing basis. Creating an interactive reinforcement loop with new churn cases, this classification, predictor and indicator approach is ever more fine-tuned to specific segments and service categories. As direct and ongoing KPIs, churn ratios and client satisfaction are measured alongside generated revenues and profitability for the clients classified as “endangered”. +Usually a gradual improvement can be monitored within 3-6 months from the start of the use case.
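The ongoing screening described above can be illustrated with a small sketch. This is not the delivered solution: the indicator terms, weights and threshold below are invented for illustration, while a production churn ontology would be derived from historical churn cases and the structured interviews.

```python
# Toy churn screening: match client touchpoint notes against weighted
# indicator terms and flag clients above a threshold as "endangered".
# All terms, weights and the threshold are hypothetical.
CHURN_INDICATORS = {
    "complaint": 0.4,
    "competitor": 0.3,
    "cancel": 0.5,
    "fee dispute": 0.2,
}
ENDANGERED_THRESHOLD = 0.6

def churn_score(notes: list[str]) -> float:
    """Sum the weights of all indicator terms found in the notes, capped at 1.0."""
    text = " ".join(notes).lower()
    return min(sum(w for term, w in CHURN_INDICATORS.items() if term in text), 1.0)

def classify(notes: list[str]) -> str:
    return "endangered" if churn_score(notes) >= ENDANGERED_THRESHOLD else "stable"

print(classify(["Client raised a complaint", "mentioned a competitor offer"]))  # endangered
print(classify(["Routine portfolio review"]))  # stable
```

In the reinforcement loop, confirmed churn cases would feed back into the term list and weights.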
+ +The solution included: +- Initial typical churn cause analysis based on historical data (client positions and transactions) +- Ideally inclusion of CRM notes and written sales to client communication (prior to churn) +- Sales & Churn Ontology setup & subsequent ontology matching +- Identification of likely company churn footprint & learning / improvement loops +- Aggregation of findings, reporting, alerting & action notifications + +### Stakeholders +- Chief Executive +- Chief Operations +- Sales and Marketing + +### Data elements, Assets and Deliverables +As an Input from the client, the following items were used: +- Historic data about churned clients +- Client portfolios - positions / transactions +- Ideally pre-leave client-advisor communications + +Assets & Artefacts: +- Client Behavioral Model +- Churn Prediction +- Action Monitoring + +The deliverables included: +- Sales & Churn Indicator Ontology +- Use case specific orchestration flow + +### Tags / Keywords +#churn #churnanalysis #retail #entertainment #massmedia #retention #clientretention #customerretention \ No newline at end of file diff --git a/content/en/docs/Solutions/004 Onboarding and Fraud Remodelling - Finance.md b/content/en/docs/Solutions/004 Onboarding and Fraud Remodelling - Finance.md new file mode 100644 index 00000000..22117d15 --- /dev/null +++ b/content/en/docs/Solutions/004 Onboarding and Fraud Remodelling - Finance.md @@ -0,0 +1,58 @@ +--- +title: Onboarding and Fraud Remodelling - Financial Services +category: use case +industries: [Financial Services] +owner: Tom Debus +tags: [fraud, fraudremodelling, compliance, kyc, financialservices, bank] +clients: [ ] +--- + +## Onboarding and Fraud Remodelling + +### Executive summary +In Financial Services more than in any other industry, leveraging an industry-proven onboarding and Know Your Customer (KYC) ontology and fine-tuning it to your corporate compliance policies is an absolute must-have.
Ensure manual processing, existing triggers and information sources are tied together by a unified and harmonized process supporting front-office and compliance simultaneously. + +### Problem statement +Many financial service providers struggle with the complexity and inefficiency of their client onboarding and recurring KYC monitoring practice. And many have issues when it comes to Anti-Money Laundering (AML), source of funds and transaction monitoring compliance. The front-office staff often tries to cut corners and the compliance staff is overwhelmed with the number of cases and follow-ups that are required from them. Disintegrated and high-maintenance systems and processes are the usual status quo, with little budget or energy to change, due to the inherent risk. + +### Target market / Industries +The use case is primarily applicable to industries that are exposed to fraud and where fraud tracking and prevention is needed. The financial services industry benefits most from this use case. + +### Solution +We combine the domain knowledge of what is really required by law and regulation with the opportunity to automate many aspects of the background screening and adverse media monitoring. By integrating the robustness of process of global players and the lean and mean approach FinTech startups take, we are usually able to raise quality while reducing effort. Creating a centralized compliance officer workbench that is integrated with both front-office systems as well as risk-management and compliance tools, we are able to iteratively improve the situation by synchronizing the learning of the models and predictions with the feedback from compliance experts.
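The automation of standard cases described above can be sketched as a simple triage by model risk score. The thresholds and the field name below are illustrative assumptions, not part of the described workbench; in practice they would be calibrated against the compliance-officer feedback loop.

```python
def triage_case(case: dict, auto_clear: float = 0.1, escalate: float = 0.7) -> str:
    """Route an onboarding / KYC case by a (hypothetical) model risk score."""
    risk = case["risk_score"]
    if risk < auto_clear:
        return "auto-cleared"      # standard case, no manual review needed
    if risk >= escalate:
        return "escalated"         # pre-filled and queued for a compliance officer
    return "manual-review"         # routed to the workbench queue

print(triage_case({"risk_score": 0.05}))  # auto-cleared
print(triage_case({"risk_score": 0.85}))  # escalated
```

Raising or lowering the two thresholds directly trades off automation rate against review workload.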
+ +The solution included: +- Integrated Compliance Officer Workbench +- Onboarding & Compliance Ontology configuration & subsequent ontology matching +- Integration of existing screening & trigger sources with learning / improvement loops +- Automation of standard cases and pre-filling of high-probability issues +- Aggregation of findings, reporting, alerting & compliance action notifications + +### Stakeholders +- Compliance +- Security + +### Data elements, Assets and Deliverables +As an Input from the client, the following items were used: +- Existing client onboarding / KYC policies +- KYC/AML/onboarding management cases +- Client‘s contracts, positions & transactions + +Assets & Artefacts: +- Client risk attributes +- Historic client behavior information + +The deliverables included: +- Compliance & KYC Ontology +- Compliance Officer Workbench +- Use case specific orchestration flow + +### Impact and benefits +Decreased the compliance team by 30% from 18 down to 12 Full-Time Employees by automating the standard case load. Increased compliance quality and decreased client case resolution time by eliminating aspects not required by the current jurisdictional scope. + +### Testimonials +> "This is a compliance expert‘s dream come true. Before, I never had an overview of where I or my team were standing. Now we can actually support our client facing colleagues." +> — Mr. XXX YYY, Title, Company ZZZ.
+ +### Tags / Keywords +#fraud #fraudremodelling #compliance #kyc #financialservices #bank \ No newline at end of file diff --git a/content/en/docs/Solutions/004 Onboarding and Fraud Remodelling - General.md b/content/en/docs/Solutions/004 Onboarding and Fraud Remodelling - General.md new file mode 100644 index 00000000..9cbe40c0 --- /dev/null +++ b/content/en/docs/Solutions/004 Onboarding and Fraud Remodelling - General.md @@ -0,0 +1,51 @@ +--- +title: Onboarding and Fraud Remodelling - General +category: use case +industries: [General Services, B2B Services] +owner: Tom Debus +tags: [fraud, fraudremodelling, frauddetection, fraudtracking, compliance, kyc] +clients: [ ] +--- + +## Onboarding and Fraud Remodelling + +### Executive summary +Leverage an industry-proven onboarding and Know Your Customer (KYC) ontology and fine-tune it to your corporate compliance policies. Ensure manual processing, existing triggers and information sources are tied together by a unified and harmonized process supporting sales and compliance simultaneously. + +### Problem statement +Many service providers struggle with the complexity and inefficiency of their client onboarding and recurring KYC monitoring practice. And many have issues when it comes to monitoring various types of client activity. The front-office staff often tries to cut corners and the compliance staff is overwhelmed with the number of cases and follow-ups that are required from them. Disintegrated and high-maintenance systems and processes are the usual status quo, with little budget or energy to change, due to the inherent risk. + +### Target market / Industries +The use case is primarily applicable to industries that are exposed to fraud and where fraud tracking and prevention is needed. + +### Solution +We combine the domain knowledge of what is really required by law and regulation with the opportunity to automate many aspects of the background screening and adverse media monitoring.
By integrating the robustness of process of global players and the lean and mean approach, we are usually able to raise quality while reducing effort. Creating a centralized compliance officer workbench that is integrated with both front-office systems as well as risk-management and compliance tools, we are able to iteratively improve the situation by synchronizing the learning of the models and predictions with the feedback from compliance experts. + +The solution included: +- Integrated Compliance Officer Workbench +- Onboarding & Compliance Ontology configuration & subsequent ontology matching +- Integration of existing screening & trigger sources with learning / improvement loops +- Automation of standard cases and pre-filling of high-probability issues +- Aggregation of findings, reporting, alerting & compliance action notifications + +### Stakeholders +- Compliance +- Security + +### Data elements, Assets and Deliverables +As an Input from the client, the following items were used: +- Existing client onboarding / KYC policies +- KYC / onboarding management cases +- Client‘s contracts, positions & transactions + +Assets & Artefacts: +- Client risk attributes +- Historic client behavior information + +The deliverables included: +- Compliance & KYC Ontology +- Compliance Officer Workbench +- Use case specific orchestration flow + +### Tags / Keywords +#fraud #fraudremodelling #frauddetection #fraudtracking #compliance #kyc \ No newline at end of file diff --git a/content/en/docs/Solutions/005 Regulatory Single Source of Truth.md b/content/en/docs/Solutions/005 Regulatory Single Source of Truth.md new file mode 100644 index 00000000..7809d86c --- /dev/null +++ b/content/en/docs/Solutions/005 Regulatory Single Source of Truth.md @@ -0,0 +1,66 @@ +--- +title: Regulatory Single Source of Truth +category: use case +industries: [Financial Services, Insurance] +owner: Tom Debus +tags: [singlesourceoftruth, ssot, bank, privatebank, financialservices, insurance] +clients:
[ ] +--- + +## Regulatory Single Source of Truth + +### Executive summary +Leveraging all existing data sources from core banking, risk- and trading systems to the Customer Relationship Management (CRM) and general ledger as granular input for your regulatory Single Source of Truth (SSoT). + +### Problem statement +Most regulatory solutions today require huge maintenance effort from both business and technology teams. Ever more granular and ever more near-time regulatory requirements further increase this pressure. Usually the various regulatory domains have created and continue to create silos for central bank, credit risk, liquidity, Anti-Money Laundering (AML) / Know Your Customer (KYC) and transaction monitoring regulations. Further requirements from ePrivacy, Product Suitability and Sustainability regulations dilute these efforts even further. + +### Target market / Industries +- Financial services +- Insurance + +### Solution +The semantic integration capabilities of the {{< param replacables.brand_name >}} Data Platform allow you to reuse all the integration efforts you have previously started and yet converge on a common path towards an integrated (regulatory) enterprise view. The ability to eliminate high-maintenance Extraction Transformation Loading (ETL) coding or ETL tooling in favor of a transparent and business-driven process will save you money during the initial implementation and during ongoing maintenance. +Templates and a proven process were applied to use what exists and build what’s missing without long-term lock-in.
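The business-driven approach described above replaces hand-written ETL logic with declarative, reusable rules. As a minimal sketch, such a rule can be a named predicate over a record; the field names and rules below are illustrative assumptions, not taken from any real bank data model.

```python
# Illustrative reusable data-quality business checks, expressed as named
# predicates over a record. Field names and rules are hypothetical.
from typing import Callable

CHECKS: list[tuple[str, Callable[[dict], bool]]] = [
    ("currency is a 3-letter code", lambda r: len(r.get("currency", "")) == 3),
    ("notional is non-negative", lambda r: r.get("notional", 0) >= 0),
    ("counterparty id present", lambda r: bool(r.get("counterparty_id"))),
]

def run_checks(record: dict) -> list[str]:
    """Return the names of all checks the record fails."""
    return [name for name, check in CHECKS if not check(record)]

print(run_checks({"currency": "CHF", "notional": -100, "counterparty_id": "CP-1"}))
# ['notional is non-negative']
```

Because each check is named data rather than code wired into a pipeline, the same library can be reused across regulatory, analytics and risk consumers.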
+ +The solution included: +- Semantic Integration leveraging all your prior integration investments +- Business driven data standardization and data quality improvements +- No Code implementation => business analysis is sufficient to generate the integration layer +- Implement data governance & data quality via reusable business checks +- Multiply your regulatory investments to be used for analytics, sales and risk + +### Stakeholders +- Compliance + +### Data elements, Assets and Deliverables +As an Input from the client, the following items were used: +- Full regulatory granular scope master & reference data (incl. UBO hierarchies) +- Client portfolio (positions / transactions) + +Assets & Artefacts: +- Private Bank Data Model +- Optimization Algorithms +- Data Quality Business Rules + +The deliverables included: +- E2E Models & Integration Schema +- Library of Business Checks + +### Impact and benefits +The semantic SSoT is now used by other functions across the bank, leveraging regulatory investments for sales support, operations and risk management. + +The use-case implementation resulted in: + + **9%** reduction of risk weighted assets + + **9 FTE (50%)** reduction of regulatory reporting team + +In addition, recurring Cost of Capital savings of over 15m CHF p.a. were achieved. + +### Testimonials +> "We have semantically integrated 220+ different data sources at Switzerland's largest independent Private Bank. The regulatory team was able to deliver better results faster and yet decreased the team size by 30%." +> — Mr. XXX YYY, Title, Company ZZZ.
+ +### Tags / Keywords +#singlesourceoftruth #ssot #bank #privatebank #financialservices #insurance \ No newline at end of file diff --git a/content/en/docs/Solutions/006 Voice-based Trade Compliance.md b/content/en/docs/Solutions/006 Voice-based Trade Compliance.md new file mode 100644 index 00000000..914c1de8 --- /dev/null +++ b/content/en/docs/Solutions/006 Voice-based Trade Compliance.md @@ -0,0 +1,66 @@ +--- +title: Voice-based Trade Compliance +category: use case +industries: [Financial Services] +owner: Tom Debus +tags: [voicetradecompliance, tradecompliance, compliance, bank, communicationscreening, financialservices] +clients: [ ] +--- + +## Voice-based Trade Compliance + +### Executive summary +Convert voice-based advisor-to-client phone conversations into text. Analyze them for possible breaches of regulatory and compliance policies. This multi-step analytical process involves voice-to-text transcription, a compliance ontology, text parsing & natural language understanding. + +### Problem statement +Many if not most advisor-to-client communications still occur via phone. These conversations happen in a black-box environment that is difficult to track and audit. Potential compliance breaches in areas such as insider trading or conflict of interest can only be identified and intercepted at great cost while listening in on only select phone calls. The vast majority of conversations remain unchecked, leaving the organization in the dark and at risk. Often compliance is at odds with sales – one controlling the business, the other pushing the boundaries of acceptable risk. + +### Target market / Industries +The described use case can be efficiently applied in industries where tracking / auditing of voice-based communication is required.
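The multi-step analytical process named in the executive summary (voice-to-text transcription, ontology matching, breach flagging) can be sketched minimally. The transcription step is stubbed with canned text, since a real system would call a speech-to-text service on the call recording; all ontology terms here are hypothetical.

```python
# Minimal sketch: transcription -> ontology matching -> breach flagging.
# The ontology terms and flagged issues below are invented for illustration.
COMPLIANCE_ONTOLOGY = {
    "insider": "possible insider trading",
    "guarantee you": "prohibited performance guarantee",
}

def transcribe(audio_path: str) -> str:
    # Placeholder for a real voice-to-text service call on the recording.
    return "I can guarantee you this stock will double."

def screen_call(audio_path: str) -> list[str]:
    """Parse the transcript and return any flagged compliance issues."""
    text = transcribe(audio_path).lower()
    return [issue for term, issue in COMPLIANCE_ONTOLOGY.items() if term in text]

print(screen_call("call_001.wav"))  # ['prohibited performance guarantee']
```

A production ontology would combine phrase matching with natural language understanding rather than plain substring search.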
+ +### Solution +Leveraging your existing Private Branch eXchange (PBX) phone recording infrastructure & partnering with your choice of voice-to-text transcription service, the solution automatically screens every conversation. The transcribed text files are parsed against the Sales & Compliance Ontology. Using Natural Language Understanding (NLU) the use case identifies which advice and trade decisions occurred during the call and highlights possible compliance breaches. + +Once the predictions become more accurate, a sales-focused “topics-of-interest“ screening can be added. + +The solution included: +- Voice to text transcription +- Word parsing of text-based inputs +- Compliance & Sales Ontology matching +- Identification of possible compliance breaches (and / or sales topics of interest) +- Aggregation of findings, reporting, alerting +- Action recommendation + +##### Example Use Case Agent Cascade + +{{< blocks/screenshot color="white" image="/streamzero/images/solutions/uc_006_2021.png">}} + +### Stakeholders +- Compliance +- Security + +### Data elements, Assets and Deliverables + +As an Input from the client, the following items were used: +- Relationship Manager – Client Conversations (Voice or Text) +- Client portfolios - positions / transactions + +Assets & Artefacts: +- Unstructured Data +- Semantic Harmonization +- Natural language processing +- Personalization + +The deliverables included: +- Sales & Transaction Monitoring Ontology +- Use case specific orchestration flow + +### Impact and benefits +Sales Compliance headcount was reduced by 3 Full Time Employees while screening coverage increased to 90% (before, only spot checks) and resolution quality focuses on sales and compliance. + +### Testimonials +> "The overall sensitivity of advisors to client‘s sentiment and requirements has increased. Also, we improved the understanding of how compliance and sales can work together to achieve client satisfaction." +> — Ms.
Milica L., Chief Risk Officer, Swiss Private Bank + +### Tags / Keywords +#voicetradecompliance #tradecompliance #compliance #bank #communicationscreening #financialservices diff --git a/content/en/docs/Solutions/007 Sensor-based Monitoring of Sensitive Goods.md b/content/en/docs/Solutions/007 Sensor-based Monitoring of Sensitive Goods.md new file mode 100644 index 00000000..54819f39 --- /dev/null +++ b/content/en/docs/Solutions/007 Sensor-based Monitoring of Sensitive Goods.md @@ -0,0 +1,66 @@ +--- +title: Sensor-based Monitoring of Sensitive Goods +category: use case +industries: [Intelligent packaging, Intelligent logistics, Industrial applications, Industry 4.0, Consumer and luxury goods, Home protection] +owner: Tom Debus +tags: [sensorbasedmonitoring, iot, sensitivegoods, intelligentpackaging, intelligentlogistics, industry40, industry, packaging, logistics, homeprotection, consumergoods, luxurygoods] +clients: [ ] +--- + +## Sensor-based Monitoring of Sensitive Goods + +### Executive summary +Sensor-based monitoring of sensitive goods along the transport chain (location, light, temperature, humidity & shocks in one sensor) and integration of these IoT components in a smart and decentralized cloud, including the event-controlled connection to the relevant peripheral systems. + +### Problem statement +One of the biggest problems in supply chain management is the lack of visibility and control once materials have left the site or warehouse. This leads to billions in losses due to missing or damaged products and to business inefficiency.
+
+### Target market / Industries
+The use case can be applied for the following solutions:
+- Intelligent packaging
+- Intelligent logistics
+- Industrial applications – Industry 4.0
+- Consumer and luxury goods
+- Home protection
+
+### Solution
+Small, energy-autonomous intelligent cells (microelectronic parts) are integrated into any object or package so that they remain in contact with it, identify it electronically, provide its location, and sense temperature, pressure, movement, light, and more. The cells are intelligent, able to make basic decisions and store small pieces of information.
+The cells communicate bidirectionally with the software through the global IoT networks of our partners, selecting the most energy-efficient technologies available at their location and building a neuronal backbone of objects in constant communication.
+Bidirectional capabilities allow the cells to transmit sensor data and to receive instructions and updates remotely.
+Cells can also interact with the electronics of the objects they are attached to: they can read and transmit any parameters and provide remote control of them wherever they are.
+All cell data is transmitted to our own cloud applications, a learning brain that uses the data management and AI capabilities of Google Cloud and Microsoft Azure, where IoT data from objects can be combined with any other data in intelligent, self-learning algorithms.
+The location or any other parameter of the objects can be displayed in any browser or smart device.
+The user interfaces can be customized and allow users to interact at any moment: changing the connection frequency, the SMS or email alarm values and their recipients (human or machine), or acting as a remote control for any object.
+All of this can be offered fully pay-per-use, ensuring a very low entry point for the technology. You can turn the cell on and off remotely and only pay cents per day when you use the cell's capabilities, or buy the solution outright.
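The alarm-threshold behaviour described above can be pictured with a small sketch; all field names, units and threshold values here are illustrative assumptions, not the actual cell or cloud API:

```python
# Illustrative sketch only: field names, units and thresholds are assumed
# for the example and are not the actual cell or cloud API.
def check_reading(reading, thresholds):
    """Return alarm messages for every value outside its (low, high) band."""
    alarms = []
    for key, (low, high) in thresholds.items():
        value = reading.get(key)
        if value is not None and not (low <= value <= high):
            alarms.append(f"{key}={value} outside [{low}, {high}]")
    return alarms

# e.g. cold-chain transport: temperature must stay between 2 and 8 degrees C
thresholds = {"temperature_c": (2.0, 8.0), "humidity_pct": (10.0, 60.0)}
reading = {"temperature_c": 9.4, "humidity_pct": 45.0, "shock_g": 0.2}
print(check_reading(reading, thresholds))
# -> ['temperature_c=9.4 outside [2.0, 8.0]']
```

Each alarm message could then be routed to a human or machine recipient, exactly as the configurable SMS/email alarms above describe.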
+
+### Stakeholders
+- Manufacturers
+- Logistic services providers
+- General management
+- Supply chain
+- Digitalization
+- Quality control
+- Risk management
+
+### Data elements, Assets and Deliverables
+As an Input from the client, the following items were used:
+- Container / goods data
+- Transport routing
+- Thresholds / boundary conditions
+- Past Quality Control data for pattern recognition
+
+Assets & Artefacts:
+- Routing optimization model
+- IoT cloud backbone
+
+The deliverables included:
+- Driver dispatcher and client apps
+- Operational routing optimization
+
+### Tags / Keywords
+#sensorbasedmonitoring #iot #sensitivegoods #intelligentpackaging #intelligentlogistics #industry40 #industry #packaging #logistics #homeprotection #consumergoods #luxurygoods
diff --git a/content/en/docs/Solutions/008 Metadata-controlled Data Quality & Data Lineage in Production.md b/content/en/docs/Solutions/008 Metadata-controlled Data Quality & Data Lineage in Production.md
new file mode 100644
index 00000000..eb7ec27c
--- /dev/null
+++ b/content/en/docs/Solutions/008 Metadata-controlled Data Quality & Data Lineage in Production.md
@@ -0,0 +1,57 @@
+---
+title: Metadata-controlled Data Quality & Data Lineage in Production
+category: use case
+industries: [Production, Serial Manufacturing, Retail, Financial Services, Pharma, Medicine, Laboratory, Cross-industry]
+owner: Tom Debus
+tags: [dataquality, dataqualityimprovement, machinelearning, production, manufacturing, serialmanufacturing, financialservices, medicine, laboratory, pharma, crossindustry]
+clients: [ ]
+---
+
+## Metadata-controlled Data Quality & Data Lineage in Production
+
+### Executive summary
+Metadata-controlled data quality & data lineage along the production chain, integrated with a laboratory information system for monitoring and quality documentation of various BoM & formulation variations in the biochemical production of active pharmaceutical ingredients (preliminary production of
active ingredients in transplant medicine).
+
+### Problem statement
+Neither technical nor rule-based approaches can adequately raise data quality without domain expertise.
+Using domain expertise to create the rules is time consuming and often not feasible from a manpower perspective.
+
+### Target market / Industries
+The use case is applicable to any industry dealing with large volumes of data of insufficient quality, e.g.:
+- Serial manufacturing
+- Mass production
+- Retail
+- Financial services
+- Cross-industry applications
+
+### Solution
+The approach is based on few-shot learning: a domain expert manually creates a few dozen examples with real-life data. From these examples the model then learns strategies to identify and correct data quality errors.
+
+##### Example Use Case Agent Cascade
+{{< blocks/screenshot color="white" image="/streamzero/images/solutions/uc_008_2021.jpeg">}}
+
+### Stakeholders
+- Domain experts
+- Product data quality
+- Functional experts
+- Risk managers
+
+### Data elements, Assets and Deliverables
+As an Input from the client, the following items were used:
+- Detailed data sets whose quality is to be improved
+- 20-30 examples of errors and their manual corrections
+- Judgement on automated model performance
+
+Assets & Artefacts:
+- {{< param replacables.brand_name >}} error correction toolbox
+
+The deliverables included:
+- Customized data quality improvement workflow
+
+### Impact and benefits
+The use case implementation makes it possible to address data quality issues efficiently, with highly automated processes. If the data quality management process had remained manual, it would require 5-6 full-time employees dedicated to this task. Over time, the Machine Learning model accumulates the respective knowledge and supports domain expertise with relevant automated data quality improvement proposals.
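The few-shot idea can be illustrated with a deliberately tiny sketch: expert-provided (observed, corrected) pairs become correction rules that are applied to new records. The production model generalizes beyond exact matches; this toy version only replays them:

```python
# Toy illustration only: the production model learns generalizable
# strategies, whereas this sketch just replays exact expert corrections.
def learn_corrections(examples):
    """examples: iterable of (observed_value, corrected_value) pairs."""
    return {observed: corrected for observed, corrected in examples}

def apply_corrections(values, corrections):
    """Replace each value by its expert correction, if one is known."""
    return [corrections.get(value, value) for value in values]

rules = learn_corrections([("GER", "DE"), ("Germny", "DE"), ("Switz.", "CH")])
print(apply_corrections(["DE", "GER", "Switz.", "FR"], rules))
# -> ['DE', 'DE', 'CH', 'FR']
```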
+
+
+### Tags / Keywords
+#dataquality #dataqualityimprovement #machinelearning #production #manufacturing #serialmanufacturing #massproduction #medicine #laboratory #pharma #financialservices #crossindustry
diff --git a/content/en/docs/Solutions/009 Classification of Products.md b/content/en/docs/Solutions/009 Classification of Products.md
new file mode 100644
index 00000000..96357f76
--- /dev/null
+++ b/content/en/docs/Solutions/009 Classification of Products.md
@@ -0,0 +1,62 @@
+---
+title: Classification of products along different regulatory frameworks
+category: use case
+industries: [Financial Service, Manufacturing, Retail]
+owner: Tom Debus
+tags: [ ]
+clients: [Stark Group]
+---
+
+## Classification of products along different regulatory frameworks
+
+### Executive summary
+Multi-dimensional and ontology-based classification of products along different regulatory frameworks,
+and systematic mapping of the company's internal specialist expertise in a knowledge graph that can be used
+by both humans and by algorithms or other systems.
+
+### Problem statement
+Depending on the applicable regulation, products and materials need to be classified for further usage.
+This requires an effective classification approach and also involves the challenge of knowledge management.
+The respective product / material information needs to be extracted from various sources, such as core systems,
+product information systems, etc. This task usually requires a lot of time-consuming manual work drawing on domain knowledge.
+
+### Target market / Industries
+The use case can be efficiently applied in the following industries:
+- Financial services
+- Manufacturing
+- Retail
+
+### Solution
+The solution is based on analyzing the domain knowledge and converting it into an ontology.
+Domain knowledge is used as an input for the Machine Learning (ML) model: an ontology-based annotator component
+analyzes the available data, be it text, unstructured or semi-structured data, and feeds it
+into the ML model to perform the classification. The event-based workflow increases the efficiency and
+stability of the classification process.
+
+### Stakeholders
+- Management
+- Product management
+- Domain experts
+
+### Data elements, Assets and Deliverables
+As an Input from the client, the following items were used:
+- Domain expertise in the ontology
+- Detailed data that needs to be classified
+
+Assets & Artefacts:
+- Ontology-based annotator (DFA)
+- Workflow engine
+
+The deliverables included:
+- Automated classification of data or documents – final workflow
+
+### Impact and benefits
+The classification process, which could not be done manually
+with reasonable manpower and time resources, is now automated.
+
+
+
+### Tags / Keywords
+#classification #classificationofproduct #classificationofmaterial
+#classificationautomation #automation #production #financialservices #retail
\ No newline at end of file
diff --git a/content/en/docs/Solutions/010 Balance Sheet Quality Control.md b/content/en/docs/Solutions/010 Balance Sheet Quality Control.md
new file mode 100644
index 00000000..a0962f5f
--- /dev/null
+++ b/content/en/docs/Solutions/010 Balance Sheet Quality Control.md
@@ -0,0 +1,94 @@
+## Balance Sheet Quality Control
+
+
+
+
+
+### Executive summary
+It is a routine task for banks to assess the balance sheet quality of their corporate clients on an annual basis and to categorize them according to risk ratings. The use case contributes to the optimization of this routine task by automating the balance sheet comparison process and setting up a smart notification mechanism.
+
+
+### Problem statement
+Banks have to assess the balance sheet quality of their corporate clients, analyzing each balance sheet annually and assigning a risk rating. Performing this assessment manually for every client is time consuming, even though many balance sheets change little from one year to the next.
+
+
+### Target market / Industries
+Financial services – banks.
+
+
+### Solution
+The solution automates the risk assessment: balance sheets that are very similar to those of previous years are skipped, while significant changes are routed for manual analysis. A related use case adds a notification hierarchy, where different people need to be notified depending on the event found in the balance sheet.
+
+The solution included:
+
+-
+
+
+### Stakeholders
+
+-
+
+-
+
+
+### Data elements, Assets and Deliverables
+
+As an Input from the client, the following items were used:
+
+-
+
+-
+
+Assets & Artefacts:
+
+-
+
+-
+
+The deliverables included:
+
+-
+
+-
+
+
+### Impact and benefits
+
+The use-case implementation resulted in:
+
+-
+
+-
+
+
+### Testimonials
+> "..."
+> — Mr. XXX YYY, Title, Company ZZZ.
+
+
+### Tags / Keywords
+#
\ No newline at end of file
diff --git a/content/en/docs/Solutions/011 Corporate Clients Quality Assessment.md b/content/en/docs/Solutions/011 Corporate Clients Quality Assessment.md
new file mode 100644
index 00000000..2925df4a
--- /dev/null
+++ b/content/en/docs/Solutions/011 Corporate Clients Quality Assessment.md
@@ -0,0 +1,71 @@
+## Corporate Clients Quality Assessment
+
+
+
+
+
+### Executive summary
+Implemented a fully automated data ingestion and orchestration system for the structural assessment of corporate clients‘ balance sheets. Designed and implemented a two-level approach that includes both existing and publicly available information in a structured or semi-structured format. Relevant changes are extracted and then compared to prior periods, explicit new data deliveries or internal policy baselines and thresholds.
+
+
+### Problem statement
+Banks often face a high staff turnover rate and a general lack of advisory staff, resulting in a lack of client intelligence. Detailed client profiles either do not exist or are outdated and do not reflect the latest situation. As a result it is hardly possible to increase the share of wallet of existing corporate clients.
+
+
+### Target market / Industries
+The target industry for this use-case is Financial Services – banks, insurances, asset management firms.
+
+
+### Solution
+The solution aims to identify which corporate clients offer the potential for a bigger share of wallet. Using publicly available information, such as news and social media, the selling proposition for those clients is outlined. At the next stage those clients are segmented to understand how much effort needs to be put in to increase the share of wallet, and to shortlist the most promising of them.
+This use case can be effectively combined with the Churn Analysis use-case.
+
+
+### Stakeholders
+- Sales management
+- Sales staff / Relationship Management
+- Key Account Management
+
+
+### Data elements, Assets and Deliverables
+
+As an Input from the client, the following items were used:
+- Client base
+- Transaction / product usage history
+
+Assets & Artefacts:
+-
+-
+
+The deliverables included:
+-
+-
+
+
+### Impact and benefits
+
+The use-case implementation resulted in:
+
+-
+
+-
+
+
+### Testimonials
+> "..."
+> — Mr. XXX YYY, Title, Company ZZZ.
+
+
+### Tags / Keywords
+#
\ No newline at end of file
diff --git a/content/en/docs/Solutions/012 Insurance First notice of Loss.md b/content/en/docs/Solutions/012 Insurance First notice of Loss.md
new file mode 100644
index 00000000..a89079aa
--- /dev/null
+++ b/content/en/docs/Solutions/012 Insurance First notice of Loss.md
@@ -0,0 +1,69 @@
+---
+title: First Notice of Loss Automation
+category: use case
+industries: [Motor Vehicle Insurance]
+owner: Tom Debus
+tags: [insurance, FNOL, firstnoticeofloss]
+clients: [Inter Europe]
+---
+
+## Cross Jurisdictional First Notice of Loss Automation
+
+### Executive summary
+The handling of cross-jurisdictional accident resolutions involving more than one country was automated for a pan-European insurance group.
+
+### Problem statement
+The client was bound to an existing proprietary legacy core system which served all operational processes but did not lend itself to agile, digital use cases. Within the cross-jurisdictional context, various third-party core systems of partner network insurers also had to be integrated into the overall flow. In addition to already digital content, file-based and even handwritten forms of the European Accident Standard had to be taken into account. The customer's growth did not allow for continued manual processing.
+
+### Target market / Industries
+Focused on insurance industry segments, but easily configured to work for similar case-management-related processes that involve expert knowledge combined with extensive manual fact checking.
+
+### Solution
+The customer wanted to automate and streamline the handling, and ideally the straight-through processing, of new cases whenever the context allowed for such an option, and to involve the correct stakeholders when a human resolution was called for. An existing data warehouse provided historic resolution data that could be used to train various Machine Learning (ML) models.
In addition, a knowledge graph contained the expertise on how the company wanted to deal with certain constellations in the future.
+
+The solution included:
+- Ingestion of all relevant base data into a use case message bus
+- Automated plausibility check of the base claim (e.g. policy paid, client = driver, counterparty validity)
+- ML model to assess "Fast Track" options (depending on likely cost footprint)
+- Helper ML models to assess vehicle, medical and legal costs
+- Curation model to extend "fast track" rules within the knowledge graph
+
+##### Example Use Case Agent Cascade
+{{< blocks/screenshot color="white" image="/streamzero/images/solutions/uc_012_2021.png">}}
+
+### Stakeholders
+- Head of Operations / Claims Handling
+- Domain Expert for Motor Vehicle Accidents Underwriting
+- Domain Expert from Accounting & Controlling
+- Tech Expert for mobile field agent application
+- Tech Expert for core system
+
+### Data elements, Assets and Deliverables
+As an Input from the client, the following items were used:
+- Core data items on policies, clients, risk vs. claim details
+- Core data from insurance partner network
+- Historic claims & claim resolution data warehouse
+
+Assets & Artefacts:
+
+- Claims knowledge graph & ontology
+- Vehicle, medical and legal cost assessment prediction models
+- Fast track viability assessment model
+- Ontology curation / extension model
+
+The deliverables included:
+- Automated decisioning on human vs.
straight-through-processed case handling
+
+### Impact and benefits
+The use-case implementation resulted in:
+- the client was able to manage +35% annual growth with a lower headcount (-3 FTE)
+- turnaround times of automated cases were reduced by >90%, from 8-10 working days to 1 day
+- turnaround times of manual cases were reduced by 30% due to the elimination of manual triage
+- the initial use case paved the way for additional AI based automation ideas
+
+### Testimonials
+> "We were sceptical about the limits of automation with the rather difficult data quality we initially set out with. The learning loop for both the agents involved as well as the prediction models was a true surprise to me."
+> — Mr. Okatwiusz Ozimski, Inter Europe AG
+
+### Tags / Keywords
+#insurance #firstnoticeofloss #FNOL
diff --git a/content/en/docs/Solutions/013 Intraday Liquidity Management.md b/content/en/docs/Solutions/013 Intraday Liquidity Management.md
new file mode 100644
index 00000000..f8c6266e
--- /dev/null
+++ b/content/en/docs/Solutions/013 Intraday Liquidity Management.md
@@ -0,0 +1,62 @@
+---
+title: Intraday Liquidity Management Optimization
+category: use case
+industries: [Financial Service]
+owner: Tom Debus
+tags: [liquidity, liquiditymanagement, intradayliquiditymanagement, cashmanagement, BCBS, Basel3, financialservices]
+clients: []
+---
+
+## Intraday Liquidity Management Optimization
+
+### Executive summary
+In order to avoid long internal lead times and to cater to stringent time-to-market expectations, an end-to-end analytics design and streaming real-time analytics environment for group-wide BCBS (Basel III) intraday liquidity management was implemented. The bank's predictive liquidity and cash management models were rebuilt from scratch using real-time streams from 13 different SWIFT messaging gateways.
+
+### Problem statement
+All financial institutions need to be on top of their liquidity levels throughout the entire day.
Since every organization usually experiences many cash inflows and outflows during the day, it is difficult to know the current liquidity levels. To be compliant with the regulations, the liquidity levels need to be monitored. Holding too much cash is not commercially viable, while holding too little is too risky. Knowing the current cash levels, the bank can adjust accordingly. The entire cash balancing act is based on a cascade of different events: cash flow events and other cash-related events need to be integrated from various transaction management systems.
+
+### Target market / Industries
+The use case is applicable in all regulated and cash-intensive industries, e.g.:
+- Financial service
+- Treasury departments of large corporations
+
+### Solution
+During the use case implementation, 16 different cash-flow generating order systems were integrated, each with a different schema for how it handles transactions.
+The {{< param replacables.brand_name >}} Data Platform was able to resolve the complexities of the event handling and absorb all the different situations and rules that need to be applied depending on the different states the system can take.
+Data sourcing patterns evolved quickly from single-file batch to data streaming using Kafka and Flink. Global end-user enablement was achieved with a multi-network environment for regional users and both logical and physical data segregation. Irreversible client data protection using the SHA-256 hash algorithm allowed for globally integrated algorithms in spite of highly confidential raw input data.
+We were able to implement dynamic throttling and acceleration of cash flows depending on current market situations and liquidity levels.
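The irreversible SHA-256 protection mentioned above can be sketched in a few lines; the salting scheme shown here is an assumption for illustration, not the bank's actual design:

```python
import hashlib

# Sketch of the irreversible protection described above: identifiers are
# replaced by salted SHA-256 digests before leaving the confidential zone.
# The salt handling shown here is an assumption, not the bank's scheme.
def pseudonymize(client_id, salt):
    return hashlib.sha256((salt + client_id).encode("utf-8")).hexdigest()

token = pseudonymize("CH-12345", "s3cret")
print(len(token))                                   # 64 hex characters
print(token == pseudonymize("CH-12345", "s3cret"))  # deterministic: True
```

Because the digest is deterministic, globally distributed algorithms can still join records on the token, while the raw identifier never leaves the confidential zone.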
+
+The solution included:
+- Adaptor agents to 16 cash-flow generating systems
+- Throttling and acceleration logic
+- Machine Learning (ML) models for liquidity projection
+- Harmonized event architecture
+
+### Stakeholders
+- Group treasury
+- Group risk and compliance
+- CFO
+
+### Data elements, Assets and Deliverables
+As an Input from the client, the following items were used:
+- Cash-flows
+
+Assets & Artefacts:
+- Harmonized transactional event model
+- Throttling and acceleration rule book
+
+The deliverables included:
+- End to end solution for intraday liquidity
+
+### Impact and benefits
+The use-case implementation resulted in:
+- 10 MCHF annual savings on liquidity buffers
+- 23% reduction of operations & treasury staff
+
+### Testimonials
+> “Moving from batch to real-time liquidity monitoring was a substantial task that had countless positive knock-on effects throughout the organization.”
+> — Mr. Juerg Schnyder, Liquidity expert, Global universal bank
+
+### Tags / Keywords
+#liquidity #liquiditymanagement #intradayliquiditymanagement #cashmanagement #BCBS #Basel3 #financialservices
\ No newline at end of file
diff --git a/content/en/docs/Solutions/014 Marketing Data Analysis.md b/content/en/docs/Solutions/014 Marketing Data Analysis.md
new file mode 100644
index 00000000..b3000367
--- /dev/null
+++ b/content/en/docs/Solutions/014 Marketing Data Analysis.md
@@ -0,0 +1,68 @@
+---
+title: Automating Marketing Data Analysis
+category: use case
+industries: [Financial Service]
+owner: Tom Debus
+tags: [marketdataanalysis, bigdata, bigdatainfrastructure, datascience, datascienceinfrastructure, financialservices, financial services, insurance, relationships]
+clients: []
+---
+
+## Marketing Data Analysis
+
+### Executive summary
+Implemented the first service-based and on-demand big data and data science infrastructure for the bank.
Data pipelines are built and maintained leveraging two key infrastructure components: a custom-built aggregation tool and the marketing content & event platform. The aggregation tool builds the data lake for all analytics activities and enables the marketing platform to organically grow customer and campaign projects.
+
+### Problem statement
+Relationship Managers spend too much valuable time researching talking points and themes that fit their different client profiles. A simple product recommender usually cannot grasp the complexity of private banking relationships, and hence its product recommendations are usually without impact.
+
+### Target market / Industries
+- Private banking, wealth management
+- All relationship-intense industries, e.g. insurance
+
+### Solution
+Jointly with the client we developed a private banking marketing ontology (knowledge graph or rule book) that enabled various Machine Learning (ML) models to parse a broad catalogue of unstructured data (financial research, company analysis, newsfeeds) and generate personalized investment themes and talking points.
+
+The solution included:
+- Private banking marketing ontology
+- Thematic aggregator agents
+- Personalized clustering
+
+### Stakeholders
+- Head of marketing and campaigns
+- Market heads
+- Relationship Manager
+- Chief Investment Officer
+
+### Data elements, Assets and Deliverables
+
+As an Input from the client, the following items were used:
+- Access to CRM details
+- Client transaction history
+- Research details
+
+Assets & Artefacts:
+- Financial Product Classification
+- Product Risk Classification
+- Event Lifecycle
+
+The deliverables included:
+- Private banking marketing ontology
+- Thematic aggregator agents
+- Personalized clustering
+- End to end event cascade and workflow integration
+
+### Impact and benefits
+The program achieved a fully transparent "close the loop" on campaigns and increased RoMI by 18%.
Furthermore, this first-mover program established the big data sandbox as a capability offered as a service to the entire bank, and enabled marketing for the first time to close the loop between digital client touchpoints and the events and campaigns run.
+
+The use-case implementation resulted in:
+- +18% increase in RoMI (return on marketing investments)
+- -17% savings on campaign spend
+
+### Testimonials
+> "Using {{< param replacables.brand_name >}} we were able to digest a massive amount of text and extract personalized investment themes, which allows our RMs to increase their face time with clients and surprise them with meaningful content."
+> — Mr. R. Giger, Head of Marketing and Campaigns, Swiss Private Bank
+
+### Tags / Keywords
+#marketdataanalysis #bigdata #bigdatainfrastructure #datascience #datascienceinfrastructure #financialservices #bank
\ No newline at end of file
diff --git a/content/en/docs/Solutions/_index.md b/content/en/docs/Solutions/_index.md
new file mode 100644
index 00000000..72a8aee1
--- /dev/null
+++ b/content/en/docs/Solutions/_index.md
@@ -0,0 +1,14 @@
+---
+title: "Solutions"
+linkTitle: "Solutions"
+tags: [solutions, case studies]
+weight: 107
+categories: ["solutions"]
+description: >-
+  {{< param replacables.brand_name >}} FX is being used across industry verticals such as banking, insurance, logistics and manufacturing, but also in horizontal business applications such as finance and human resources. The growing list of case studies may give you some insights and ideas on how {{< param replacables.brand_name >}} may be put to good use in your organization.
+---
+
+
+
+
diff --git a/content/en/docs/User_Guide/CronJob.md b/content/en/docs/User_Guide/CronJob.md
new file mode 100644
index 00000000..123609fd
--- /dev/null
+++ b/content/en/docs/User_Guide/CronJob.md
@@ -0,0 +1,193 @@
+---
+title: "CronJob"
+linkTitle: "CronJob"
+weight: 204
+description: >-
+  How to use CronJob to schedule regularly recurring actions.
+---
+
+CronJobs are used to schedule regularly recurring actions such as backups, report generation and similar items. Each of these tasks should be configured to recur for an indefinite period into the future on a regular frequency (for example: once a day / week / month). The user can also define the point in time within that interval when the job should start.
+
+## Example:
+
+This example CronJob manifest would execute and trigger an event every minute:
+
+```yaml
+schedule: "*/1 * * * *"
+```
+
+## Cron Schedule Syntax
+
+```
+# ┌───────────── minute (0 - 59)
+# │ ┌───────────── hour (0 - 23)
+# │ │ ┌───────────── day of the month (1 - 31)
+# │ │ │ ┌───────────── month (1 - 12)
+# │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday;
+# │ │ │ │ │                                   7 is also Sunday on some systems)
+# │ │ │ │ │
+# │ │ │ │ │
+# * * * * *
+```
+
+For example, the line below states that the task must be started every Friday at midnight, as well as on the 13th of each month at midnight:
+
+```
+0 0 13 * 5
+```
+
+To generate CronJob schedule expressions, you can also use web tools like [crontab.guru](https://crontab.guru/).
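Each of the five fields can hold lists ("5,17"), ranges ("1-5") and steps ("*/10"). A minimal illustrative parser for a single field (not part of the product, shown only to make the semantics concrete):

```python
# Illustrative helper only (not part of the product): expand a single cron
# field into the set of matching integer values within [lo, hi].
# Supports "*", explicit values, lists ("5,17"), ranges ("1-5") and
# steps ("*/10", "1-30/5").
def expand_field(field, lo, hi):
    values = set()
    for part in field.split(","):
        base, _, step = part.partition("/")
        step = int(step) if step else 1
        if base == "*":
            start, end = lo, hi
        elif "-" in base:
            start, end = map(int, base.split("-"))
        else:
            start = end = int(base)
        values.update(range(start, end + 1, step))
    return values

print(sorted(expand_field("*/10", 0, 59)))  # minutes: [0, 10, 20, 30, 40, 50]
print(sorted(expand_field("5,17", 0, 23)))  # hours: [5, 17]
```

A full expression matches when every field's set contains the corresponding component of the current time (with day-of-month and day-of-week combined by OR when both are restricted, as in the `0 0 13 * 5` example above).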
+
+
+### Useful Cron Patterns
+
+
+
+| Entry | Description | Equivalent to |
+| ---------------------- | ---------------------------------------------------------- | ------------- |
+| @yearly (or @annually) | Run once a year at midnight on 1 January | 0 0 1 1 * |
+| @monthly | Run once a month at midnight on the first day of the month | 0 0 1 * * |
+| @weekly | Run once a week at midnight on Sunday morning | 0 0 * * 0 |
+| @daily (or @midnight) | Run once a day at midnight | 0 0 * * * |
+| @hourly | Run once an hour at the beginning of the hour | 0 * * * * |
+
+
+
+## 20 Useful Crontab Examples
+
+Here is a list of examples for scheduling cron jobs in a Linux system using crontab.
+
+##### 1. Schedule a cron to execute at 2am daily.
+
+This is useful for scheduling a database backup on a daily basis.
+
+```shell
+0 2 * * *
+```
+
+- The asterisk (*) matches all values of a field.
+
+##### 2. Schedule a cron to execute twice a day.
+
+The example below executes at 5 AM and 5 PM daily. You can specify multiple timestamps separated by commas.
+
+```
+0 5,17 * * *
+```
+
+##### 3. Schedule a cron to execute every minute.
+
+Generally you don't need a script to execute every minute, but in some cases you may need to configure it.
+
+```
+* * * * *
+```
+
+##### 4. Schedule a cron to execute every Sunday at 5 PM.
+
+This type of cron is useful for weekly tasks, like log rotation.
+
+```
+0 17 * * sun
+```
+
+##### 5. Schedule a cron to execute every 10 minutes.
+
+If you want to run your script at a 10-minute interval, you can configure it like below. These types of crons are useful for monitoring.
+
+```
+*/10 * * * *
+```
+
+***/10:** means run every 10 minutes. Likewise, to execute every 5 minutes use */5.
+
+##### 6. Schedule a cron to execute in selected months.
+
+Sometimes we need to schedule a task to be executed in selected months only.
The example below will run in January, May and August.
+
+```
+* * * jan,may,aug *
+```
+
+##### 7. Schedule a cron to execute on selected days.
+
+You may need to schedule a task for selected days only. The example below will run each Sunday and Friday at 5 PM.
+
+```
+0 17 * * sun,fri
+```
+
+##### 8. Schedule a cron to execute on the first Sunday of every month.
+
+Scheduling a script to execute on the first Sunday only is not possible with the time parameters alone, but we can add a condition in the command field to do it.
+
+```
+0 2 * * sun [ $(date +%d) -le 07 ] && /script/script.sh
+```
+
+##### 9. Schedule a cron to execute every four hours.
+
+If you want to run a script at a 4-hour interval, it can be configured like below.
+
+```shell
+0 */4 * * *
+```
+
+##### 10. Schedule a cron to execute twice on every Sunday and Monday.
+
+To schedule a task to execute twice on Sunday and Monday only, use the following settings.
+
+```shell
+0 4,17 * * sun,mon
+```
+
+##### 11. Schedule a cron to execute every 30 seconds.
+
+Scheduling a task to execute every 30 seconds is not possible with the time parameters, but it can be done by scheduling the same cron twice, as below.
+
+```shell
+* * * * * /scripts/script.sh
+* * * * * sleep 30; /scripts/script.sh
+```
+
+##### 13. Schedule tasks to execute yearly ( @yearly ).
+
+The @yearly timestamp is equivalent to “**0 0 1 1 \***“. It executes a task in the first minute of every year; it may be useful for sending new year greetings 🙂
+
+```shell
+@yearly /scripts/script.sh
+```
+
+##### 14. Schedule tasks to execute monthly ( @monthly ).
+
+The @monthly timestamp is equivalent to “**0 0 1 \* \***“. It executes a task in the first minute of the month. It may be useful for monthly tasks like paying bills and invoicing customers.
+
+```shell
+@monthly /scripts/script.sh
+```
+
+##### 15. Schedule tasks to execute weekly ( @weekly ).
+
+The @weekly timestamp is equivalent to “**0 0 \* \* 0**“.
It will execute a task in the first minute of the week. It may be useful for weekly tasks like system cleanup.
+
+```shell
+@weekly /bin/script.sh
+```
+
+##### 16. Schedule tasks to execute daily ( @daily ).
+
+The @daily timestamp is equivalent to “**0 0 \* \* \***“. It executes a task in the first minute of every day; it may be useful for daily tasks.
+
+```shell
+@daily
+```
+
+##### 17. Schedule tasks to execute hourly ( @hourly ).
+
+The @hourly timestamp is equivalent to “**0 \* \* \* \***“. It executes a task in the first minute of every hour; it may be useful for hourly tasks.
+
+```shell
+@hourly
+```
+
diff --git a/content/en/docs/User_Guide/Executions_Packages.md b/content/en/docs/User_Guide/Executions_Packages.md
new file mode 100644
index 00000000..8631add3
--- /dev/null
+++ b/content/en/docs/User_Guide/Executions_Packages.md
@@ -0,0 +1,144 @@
+---
+title: "Executions - Packages"
+linkTitle: "Executions - Packages"
+weight: 206
+description: >-
+  How to use the Executions/Packages Framework for script automation and package (execution) triggering.
+---
+
+Executions/Packages is an event-oriented framework that allows enterprise organizations to automate script processing, which can be triggered:
+
+- Manually: by clicking the ‘Run’ button on the {{< param replacables.brand_name >}} FX Management Server.
+- On schedule: as a cron job, where the cron expression is added in the UI.
+- On event: where a package is configured to be triggered by the FX Router when a specific type of event is observed on the platform.
+
+It allows users to deploy their locally tested scripts without DevOps-specific changes or the need to learn a complex DSL (domain-specific language). In tandem with Git-integrated source code management, FX allows distributed and fragmented tech teams to easily deploy and test new versions of code in an agile way, with changes being applied immediately.
+
+Continuous Change Integration / Change Deployment becomes a component-based, building-block-driven approach, where packages can be configured and parametrised. All scripts and their parameters, like secrets and environment variables, form packages, which makes them reusable for similar jobs or event chains. Event-based package triggering allows users to run multiple packages in parallel as a reaction to the same event.
+
+# Executions - Packages
+
+The primary entities for "Executions" are packages, which are composed of scripts that are executed in a predefined order.
+
+## Executions -> Packages
+
+This use case defines how to create and run a new package.
+
+1. Click on *Executions* on the left side of the dashboard menu to open the drop-down
+2. Click on *Packages*
+3. Click on *+Add* to create a package
+
+{{< blocks/screenshot color="white" image="/streamzero/images/user_guide/executions_packages_add_roboto.png">}}
+
+## Create Package
+
+| **Field name** | **Steps & Description** |
+| :---------------------------- | ------------------------------------------------------------ |
+| 1. Name | 1. Name the package |
+| 2. Description | 2. Describe the package |
+| 3. Project | 3. Select the project to which the package will be bound |
+| 4. Tags | 4. Add Tags of choice manually or select from predefined tags |
+| 5. Schedule | 5. Schedule a cron job -> cron-like schedule definition. NOTE: day of week starts on Monday (0 - Monday, 6 - Sunday); example: "20 * * * *" -> **The whole definition of [Cron Jobs](/docs/user-guide/cronjob/ "CronJob") can be found in the next sub-category of this UserGuide** |
+| 6. Trigger Event Type | 6. Select Value -> select the event type to trigger the execution of the package -> please visit the sub-category [Events](/docs/user-guide/events/ "Events") to get a better understanding of how to set event triggers. |
+| 7. Allow Manual Triggering | 7. Checkbox -> click to allow manual triggering of the package |
+| 8. Active | 8.
Checkbox -> click to set the package to active |
+| 9. File Upload (choose file) | 9. Click on Choose file (Optional) to upload a script -> upload a JSON "config.json" script to configure the package |
+| 10. File Upload (choose file) | 10. Click on Choose file (Optional) to upload a script -> upload a Python "test_scr.py" script to pull the configuration from the config file and print all items |
+| 11. Save | 11. Click Save to save the package |
+| *Supported File upload Types* | 4 different file types are supported:
 **1. ".py file"** -> A PY file is a program file or script written in Python, an interpreted, object-oriented programming language.
 **2. ".json file"** -> A JSON file stores simple data structures and objects in JavaScript Object Notation (JSON) format, a standard data interchange format.
 **3. ".sql file"** -> A file with the .sql extension is a Structured Query Language (SQL) file that contains code to work with relational databases.
 **4. ".csv file"** -> A CSV (comma-separated values) file is a text file with a specific format that allows data to be saved in a table-structured format. |
+
+{{< blocks/screenshot color="white" image="/streamzero/images/user_guide/create_package_with_configs_roboto.png">}}
+
+##### config.json script
+
+The `config.json` file contains key/value configuration pairs that can be accessed in scripts at execution time.
+
+```json
+{"somekey":"some value 2"}
+```
+
+##### test_scr.py script
+
+This is an example script that shows how the configuration from the `config.json` file can be accessed from a script. The `package_name` is passed to the script as an argument and can then be used to fetch the configuration using `ApplicationConfigurator` from the `ferris_cli` Python package.
+
+```python
+import sys, json
+from ferris_cli.v2 import ApplicationConfigurator
+
+fa = json.loads(sys.argv[1])
+
+package_name = fa['package_name']
+config = ApplicationConfigurator.get(package_name)
+
+for k, v in config.items():
+    print(f"{k} -> {v}")
+```
+
+## Check Created Package
+
+The created package should be triggered at minute 20 of every hour, but it can also be run manually.
+
+- Click on the magnifying glass icon to open the package's details page
+
+{{< blocks/screenshot color="white" image="/streamzero/images/user_guide/click_loupe_package_roboto.png">}}
+
+1. Check the details page
+2. Click on "Show Trigger Event"
+
+{{< blocks/screenshot color="white" image="/streamzero/images/user_guide/package_details_show_trigger_event_roboto.png">}}
+
+1. Check the triggered event details
+2.
Close
+
+{{< blocks/screenshot color="white" image="/streamzero/images/user_guide/triggered_event_details_roboto.png">}}
+
+## Package Executions / Runs
+
+- Click on the "Run" button at the bottom of the page to run the package manually
+
+{{< blocks/screenshot color="white" image="/streamzero/images/user_guide/run_button_manual_package.png">}}
+
+It will automatically transfer you to the "List Package Executions" tab.
+
+1. Check the runs/package executions to see whether your manually triggered execution was processed
+2. Click on the magnifying glass icon of your latest manually triggered run to open the details page of the execution
+
+{{< blocks/screenshot color="white" image="/streamzero/images/user_guide/manual_run_check_loupe_details_roboto.png">}}
+
+1. Check the details "Show Package Execution" of the run/execution
+2. Click on the "List Steps" tab to see the steps of the execution
+
+{{< blocks/screenshot color="white" image="/streamzero/images/user_guide/exection_manual_run_details_roboto.png">}}
+
+1. Check the steps of the run and their status (completed; pending; unprocessed; failed)
+2. Click on "Show Results" to verify the script for failed executions
+
+{{< blocks/screenshot color="white" image="/streamzero/images/user_guide/manual_run_list_steps_roboto.png">}}
+
+{{< blocks/screenshot color="white" image="/streamzero/images/user_guide/script_manual_run_execution.png">}}
+
+- Close the window
+
+**Note that currently only Python and SQL handlers are available; files of other types will remain unprocessed.**
+
+## Save a Run/Execution
+
+1. Go back to the "List Package Executions" tab
+2. Click on the edit icon to make the run/execution editable
+
+{{< blocks/screenshot color="white" image="/streamzero/images/user_guide/list_package_executions_edit_manual_run_roboto.png">}}
+
+1. Name the execution/run
+2. Describe the execution/run
+3. Click the "Saved" check box
+4. Save
+
+{{< blocks/screenshot color="white" image="/streamzero/images/user_guide/save_execution_run_roboto.png">}}
+
+1.
Click on *Executions* to open the drop-down
+2. Click on *Saved Executions* to check the saved run
+
+{{< blocks/screenshot color="white" image="/streamzero/images/user_guide/check_saved_run_roboto.png">}}
+
+**In the next section, "UI Generator", the importance of the saved run will be showcased**.
\ No newline at end of file
diff --git a/content/en/docs/User_Guide/Logging_and_Monitoring.md b/content/en/docs/User_Guide/Logging_and_Monitoring.md
new file mode 100644
index 00000000..049f75f5
--- /dev/null
+++ b/content/en/docs/User_Guide/Logging_and_Monitoring.md
@@ -0,0 +1,45 @@
+---
+title: "Logging and Monitoring"
+linkTitle: "Logging and Monitoring"
+tags: [quickstart, log, monitor]
+categories: ["Knowledge Base"]
+weight: 208
+description: >-
+  Logging and monitoring of FX operational data and events.
+---
+
+{{< param replacables.brand_name >}} FX aggregates all operational data into Elasticsearch. Most operational data and events are transported through Kafka, from which they are placed in Elasticsearch by Elasticsearch Sink containers.
+
+The following are the key data and the matching Elasticsearch indexes.
+
+
+## Logs
+
+Contains logs from all applications. The Elasticsearch index is XYZ.
+
+
+## Events
+
+All events that are transported through the ferris.events topic are loaded into an Elasticsearch index.
+
+
+## Checking Logs
+
+{{< param replacables.brand_name >}} logs are in Logstash format. The logs can be aggregated from the application by using the ferris_cli library.
+
+The following is a sample log entry with extended descriptions below.
+
+Logs are identified by the app_name attribute, which gives you an indication of the application from which a log entry was generated.
+
+To filter application logs use the following
+
+App_name:
+
+
+## Checking Events
+
+Events are in the form of CloudEvents. The data section of an event is schema-less, i.e. the data provided in the attributes may vary from event type to event type.
If you require custom extractions for specific event types, the best approach is to tap into the
+
+
+## Event Name Spaces
+
diff --git a/content/en/docs/User_Guide/_index.md b/content/en/docs/User_Guide/_index.md
new file mode 100644
index 00000000..5ec2ea40
--- /dev/null
+++ b/content/en/docs/User_Guide/_index.md
@@ -0,0 +1,120 @@
+---
+title: "User Guide"
+linkTitle: "User Guide"
+tags: [quickstart, user guide]
+categories: ["Knowledge Base"]
+weight: 106
+description: >-
+  This section documents the Ferris Management User Interface (UI) in detail, so that you can quickly understand how Ferris is used effectively. Practical examples will further help in obtaining hands-on experience.
+---
+
+## The Ferris Management UI
+
+The {{< param replacables.brand_name >}} Management UI is a lightweight front-end through which all Ferris components are managed and monitored. The Management UI is intentionally kept simple, with a structure that makes navigation fast and effective.
+
+Extending the user interface is easily possible with the built-in {{< param replacables.brand_name >}} Application Builder (FAB). This can become very useful for use cases demanding user input.
+
+## Purpose
+
+The primary purpose of the {{< param replacables.brand_name >}} Management UI is for development teams to set up projects and services and orchestrate them into fully fledged applications.
+
+Through the user interface, teams are enabled to perform the following tasks:
+
+- Manage Projects
+- Manage and orchestrate Services
+- Schedule, trigger and run Jobs
+- Integrate with Git Repositories
+- Manage Security
+- Log and Monitor executions and jobs
+
+## Users and Roles
+
+The {{< param replacables.brand_name >}} Management UI serves a number of different audiences and roles, namely:
+
+- Development and engineering teams, developing applications, use cases and software projects
+- DevOps and GitOps teams, responsible for maintaining, extending, deploying and operating applications
+- Security Leads, maintaining users and privileges
+- Admins, such as Project Leads maintaining projects
+
+## Ferris Components
+
+The {{< param replacables.brand_name >}} Platform follows the principles of **Event based Services and Integrations Architecture**. As a result, Ferris observes a strict yet easy-to-understand paradigm of **Microservices**-based engineering.
+
+In the following section, the Ferris components used to establish this Microservices and Event based Services and Integrations Architecture are explained in more detail:
+
+### Projects
+
+Projects are a purely logical concept within Ferris, serving to separate services and entire applications, including their development teams, from one another.
+
+Within a project, teams can connect to their Git Repos and integrate code snippets, microservices and APIs. This real-time integration greatly reduces any deployment overhead and thus makes immediate code evaluation possible.
+
+Secrets may also be set up on a project basis, effectively enabling project-level integrations and security.
+
+### Services
+
+Services are one of the core elements of the {{< param replacables.brand_name >}} Platform. A Service represents one or more code snippets, microservices or APIs.
It can be augmented by configuration and manifest files, and together each Service represents a fully fledged process or micro application.
+
+{{< param replacables.brand_name >}} Services are extremely capable and powerful objects, since they are reusable anywhere and can be triggered by any type of Event, be it:
+
+- Trigger Event, _by any internal or external cloud-based event_
+- Cron Schedule, _on a preset schedule_
+- Manually, _from within a Service_
+
+### Events
+
+Events are the triggers kicking off a Service. As with Services, Events are kept flexible and can take any form, such as a manual file upload on some company-internal system, a periodic file ingestion process on a data platform, or high-throughput monitoring of trading executions.
+
+Events may happen anywhere in the realm of global cloud services or internally on legacy in-house systems. Very often, events happen on a wide variety of such systems and services.
+
+The nature of Events is that each Event a) listens to something particular happening, and then b) triggers one or more Services.
+
+Events and Services enter a symbiotic relationship: Events trigger Services, and in turn a completed Service may trigger one or more Events. Thus the relationships between Services and Events can be:
+
+- One to one (1 : 1)
+- One to many (1 : n)
+- Many to many (n : n)
+
+### Jobs
+
+Jobs are the actual executions of Services and Events. As mentioned earlier, Jobs may be triggered either manually, on schedule, or by a trigger event.
+
+{{< param replacables.brand_name >}} Jobs are powerful because they are universally executable by Developers and DevOps alike:
+
+- Developers: set up and execute Jobs to test Services
+- DevOps: integrate Services with (Kafka) event triggers or schedule recurring Jobs through Cron
+
+### Taxonomies
+
+Similar to Projects, Taxonomies or Tags are a logical concept to help facilitate the management of the diverse {{< param replacables.brand_name >}} elements.
+
+One difference from Projects, though, is that Tags may be used across Projects.
+
+### Git
+
+As one of the **first enablers of GitOps**, {{< param replacables.brand_name >}} provides a secure integration with Git Repos, enabling a direct embedding of code in {{< param replacables.brand_name >}} Services.
+
+As a result, real-time testing as well as no-code deployment of new Services and APIs is made possible with one simple integration.
+
+> _Note: {{< param replacables.brand_name >}} supports integrations with GitHub, GitLab and Bitbucket_
+
+### File Uploads and Object Storage
+
+To facilitate development and testing, but also to be embeddable in end-user applications, {{< param replacables.brand_name >}} provides an S3-based file upload mechanism.
+
+### Secrets
+
+The {{< param replacables.brand_name >}} Platform makes extensive use of Secrets. Secrets may be applied on three levels:
+
+- Services, to be used on a particular Service only
+- Projects, to be used within a project realm, and by the project members only
+- Platform, may be used by all projects, services and users of the platform
+
+Secrets are stored within the {{< param replacables.brand_name >}} Secrets Vault.
+
+### Security
+
+{{< param replacables.brand_name >}} Users and Roles are managed in the Security section. Users and Roles are managed by one or more designated Security Leads.
+
+Roles and Permissions are freely configurable to specific organization, project or application needs.
+
+{{< param replacables.brand_name >}} integrates with any external Identity and Access Management (IAM) system via SSO, Auth0 or SAML.
diff --git a/content/en/docs/User_Guide/events.md b/content/en/docs/User_Guide/events.md
new file mode 100644
index 00000000..91c5f3b5
--- /dev/null
+++ b/content/en/docs/User_Guide/events.md
@@ -0,0 +1,108 @@
+---
+title: "Events"
+linkTitle: "Events"
+weight: 205
+description: >-
+  How to configure a package to be triggered by the FX Router when a specific type of event is observed on the platform.
+---
+
+FX is an event-driven platform, which means that each action generating an event can be reused to trigger further executions. Within an executing script, an event can also be generated and sent as a message. Each event is defined at least by its source, type and payload (data). The event message format follows the CloudEvents standard. A list of all event types is maintained, so the user can bind a package execution to a certain event type, which means that each time such an event is received, the package execution will be triggered.
+
+# Events
+
+Events are messages passed through the platform which are generated by Services.
+
+Events are in the form of JSON-formatted messages which adhere to the CloudEvents format. They carry a Header, which indicates the event type, and a Payload (or Data section), which contains information about the event.
+
+For a more detailed understanding of how Events are generated, please refer to the [Architecture](/docs/overview/architecture-overview/ "Architecture Overview") subcategory in the *Overview* category.
+
+## Events
+
+This use case defines how to configure a package to be triggered by the FX Router when a specific type of event is observed on the platform.
+
+1. Click on *Events* on the left side of the dashboard menu to open the drop-down
+2. Click on *Event Types*
+3.
Check the predefined *Event Types*
+   - ferris.apps.modules.approvals.step_approval_completed
+   - ferris.apps.modules.minio.file_uploaded
+
+Events can be created within scripts during package execution by sending a message to the **Kafka Topic** using the `ferris_cli` Python package. For example, a package can be bound to a file_upload event that is triggered every time a file gets uploaded to **MinIO** using the FX file storage module. New event types will be registered as they are sent to the **Kafka Topic** using the `ferris_cli` Python package.
+
+Further details regarding `ferris_cli` can be found in the subcategory [Development Lifecycle](/docs/developerguide/development-lifecycle/ "development-lifecycle") in the *Developer Guide*.
+
+{{< blocks/screenshot color="white" image="/streamzero/images/user_guide/events_event_types.png">}}
+
+## Executions - Packages -> file upload trigger event
+
+In this use case an existing package will be edited to define the file upload event type.
+
+1. Click on *Executions* on the left side of the dashboard menu to open the drop-down
+2. Click on *Packages*
+3. Click on the edit record button to edit the existing package *Test Package with Scripts*
+
+{{< blocks/screenshot color="white" image="/streamzero/images/user_guide/edit_package_event.png">}}
+
+1. Delete the *CronJob Schedule* to allow a *Trigger Event Type*
+2. Select the *Value* of the event type (ferris.apps.modules.minio.file_uploaded)
+3. Save the edited package.
+
+{{< blocks/screenshot color="white" image="/streamzero/images/user_guide/save_edited_package_event.png">}}
+
+## File Storage
+
+To finalize the process, a file needs to be uploaded to a MinIO bucket (file storage).
+
+1. Click on *File Storage* on the left side of the dashboard menu to open the drop-down
+2. Click on *List Files*
+3. Click on *+Add* to upload a file to the bucket
+
+{{< blocks/screenshot color="white" image="/streamzero/images/user_guide/list_files_event.png">}}
+
+1. Choose a file to upload
+2.
Choose the *File Type* (CSV Table; Plain Text; JSON)
+3. Select the *Bucket Name*
+4. Click on *Save* to save the file
+
+{{< blocks/screenshot color="white" image="/streamzero/images/user_guide/upload_file_event.png">}}
+
+To verify whether the package execution has been triggered, go back to the initial, edited package.
+
+1. Click on *Executions* on the left side of the dashboard menu to open the drop-down
+2. Click on *Packages*
+3. Click on the magnifying glass to open the details page of the package *Test Package with Scripts*
+
+{{< blocks/screenshot color="white" image="/streamzero/images/user_guide/package_details_event.png">}}
+
+It will automatically open the *List Package Executions* tab.
+
+1. Check the last Event, date and time to verify that it corresponds to the time the file was uploaded
+2. Click on the magnifying glass to open the details page of the triggered execution
+
+{{< blocks/screenshot color="white" image="/streamzero/images/user_guide/list_triggered_event.png">}}
+
+- Check the details page of the event-triggered run
+
+{{< blocks/screenshot color="white" image="/streamzero/images/user_guide/details_page_event.png">}}
+
+## Workflow -> approval completed trigger event
+
+To finalize the second trigger event (ferris.apps.modules.approvals.step_approval_completed), an existing Workflow will be used to trigger a Case Management approval that will need to be completed.
+
+1. Click on *Workflows* on the left side of the dashboard menu to open the drop-down
+2. Click on *List Workflows*
+3. Click on the magnifying glass to show the details page of the workflow
+
+Note that even before taking a closer look at the Workflow details, the *Entrypoint Event* is displayed -> ferris.apps.modules.minio.file_uploaded
+
+{{< blocks/screenshot color="white" image="/streamzero/images/user_guide/workflow_second_trigger_event.png">}}
+
+Check the details in the JSON snippet to understand which event types will trigger the second event type.
The first event type shown in the JSON snippet is ferris.apps.modules.minio.file_uploaded -> which means that a file will need to be uploaded for the event to be triggered. The second event type shown in the JSON snippet is ferris.apps.modules.approvals.step_approval_completed -> meaning the uploaded file will need to be approved in the *Case Management* module before the desired event gets triggered.
+
+{{< blocks/screenshot color="white" image="/streamzero/images/user_guide/workflow_trigger_event_types.png">}}
+
+## Case Management -> approval completed trigger event
+
+1. Upload a file to a bucket (the process of uploading a file was described in detail at the top of this page)
+2. Click on *Case Management* on the left side of the dashboard menu to open the drop-down
+3. Click on *Approvals*
+
diff --git a/content/en/docs/User_Guide/landing_page.md b/content/en/docs/User_Guide/landing_page.md
new file mode 100644
index 00000000..99b670b7
--- /dev/null
+++ b/content/en/docs/User_Guide/landing_page.md
@@ -0,0 +1,55 @@
+---
+title: "Landing Page (Dashboard)"
+linkTitle: "Landing page (Dashboard)"
+weight: 201
+description: >-
+  Overview of the {{< param replacables.brand_name >}} FX Dashboard.
+---
+
+The {{< param replacables.brand_name >}} FX Landing Page provides insights and analytics around typical platform-related metrics, mostly related to DevOps and DataOps, and detailed event handling. It can be fine-tuned and tailored to customer-specific needs.
+
+{{< blocks/screenshot color="white" image="/streamzero/images/user_guide/dashboard_landing_page.png">}}
+
+In this specific use case the insights and analytics of the {{< param replacables.brand_name >}} FX Platform are highlighted as follows:
+
+- In the first row, the last 18 executions and the last 18 executions with failed state
+
+  - the last 18 executions showcase the following details:
+
+    - Package (name)
+    - Status
+    - Execution time
+    - Finished
+
+  - the last 18 executions with failed state showcase the following details:
+
+    - Package (name)
+    - Status failed
+    - Triggered time
+
+  This allows users of the platform to verify why a triggered package failed to execute.
+
+- In the second row, the execution statuses per day (last 7 days) and the executions by status (last 7 days)
+
+  - Completed
+  - Failed
+  - Pending
+  - In_progress
+
+- In the third row, the executions by trigger type per day (last 7 days) and the executions by trigger type (last 7 days)
+
+  - triggered
+  - scheduled
+  - manual
+
+- In the 4th row, the average execution time per day (last 7 days) and the most recently updated packages
+
+  - the details of the most recently updated packages are divided as follows:
+    - Package
+    - Project
+    - Updated on (date and time)
+
+- In the 5th row, the most frequently executed packages in the last 7 days, with the following details:
+
+  - Package (name)
+  - Number of executions
\ No newline at end of file
diff --git a/content/en/docs/User_Guide/login_register.md b/content/en/docs/User_Guide/login_register.md
new file mode 100644
index 00000000..ba1ca8a5
--- /dev/null
+++ b/content/en/docs/User_Guide/login_register.md
@@ -0,0 +1,22 @@
+---
+title: "Login and Registration"
+linkTitle: "Login and Registration"
+weight: 201
+description: >-
+  Logging into Ferris, registering a new user, or performing self-service actions.
+---
+
+Connecting to {{< param replacables.brand_name >}} is done through the Home URL.
In the case of the public demo instance, it is [home.ferris.ai](https://home.ferris.ai). However, in most cases, it will be a URL that is unique to your own organization and domain.
+
+### Identity and Access Management
+
+The {{< param replacables.brand_name >}} Identity and Access Management backbone is managed by Keycloak, a powerful open-source IAM service. Depending on the configuration, users may be authenticated via:
+
+- Local provisioning
+- SSO, SAML, Auth0
+
+Keycloak also supports self-service capabilities such as:
+
+- User Registration
+- Password Reset
+
diff --git a/content/en/docs/User_Guide/project_creation_and_users_within_project.md b/content/en/docs/User_Guide/project_creation_and_users_within_project.md
new file mode 100644
index 00000000..4301ca08
--- /dev/null
+++ b/content/en/docs/User_Guide/project_creation_and_users_within_project.md
@@ -0,0 +1,157 @@
+---
+title: "Projects"
+linkTitle: "Projects"
+weight: 202
+description: >
+  Projects are used to define logical teams and separate definitions and access of like services. Projects control member access to services and executions as well as to Git Repos and Secrets.
+---
+
+In the following, we walk you through Project setup and maintenance:
+
+1. Click on *Projects* in the menu on the left side to open the drop-down, then click on *List Projects*
+2. Click on *"+Add"*
+
+{{< blocks/screenshot color="white" image="/streamzero/images/user_guide/list_projects_add_roboto.png">}}
+
+1. Name the new project
+2. Save
+
+{{< blocks/screenshot color="white" image="/streamzero/images/user_guide/create_project_roboto.png">}}
+
+{{< blocks/screenshot color="white" image="/streamzero/images/user_guide/list_projects_created_roboto.png">}}
+
+{{< param replacables.brand_name >}} Projects form the overarching organizational bracket for different types of objects. All users, packages, scripts, parameters, secrets and other elements are organized into projects to ease enterprise data management.
The default owner for new projects is the Platform Admin (PA).
+
+Once created, a project consists of the following components, each represented by a separate tab:
+
+- Details
+- Charts
+- Project Users
+- Services
+- Git Repositories
+- Secrets
+
+## Details
+This tab acts as an overview page for the project.
+
+A number of predefined widgets show the status of the project and its latest executions.
+
+By clicking the _edit_ button, the project name may be changed.
+
+
+
+## Charts
+Charts are useful graphical widgets that summarize the history and health of any particular service or execution.
+
+New charts can be added to a project by any user.
+
+Chart creation is documented in the User Guide > _Charts_.
+
+## Project Users
+The creator of a new Project is automatically assigned the _Project Owner_ role. There are multiple roles available within a project. These are defined as follows:
+
+- **Project Owner** - has all permissions on the project and related components (packages, users), including the deletion of the project, its repos, services, secrets and users. By default this is the user that created the project. However, there can be multiple Project Owners assigned to one project.
+- **Project Admin** - has all the permissions of an Owner except deletion
+- **Project User** - has only list / view permissions
+
+> Note that users with an assigned _Administrator_ role (see the _Security_ section) may see all projects, even those that they are not a member of. Standard users do not see any projects and packages they are not assigned to.
+
+All rights of a project role are translated to package level. A user with a _Project User_ role will therefore not be able to edit packages of that project, but only to list and view them and run a (manual) execution.
+
+## Services
+This tab contains all _Services_ that belong to the project.
+
+A Service may belong to only one Project. However, multiple Services may belong to one Project.
+
+## Git Repositories
+One of the {{< param replacables.brand_name >}} core strengths is its direct, real-time integration with Git. By connecting a Git Repository with {{< param replacables.brand_name >}}, any code snippet may be executed as soon as it is checked into the repo.
+
+Integrations may be done on a branch and environment basis. For example: the Git _Main_ branch may be mapped to the {{< param replacables.brand_name >}} _Production_ environment, while the Git _Dev_ branch is mapped to {{< param replacables.brand_name >}} _Dev / Test_.
+
+Connecting and integrating Git Repos is fully secured and transparent, using _Key Value Pairs_ stored in Git and {{< param replacables.brand_name >}}.
+
+## Secrets
+Project-level _Secrets_ may include API keys, digital certificates or passwords. By definition, these are only available to _Services_ and _Users_ within the project.
+
+
+
diff --git a/content/en/docs/User_Guide/taxonomy_tagging.md b/content/en/docs/User_Guide/taxonomy_tagging.md
new file mode 100644
index 00000000..8034e862
--- /dev/null
+++ b/content/en/docs/User_Guide/taxonomy_tagging.md
@@ -0,0 +1,67 @@
+---
+title: "Taxonomy/Tagging"
+linkTitle: "Taxonomy/Tagging"
+weight: 203
+description: >
+  How to add Tags and the importance of Taxonomy.
+---
+
+Taxonomies or Tags provide the ability to organize and structure types and classes of objects and their correlations within executions/packages, events (event types) and workflows across any given application, use case or project. Tags are searchable and make it easy to group and relate objects across different components and lifecycle stages.
+
+As a generic base module, "taggability" can easily be included in any model, use case or application by the developers/users.
+
+**Note: As of the current release the Taxonomy is universal across all projects and use cases and cannot be segregated along different functional domains.
It is thus essential to create a unified naming convention to be shared among the different projects & user groups.**
+
+# Taxonomies / Tags
+
+1. Click on *Taxonomies* in the left menu to open the drop-down, then on *Tags*
+2. Click *+Add* to create a tag
+
+{{< blocks/screenshot color="white" image="/streamzero/images/user_guide/taxonomies_tags_add_roboto.png">}}
+
+1. Name the tag
+2. Save
+
+{{< blocks/screenshot color="white" image="/streamzero/images/user_guide/create_tag_save_roboto.png">}}
+
+- Check the created tag(s)
+
+1. Click on the magnifying glass to open the details (show tag) page
+2. This will automatically transfer you to the tag details page
+3. Click on *List Packages* to see in which packages the same tag is used
+4. Click on *List Workflows* to see in which workflows the same tag is used (in this example no workflow is associated with the tag just created)
+5. Click on *Event Types* to see in which event types the same tag is used (in this example no event type is associated with the tag just created)
+6. Click on the Edit icon (List tags page) to edit/rename a tag
+
+{{< blocks/screenshot color="white" image="/streamzero/images/user_guide/tag_list_click_loupe_details_roboto.png">}}
+
+{{< blocks/screenshot color="white" image="/streamzero/images/user_guide/show_tag_details_roboto.png">}}
+
+{{< blocks/screenshot color="white" image="/streamzero/images/user_guide/list_tag_packages_roboto.png">}}
+
+{{< blocks/screenshot color="white" image="/streamzero/images/user_guide/list_workflows_tag_roboto.png">}}
+
+{{< blocks/screenshot color="white" image="/streamzero/images/user_guide/list_event_types_tag_roboto.png">}}
+
+{{< blocks/screenshot color="white" image="/streamzero/images/user_guide/list_tags_edit_roboto.png">}}
+
+## Search Tag
+
+1. Click *Search* at the top of the *List Tags* / *Details Page*
+2. Click *Add Filter* to choose a filter (currently only the "Name" filter is supported)
+3.
From the dropdown list, choose how the tag name should be matched:
+
+- Starts with
+- Ends with
+- Contains
+- Equal to
+- Etc.
+
+4. Enter the tag name in the "Name" field
+5. Hit the Search button
+
+{{< blocks/screenshot color="white" image="/streamzero/images/user_guide/search_tag_filter_roboto.png">}}
+
+- Check search results
+
+{{< blocks/screenshot color="white" image="/streamzero/images/user_guide/search_results_new.png">}}
\ No newline at end of file
diff --git a/content/en/docs/_index.md b/content/en/docs/_index.md
new file mode 100755
index 00000000..dc991a0c
--- /dev/null
+++ b/content/en/docs/_index.md
@@ -0,0 +1,14 @@
+
+---
+title: "DOCUMENTATION"
+linkTitle: "DOCUMENTATION"
+weight: 0
+menu:
+  main:
+    weight: 0
+---
+
+
+
+
+
diff --git a/content/en/docs/fx/Developer_Guide/Creating_and_Configuring_Your_First_FX_Service.md b/content/en/docs/fx/Developer_Guide/Creating_and_Configuring_Your_First_FX_Service.md
new file mode 100644
index 00000000..dc1a13c8
--- /dev/null
+++ b/content/en/docs/fx/Developer_Guide/Creating_and_Configuring_Your_First_FX_Service.md
@@ -0,0 +1,196 @@
+---
+title: "Creating and Configuring Your First FX Service"
+linkTitle: "Your First FX Service"
+tags: [quickstart, connect, register]
+weight: 202
+categories: ["Knowledge Base"]
+description: >-
+  Creating and Configuring Your First FX Service
+---
+
+## Creating and Configuring Your First FX Service in a Local Environment
+
+{{% alert title="No Infrastructure Required!" color="warning" %}}
+When it comes to developing FX services, there's no need for complex infrastructure setups. Nearly all of the platform's essential features can be replicated in your local development environment.
+{{% /alert %}}
+
+This guide provides a clear walkthrough of the process for creating and simulating services, all within the comfort of your desktop IDE. By following these steps, you'll be able to seamlessly generate and define services, and then simulate their behavior before taking them live.
+
+### Step 1: Create a Virtual Environment
+
+Before you start working on your FX service, it's a good practice to create a virtual environment to isolate your project's dependencies. A virtual environment ensures that the packages you install for this project won't conflict with packages from other projects. You can create a virtual environment using a tool like `virtualenv`:
+
+```bash
+# Replace "my_fx_project" with your desired project name
+virtualenv my_fx_project-env
+```
+
+Activate the virtual environment:
+
+```bash
+source my_fx_project-env/bin/activate
+```
+
+### Step 2: Set Environment Variable
+
+Set the `EF_ENV` environment variable to "local" to indicate that you're working in a local development environment:
+
+```bash
+export EF_ENV=local
+```
+
+### Step 3: Project Directory Setup
+
+Create a directory that will serve as the main project directory. Inside this directory, you will organize multiple services. For example:
+
+```bash
+mkdir my_fx_project
+cd my_fx_project
+```
+
+### Step 4: Create Service Directory
+
+Within the project directory, create a subdirectory for your specific service. This directory should have a name that consists of alphanumeric characters in lowercase, along with underscores (`_`) and hyphens (`-`), no _spaces_ allowed:
+
+```bash
+mkdir my-service-name
+cd my-service-name
+```
+
+### Step 5: Create app.py
+
+Inside the service directory, create an `app.py` file. This file will serve as the entry point for your FX service. In this file, import the necessary context from the `fx_ef` (core library) for your service:
+
+```python
+# app.py
+from fx_ef import context
+
+# Your service code starts here
+```
+
+### Step 6: Run app.py
+
+Run the `app.py` file. This step generates two JSON files:
+- `ef_env.json`: Simulates the parameters, secrets, and configurations of the service.
+- `ef_package_state.json`: Holds the execution state of the service.
+
+The above two files are used to simulate the service environment and are not used at runtime. They should not be checked into Git. A sample _.gitignore_ for FX projects is provided in _The GitIgnore File_.
+
+```bash
+python app.py
+```
+
+### Step 7: Expand Your Service
+
+With the initial setup done, you can now expand the `app.py` file with your actual service logic. Build and implement your FX service within this file.
+
+
+### Step 8: Module Placement
+
+It's important to note that any modules (additional Python files) your service relies on should be placed within the same directory as the `app.py` file. FX does not support nested directories for modules.
+
+By following these steps, you'll be able to create your first FX service in a local environment, set up the necessary configurations, and start building your service logic. Remember to activate your virtual environment whenever you work on this project and customize the `app.py` file to match the functionality you want your FX service to provide.
+
+Adding a `manifest.json` file to describe your FX service to the platform is an essential step for proper integration and communication. Here's how you can create and structure the `manifest.json` file:
+
+
+### Step 9: Create manifest.json
+
+Inside your service directory, create a `manifest.json` file. This JSON file will contain metadata about your service, allowing the FX platform to understand and interact with it.
+
+The `manifest.json` file provides vital information about your FX service, making it easier for the platform to understand and manage your service's behavior and dependencies.
+
+By including this file and its necessary attributes, your service can be properly registered, tracked, and executed within the FX platform. This manifest file essentially acts as a contract between your service and the platform, enabling seamless integration.
+
+**Understanding manifest.json: Defining Your Service**
+
+The `manifest.json` file plays a critical role in describing your application to the FX Platform, as well as to fellow users and developers. Below is a sample `manifest.json` file along with an explanation of its parameters:
+
+**`manifest.json` Example:**
+
+```json
+{
+    "description": "Service with manifest file",
+    "entrypoint": "app.py",
+    "execution_order": ["app_1.py", "app.py"],
+    "tags": ["devops"],
+    "trigger_events": ["ferris.apps.minio.file_uploaded"],
+    "schedule": "54 * * * *",
+    "allow_manual_triggering": true,
+    "active": true,
+    "output_events": ["test_type_101", "test_type_112"]
+}
+```
+
+**Parameters and Descriptions:**
+
+| Parameter | Description |
+|------------------------|------------------------------------------------------------------------------------------------------------------|
+| `description` | A brief description of the service. |
+| `entrypoint` | The script that will be executed when the service is triggered. |
+| `execution_order` | An array indicating the sequence of scripts to be executed. If both `entrypoint` and `execution_order` are defined, `entrypoint` will be used. |
+| `tags` | An array of tags that categorize the service. |
+| `trigger_events` | An array of events that will trigger the service's execution. |
+| `schedule` | _Optional: A cron-like definition for scheduling service executions._ |
+| `allow_manual_triggering` | Indicates whether the service can be triggered manually. |
+| `active` | Indicates whether the service is active or inactive. |
+| `output_events` | An array of event types this service emits. |
+
+This `manifest.json` file provides essential metadata about your service, making it easier for both the platform and other users to understand its purpose, behavior, and triggers. By customizing these parameters, you tailor the service's behavior to your specific requirements.
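As a quick local sanity check (a sketch, not part of the platform's own validation on sync), one can load the manifest and confirm it declares at least one way to start. The manifest content below is the sample from above, embedded as a string so the check runs anywhere; in a real service directory you would read `manifest.json` from disk instead:

```python
import json

# Sample manifest from above, embedded for illustration.
manifest = json.loads("""
{
  "description": "Service with manifest file",
  "entrypoint": "app.py",
  "tags": ["devops"],
  "trigger_events": ["ferris.apps.minio.file_uploaded"],
  "allow_manual_triggering": true,
  "active": true
}
""")

# A service needs at least one way to start: an entrypoint or an execution order.
has_entry = bool(manifest.get("entrypoint") or manifest.get("execution_order"))
print(has_entry)               # True
print(manifest["entrypoint"])  # app.py
```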
+
+> The `output_events` attribute has no impact on the service at run time, but it lets the UI display the downstream services connected to this service, which helps other developers who wish to connect to it.
+
+
+### Step 10: Expand ef_env.json
+
+The `ef_env.json` file plays a critical role in simulating your service's environment during development. While on the FX platform, parameters, configs, and secrets are managed differently, in the local environment, you can define these elements within this JSON file for simulation purposes.
+
+```json
+{
+  "parameters": {
+    "param1": "value1",
+    "param2": "value2"
+  },
+  "secrets": {
+    "secret_key1": "secret_value1",
+    "secret_key2": "secret_value2"
+  },
+  "configs": {
+    "config_key1": "config_value1",
+    "config_key2": "config_value2"
+  }
+}
+```
+
+- `"parameters"`: In the local environment, you can define parameters directly within this dictionary. These parameters are typically accessed within your service code using the `fx_ef` library.
+
+- `"secrets"`: Similarly, you can define secret values in this section. While on the platform, secrets will be managed through the UI and loaded into your service securely. During local simulation, you can include sample secret values for testing.
+
+- `"configs"`: For configuration values, you can specify them in this dictionary. However, on the FX platform, configuration values are usually managed through an external `config.json` file. This is done to keep sensitive configuration data separate from your codebase.
+
+> Keep in mind that the `ef_env.json` file is only for simulation purposes. On the FX platform, parameters are passed through trigger event payloads, configurations come from the `config.json` file, and secrets are managed through the platform's UI.
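To illustrate how these simulated values are consumed locally, here is a minimal stand-in written in plain Python. It mirrors, but does not use, the actual `fx_ef` accessors; the keys match the sample `ef_env.json` above:

```python
import json

# Simulated ef_env.json contents, matching the sample above.
ef_env = json.loads("""
{
  "parameters": {"param1": "value1", "param2": "value2"},
  "secrets":    {"secret_key1": "secret_value1"},
  "configs":    {"config_key1": "config_value1"}
}
""")

class Lookup:
    """Dictionary wrapper echoing the get()-style access used by fx_ef."""
    def __init__(self, values):
        self._values = values

    def get(self, name, default=None):
        return self._values.get(name, default)

params = Lookup(ef_env["parameters"])
secrets = Lookup(ef_env["secrets"])
config = Lookup(ef_env["configs"])

print(params.get("param1"))               # value1
print(secrets.get("secret_key1"))         # secret_value1
print(config.get("missing", "fallback"))  # fallback
```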
+ +By expanding your `ef_env.json` file with the appropriate parameters, secrets, and sample configuration values, you'll be able to effectively simulate your service's behavior in a local environment. This allows you to test and refine your service logic before deploying it on the FX platform, where parameters, secrets, and configurations are handled differently. + + +### Step 11: Exploring the `fx_ef` Library + +In the following section, we'll delve into the capabilities of the `fx_ef` library. This library serves as a bridge between your FX service and the platform, allowing you to seamlessly implement various platform features within your service's logic. + +The `fx_ef` library encapsulates essential functionalities that enable your service to interact with the FX platform, handling triggers, events, and more. By leveraging these features, you can create robust and responsive FX services that seamlessly integrate with the platform's ecosystem. + +Here's a sneak peek at some of the functionalities offered by the `fx_ef` library: + +1. **Event Handling**: The library facilitates event-driven architecture, allowing your service to react to various triggers from the platform. Whether it's an incoming data event or an external signal, the library provides the tools to manage and respond to events effectively. + +2. **Parameter Access**: While on the FX platform, parameters are passed through trigger event payloads. The library offers methods to access these parameters effortlessly, enabling your service to make decisions and take actions based on the provided inputs. + +3. **Configuration Management**: Although configuration values are typically managed through a separate `config.json` file on the platform, the `fx_ef` library simplifies the process of accessing these configurations from within your service code. + +4. **Secrets Handling**: On the platform, secrets are managed securely through the UI. 
The library ensures that your service can access these secrets securely when running on the platform.
+
+5. **Service State Tracking**: The library also assists in managing your service's execution state, tracking its progress and ensuring smooth operation.
+
+By tapping into the capabilities of the `fx_ef` library, you can build powerful and versatile FX services that seamlessly integrate with the FX platform's functionalities. In the next section, we'll dive deeper into the specifics of how to utilize these features in your service logic.
+
+Stay tuned as we explore the `fx_ef` library in depth, unraveling the tools at your disposal for creating impactful and responsive FX services.
diff --git a/content/en/docs/fx/Developer_Guide/Database_Integration.md b/content/en/docs/fx/Developer_Guide/Database_Integration.md
new file mode 100644
index 00000000..1c53fc20
--- /dev/null
+++ b/content/en/docs/fx/Developer_Guide/Database_Integration.md
@@ -0,0 +1,117 @@
+---
+title: "Database Integration"
+linkTitle: "Database Integration"
+tags: [quickstart, database, integration]
+categories: ["Knowledge Base"]
+weight: 209
+description: >-
+  How to integrate a Database with the {{< param replacables.brand_name >}} Platform.
+---
+
+## Install Database Drivers
+
+{{< param replacables.brand_name >}} requires a Python DB-API database driver and a SQLAlchemy dialect to be installed for each datastore you want to connect to within the executor image.
+
+
+## Configuring Database Connections
+
+{{< param replacables.brand_name >}} can manage preset connection configurations. This enables a platform-wide setup for both confidential as well as general-access databases.
+
+{{< param replacables.brand_name >}} uses the SQLAlchemy Engine with its URL-template-based approach for connection management. The connection configurations are maintained as secrets within the platform and are therefore not publicly accessible; access is restricted to administrators.
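SQLAlchemy connection URLs follow a single template regardless of the database. The snippet below assembles one from its parts; the credentials and host are placeholders for illustration, not a real connection:

```python
# Generic SQLAlchemy URL template:
#   dialect+driver://username:password@host:port/database
parts = {
    "dialect": "postgresql",
    "driver": "psycopg2",
    "user": "scott",
    "password": "tiger",
    "host": "dbhost",
    "port": 5432,
    "database": "analytics",
}
db_url = "{dialect}+{driver}://{user}:{password}@{host}:{port}/{database}".format(**parts)
print(db_url)  # postgresql+psycopg2://scott:tiger@dbhost:5432/analytics
```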
+
+
+## Retrieving DB Connections
+
+The following is how to retrieve a named connection. The sample assumes that the connection identifier key is provided to the package in a `secrets.json` file.
+
+```python
+from fx_ef import context
+import sqlalchemy as db
+
+db_url = context.secrets.get('my_connection')
+engine = db.create_engine(db_url)
+
+connection = engine.connect()
+metadata = db.MetaData()
+
+```
+In the above example the db_url is set up as a secret with the name `'my_connection'`.
+
+Depending on whether this is a service, project or platform level secret, there are different approaches to setting up the secret. For service level secrets, the following is a sample `secrets.json` file for the package.
+
+```json
+{
+    "my_connection": "mysql://scott:tiger@localhost/test"
+}
+```
+* For Project scope, use the `'secrets'` tab of the Project Management UI.
+* For Platform scope secrets, use the `Vault UI` in the FX Manager Application.
+
+
+## Database Drivers
+
+The following table provides a guide on the Python libraries to be installed within the Executor docker image. For instructions on how to extend the Executor Docker image, please check this page: /docs/extending_executor_image
+
+Note that many other databases are supported, the main criterion being the existence of a functional SQLAlchemy dialect and Python driver.
+
+> _Note: Searching for the keyword "sqlalchemy + (database name)" should help get you to the right place._
+
+If your database or data engine isn't on the list but a SQL interface exists, please file an issue so we can work on documenting and supporting it.
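Before wiring up a connection, it can help to confirm that the required driver module is importable inside the executor environment. A stdlib-only check might look like this (the module names are illustrative; SQLite's driver ships with Python itself):

```python
import importlib.util

def driver_available(module_name: str) -> bool:
    """Return True if the given Python driver module can be imported."""
    return importlib.util.find_spec(module_name) is not None

# sqlite3 is always present in a standard Python install;
# a made-up module name illustrates the failure case.
print(driver_available("sqlite3"))         # True
print(driver_available("no_such_driver"))  # False
```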
+
+
+A list of some of the recommended packages:
+
+| Database | PyPI package |
+| ------------------------------------------------------------ | ------------------------------------------------------------ |
+| [Amazon Athena](/docs/integrations/database_guide/databases/athena) | `pip install "PyAthenaJDBC>1.0.9"` , `pip install "PyAthena>1.2.0"` |
+| [Amazon Redshift](/docs/integrations/database_guide/databases/redshift) | `pip install sqlalchemy-redshift` |
+| [Apache Drill](/docs/integrations/database_guide/databases/drill) | `pip install sqlalchemy-drill` |
+| [Apache Druid](/docs/integrations/database_guide/databases/druid) | `pip install pydruid` |
+| [Apache Hive](/docs/integrations/database_guide/databases/hive) | `pip install pyhive` |
+| [Apache Impala](/docs/integrations/database_guide/databases/impala) | `pip install impyla` |
+| [Apache Kylin](/docs/integrations/database_guide/databases/kylin) | `pip install kylinpy` |
+| [Apache Pinot](/docs/integrations/database_guide/databases/pinot) | `pip install pinotdb` |
+| [Apache Solr](/docs/integrations/database_guide/databases/solr) | `pip install sqlalchemy-solr` |
+| [Apache Spark SQL](/docs/integrations/database_guide/databases/spark-sql) | `pip install pyhive` |
+| [Ascend.io](/docs/integrations/database_guide/databases/ascend) | `pip install impyla` |
+| [Azure MS SQL](/docs/integrations/database_guide/databases/sql-server) | `pip install pymssql` |
+| [Big Query](/docs/integrations/database_guide/databases/bigquery) | `pip install pybigquery` |
+| [ClickHouse](/docs/integrations/database_guide/databases/clickhouse) | `pip install clickhouse-driver==0.2.0 && pip install clickhouse-sqlalchemy==0.1.6` |
+| [CockroachDB](/docs/integrations/database_guide/databases/cockroachdb) | `pip install cockroachdb` |
+| [Dremio](/docs/integrations/database_guide/databases/dremio) | `pip install sqlalchemy_dremio` |
+| [Elasticsearch](/docs/integrations/database_guide/databases/elasticsearch) | `pip install elasticsearch-dbapi` |
+| [Exasol](/docs/integrations/database_guide/databases/exasol) | `pip install sqlalchemy-exasol` |
+| [Google Sheets](/docs/integrations/database_guide/databases/google-sheets) | `pip install shillelagh[gsheetsapi]` |
+| [Firebolt](/docs/integrations/database_guide/databases/firebolt) | `pip install firebolt-sqlalchemy` |
+| [Hologres](/docs/integrations/database_guide/databases/hologres) | `pip install psycopg2` |
+| [IBM Db2](/docs/integrations/database_guide/databases/ibm-db2) | `pip install ibm_db_sa` |
+| [IBM Netezza Performance Server](/docs/integrations/database_guide/databases/netezza) | `pip install nzalchemy` |
+| [MySQL](/docs/integrations/database_guide/databases/mysql) | `pip install mysqlclient` |
+| [Oracle](/docs/integrations/database_guide/databases/oracle) | `pip install cx_Oracle` |
+| [PostgreSQL](/docs/integrations/database_guide/databases/postgres) | `pip install psycopg2` |
+| [Trino](/docs/integrations/database_guide/databases/trino) | `pip install sqlalchemy-trino` |
+| [Presto](/docs/integrations/database_guide/databases/presto) | `pip install pyhive` |
+| [SAP Hana](/docs/integrations/database_guide/databases/hana) | `pip install hdbcli sqlalchemy-hana` or `pip install apache-ferris[hana]` |
+| [Snowflake](/docs/integrations/database_guide/databases/snowflake) | `pip install snowflake-sqlalchemy` |
+| SQLite | No additional library needed |
+| [SQL Server](/docs/integrations/database_guide/databases/sql-server) | `pip install pymssql` |
+| [Teradata](/docs/integrations/database_guide/databases/teradata) | `pip install teradatasqlalchemy` |
+| [Vertica](/docs/integrations/database_guide/databases/vertica) | `pip install sqlalchemy-vertica-python` |
+| [Yugabyte](/docs/integrations/database_guide/databases/yugabytedb) | `pip install psycopg2` |
+
+------
+
diff --git a/content/en/docs/fx/Developer_Guide/Deploy_A_Service.md b/content/en/docs/fx/Developer_Guide/Deploy_A_Service.md
new file mode
100644
index 00000000..18dfab31
--- /dev/null
+++ b/content/en/docs/fx/Developer_Guide/Deploy_A_Service.md
@@ -0,0 +1,146 @@
+---
+title: "Deploy a Service"
+linkTitle: "Deploy a Service"
+tags: [quickstart, services, deploy]
+categories: ["Knowledge Base"]
+weight: 204
+description: >-
+  Follow these step-by-step instructions to set up a real-time and responsive integration between GitHub and the {{< param replacables.brand_name >}} Platform.
+---
+
+## Deploying Services: A Step-by-Step Overview
+
+In this section, we provide you with a concise yet comprehensive overview of the steps required to deploy a service or a collection of services onto the FX platform. Following these steps ensures a smooth transition from development to deployment.
+
+### Step 1: Check Services into Git
+
+Before anything else, ensure your collection of services is properly versioned and checked into a Git repository. This guarantees version control and a reliable source of truth for your services.
+
+### Step 2: Create a Project in the UI
+
+In the FX Platform UI, initiate the process by creating a project. Projects serve as containers for your services, aiding in organization and management.
+
+### Step 3: Add Git Repository to the Project
+
+Once your project is in place, seamlessly integrate your Git repository with it on the _Git Repositories_ tab. This connection allows the platform to access and manage your services' source code.
+
+### Step 4: Sync the Repository to the Platform
+
+The final step involves syncing the repository you've connected to your project with the FX platform. This synchronization imports the services' code, configurations, and other relevant assets into the platform environment. To do this, enter the Git Repo in Search mode and click _Sync Now_.
+
+By following these four fundamental steps, you're well on your way to deploying your services onto the FX platform.
Each of these steps plays a vital role in ensuring that your services are seamlessly integrated, accessible, and ready for execution within the FX ecosystem. + + +## Detailed Deployment Process: From Git to FX Platform + +This section breaks down the steps outlined earlier for deploying services onto the FX platform in detail, starting with checking services into Git. + +### Check Services into Git + +Since familiarity with Git is assumed, we'll briefly touch on this step. Within the FX platform, each directory within a Git Repository represents a distinct service. Files placed directly in the root directory of a Repository are not considered part of any service. + +### Create a Project in the UI + +Creating Projects and Linking with Git Repository: + +1. **Create a New Project:** + - Navigate to the "Projects" section on the left menu, then select "List Projects." + - Click "+Add" to create a new project. + + {{< blocks/screenshot color="white" image="/streamzero/images/developer_guide/create_project_git_int.png">}} + +2. **Name the Project:** + - Provide a name for the project. + - Save the project. +{{< blocks/screenshot color="white" image="/streamzero/images/developer_guide/git_int_created_project.png">}} + +3. **View Project Details:** + - Click the magnifying glass icon to access the project's details page. + +{{< blocks/screenshot color="white" image="/streamzero/images/developer_guide/loupe_git_created_project.png">}} + +{{< blocks/screenshot color="white" image="/streamzero/images/developer_guide/git_project_details_page.png">}} + +4. **Add a GitHub Repository:** + - Access the "Git Repositories" tab. + - Click "+Add" to add an SSH repository URL. + +{{< blocks/screenshot color="white" image="/streamzero/images/developer_guide/add_git_repo.png">}} + +5. **Copy GitHub Repo:** + - Generate a public SSH key _(if not done previously)._ + - Login to your GitHub account. + - Go to the repository you want to link. 
+ - Click the green "Code" button to reveal repository URLs. + - Copy the SSH URL. + + {{< blocks/screenshot color="white" image="/streamzero/images/developer_guide/github_copy_ssh_url.png">}} + +6. **Paste SSH URL:** + - Paste the copied SSH URL into the platform. + - Save to set up the repository. + - A pop-up will display a platform-generated public key. This key should be added to the GitHub Repo's Deploy Keys to enable syncing. + + {{< blocks/screenshot color="white" image="/streamzero/images/developer_guide/create_git_repo.png">}} + +7. **Add Public Key to GitHub:** + - Return to GitHub. + - Go to Settings > Deploy Keys. + - Click "Add Deploy Key." + - Paste the generated public key, name it, and add the key. + +{{< blocks/screenshot color="white" image="/streamzero/images/developer_guide/add_public_key_git.png">}} + +{{< blocks/screenshot color="white" image="/streamzero/images/developer_guide/check_saved_key_git.png">}} + +8. **Synchronize the Repository:** + - Return to the FX platform, in Search mode. + - Click "Sync Now" to sync the platform with the Git Repository. + - Check the synchronized details page; branches will be added, and status changes. + +{{< blocks/screenshot color="white" image="/streamzero/images/developer_guide/sync_now_button.png">}} + +9. **Check the Synced Packages:** + - Verify imported packages by clicking the "List Packages" tab. + - Note that the main branch is automatically synchronized. As development continues and multiple branches are used, they can also be synced individually. + +{{< blocks/screenshot color="white" image="/streamzero/images/developer_guide/synced_repos.png">}} + +{{< blocks/screenshot color="white" image="/streamzero/images/developer_guide/list_packages_git_import.png">}} + +10. **Change Git Branch on the Platform:** + - Users can choose a specific branch to work on or test. + - Access the Edit Repository details page. + - Select the desired branch from the dropdown (e.g., "dev"). 
+ - Save the selection and synchronize packages.
+
+ {{< blocks/screenshot color="white" image="/streamzero/images/developer_guide/edit_repo_branch.png">}}
+
+11. **Verify Synced Packages on Dev Branch:**
+ - Check the "List Packages" tab to confirm successful synchronization from the dev branch.
+
+{{< blocks/screenshot color="white" image="/streamzero/images/developer_guide/list_packages_git_import.png">}}
+
+### Managing Public Keys for Security and Access
+
+It's important to understand the dynamics of managing public keys to ensure security and controlled access within the FX platform environment. Here's a breakdown of key considerations:
+
+1. **Regenerating Public Keys:**
+ - You can regenerate a public key at any time if there's a concern that unauthorized access might have occurred.
+ - Regenerated keys must be added to GitHub again and synchronized on the platform afterward.
+
+2. **Ensuring Synchronization:**
+ - Whenever a new public key is generated, it must be added to the respective GitHub repository.
+ - Failure to complete this step will result in synchronization issues on the platform.
+
+3. **Synchronization and Key Addition:**
+ - When generating a new key, add it to GitHub's Deploy Keys.
+ - Afterward, ensure the key is synchronized on the platform to maintain access.
+
+4. **Revoking Access:**
+ - If a situation arises where platform access should be revoked, keys can be deleted directly on GitHub.
+
+The meticulous management of public keys is essential for maintaining the security and integrity of your FX services. By being proactive in regenerating keys, properly adding them to GitHub, and ensuring synchronization on the platform, you're taking steps to uphold a secure development and deployment environment.
diff --git a/content/en/docs/fx/Developer_Guide/Extending_executor_image b/content/en/docs/fx/Developer_Guide/Extending_executor_image
new file mode 100644
index 00000000..2b24b2b2
--- /dev/null
+++ b/content/en/docs/fx/Developer_Guide/Extending_executor_image
@@ -0,0 +1,11 @@
+---
+title: "Extending the Executor Image"
+linkTitle: "Extending the Executor Image"
+tags: [quickstart, extensions, executor]
+categories: ["Knowledge Base"]
+weight: 214
+description: >-
+  The FX executor image can be extended to fit your requirements - Here we show you how that is done.
+---
+
+text
\ No newline at end of file
diff --git a/content/en/docs/fx/Developer_Guide/FERRIS_Executor_Helper_Simplifying_FX_Service_Development.md b/content/en/docs/fx/Developer_Guide/FERRIS_Executor_Helper_Simplifying_FX_Service_Development.md
new file mode 100644
index 00000000..cf3c21f7
--- /dev/null
+++ b/content/en/docs/fx/Developer_Guide/FERRIS_Executor_Helper_Simplifying_FX_Service_Development.md
@@ -0,0 +1,121 @@
+---
+title: "FX Core Lib: Simplifying FX Service Development"
+linkTitle: "The FX Platform Lib"
+tags: [quickstart, connect, register]
+weight: 203
+categories: ["Knowledge Base"]
+description: >-
+  FX Core Lib: Simplifying FX Service Development
+---
+
+## FX Core Lib: Simplifying FX Service Development
+
+The FX Helper package, available through the `fx_ef` library, offers an array of convenient functions that streamline the development of FX services. This guide walks you through the different ways you can leverage this package to access service configurations, parameters, secrets, and state within your service logic.
+ + +## Accessing Package Configuration + +Retrieve configuration values that influence your service's behavior by using the `context.config.get()` method: + +```python +from fx_ef import context + +value = context.config.get('some_configuration_key') +``` + + +## Accessing Execution Parameters + +Access parameters that affect your service's execution using the `context.params.get()` method: + +```python +from fx_ef import context + +param_value = context.params.get('param_name') +``` + + +## Accessing Secrets + +Easily access secrets stored on platform, project, or package levels with the `context.secrets.get()` method: + +```python +from fx_ef import context + +secret_value = context.secrets.get('secret_name') +``` + + +## Setting Secrets + +Set secrets on project and platform levels using the `context.secrets.set()` method: + +```python +from fx_ef import context + +context.secrets.set(name="platform_secret", value={"somekey": "someval"}, context="platform") +``` + + +## Accessing Package ID and Name + +Retrieve your package's ID and name using the `context.package.id` and `context.package.name` attributes: + +```python +from fx_ef import context + +package_id = context.package.id +package_name = context.package.name +``` + + +## Accessing and Updating Package State + +Manage your service's execution state with `context.state.get()` and `context.state.put()`: + +```python +from fx_ef import context + +state_data = context.state.get() +context.state.put("some_key", "some_value") +``` + + +## Logging + +Leverage logging capabilities at different levels - DEBUG, INFO (default), ERROR, WARNING, and CRITICAL: + +```python +from fx_ef import context + +context.logging.setLevel('INFO') + +context.logging.debug("debug msg") +context.logging.info("info msg") +context.logging.error("error msg") +context.logging.warning("warning msg") +context.logging.critical("critical msg") +``` + + +## Scheduling Retry of Service Execution + +Use the `context.scheduler.retry()` method 
to schedule the next execution of your service from within your script: + +```python +from fx_ef import context + +# Retry in 3 minutes +job_id = context.scheduler.retry(minutes=3) + +# Retry in 3 hours +job_id = context.scheduler.retry(hours=3) + +# Retry in 3 days +job_id = context.scheduler.retry(days=3) + +# Retry on the 56th minute of the next hour +job_id = context.scheduler.retry(cron_expression="56 * * * *") +``` + +This guide provides insight into the powerful functionalities offered by the `fx_ef` library, making FX service development more efficient and intuitive. These tools empower you to create responsive and feature-rich FX services with ease. \ No newline at end of file diff --git a/content/en/docs/fx/Developer_Guide/Form_generator.md b/content/en/docs/fx/Developer_Guide/Form_generator.md new file mode 100644 index 00000000..9c8baaba --- /dev/null +++ b/content/en/docs/fx/Developer_Guide/Form_generator.md @@ -0,0 +1,148 @@ +--- +title: "Form Generator" +linkTitle: "Form Generator" +tags: [forms] +categories: ["Knowledge Base"] +weight: 208 +description: >- + How to generate Forms that trigger services. +--- + +From time to time, you may encounter scenarios in which you need to create a frontend for initiating a service, often to be used by non-technical individuals. Both FX and K8X offer the capability to define forms using a straightforward JSON structure. + +These forms are automatically generated by the {{< param replacables.brand_name >}} Management UI, based on the 'parameters.json' file. + +When a service directory includes a 'parameters.json' file, the 'Run' button in the Management UI will automatically change its icon to a 'Form' icon. + +{{< blocks/screenshot color="white" image="/streamzero/images/developer_guide/run_UI_package_roboto.png">}} + +To add the 'parameters.json' file to an existing service directory, ensure that the 'allow_manual_triggering' in the manifest.json file is set to 'true.' 
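For reference, the relevant fragment of `manifest.json` (all other fields omitted) is:

```json
{
  "allow_manual_triggering": true
}
```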
+ +## Template parameters.json file + +The 'parameters.json' file contains a JSON definition of fields. These fields are presented to the user when manually triggering a package execution to collect the parameter values needed to run the package. This approach allows the same package to be easily adapted and reused for different scenarios or environments simply by providing different parameter values to the same package. + +```json +{ + "fields": [ + { + "type": "text", + "label": "Some Text", + "name": "some_text", + "required": true, + "description": "This field is required" + }, + { + "type": "textarea", + "label": "Some Textarea", + "name": "some_textarea" + }, + { + "type": "file", + "label": "Some File", + "name": "some_file", + "data": { + "bucket": "testbucket", + "async": true + } + }, + { + "type": "int", + "label": "Some Number", + "name": "some_number", + "default": 1, + "min": 0, + "max": 10 + }, + { + "type": "float", + "label": "Some Float", + "name": "some_float", + "placeholder": "0.01", + "step": 0.01, + "min": 0, + "max": 10 + }, + { + "type": "select", + "label": "Some Select", + "name": "some_select", + "default": "value 2", + "choices": [ + { + "title": "Choice 1", + "value": "value 1" + }, + { + "title": "Choice 2", + "value": "value 2" + }, + { + "title": "Choice 3", + "value": "value 3" + } + ] + }, + { + "type": "multiselect", + "label": "Some MultiSelect", + "name": "some_multiselect", + "default": ["value 2", "value 3"], + "choices": [ + { + "title": "Choice 1", + "value": "value 1" + }, + { + "title": "Choice 2", + "value": "value 2" + }, + { + "title": "Choice 3", + "value": "value 3" + } + ] + }, + { + "type": "radio", + "label": "Some Radio", + "name": "some_radio", + "choices": [ + { + "title": "Choice 1", + "value": "value 1" + }, + { + "title": "Choice 2", + "value": "value 2" + }, + { + "title": "Choice 3", + "value": "value 3" + } + ] + } + ] +} +``` + +The provided template will display a form as follows: + +{{< 
blocks/screenshot color="white" image="/streamzero/images/developer_guide/run_parameters_UI_roboto.png">}} + +When users enter values in the form and click the 'Run' button, the form parameters and values will be sent to the service upon triggering. These parameters will be available to the service as if it were triggered by an event with the same payload as the form values. + +Below is a sample script that extracts the parameters (notice that it's not fundamentally different from a script triggered by an event). The only exception is the text areas, which are treated as String data types and may require conversion using the appropriate JSON library. + +```python + +from fx_ef import context +import json + +event_type = context.params.get("sample_event_type") +event_source = context.package.name +data = json.loads(context.params.get("sample_payload")) + +context.events.send(event_type, event_source, data=data) + +``` \ No newline at end of file diff --git a/content/en/docs/fx/Developer_Guide/Leveraging_Event_Manipulation.md b/content/en/docs/fx/Developer_Guide/Leveraging_Event_Manipulation.md new file mode 100644 index 00000000..cda68d6d --- /dev/null +++ b/content/en/docs/fx/Developer_Guide/Leveraging_Event_Manipulation.md @@ -0,0 +1,132 @@ +--- +title: "Event Manipulation Strategies" +linkTitle: "Leveraging Event Manipulation" +tags: [quickstart, connect, events, register] +categories: ["Knowledge Base"] +weight: 213 +description: >- + Leveraging Event Manipulation. +--- + +Events are the powerful concept at the center of the {{< param replacables.brand_name >}} Platform. There are a number of strategies for using event structures. The following are a few important related topics. + +* Correlation IDs +* Event Mappings + + +## Understanding the Structure + +The {{< param replacables.brand_name >}} FX events are based on CloudEvents ... + + +## Understanding Correlation IDs + +Correlation IDs are a time tested approach within the Enterprise Software Landscape. 
A Correlation ID allows one to correlate 2 steps in a flow with each other and identify their location in the flow sequence.
+
+When a package receives an event the platform also passes a Correlation ID. The Correlation ID is usually generated by the platform at the start of the flow or assigned by the event originator. If a Correlation ID does not exist then a package may create one using the library provided. The Correlation ID consists of 2 parts:
+
+```
+FLOWID_SEQUENCEID
+```
+
+The first part is the unique Originator ID. The second part is a Sequence ID which is incrementally assigned by subsequent processors, allowing each processor to indicate the next stage of the processing. It is left to packages to determine whether they wish to pass through the Correlation ID or not. Usually it is preferable to pass the Correlation ID with any event that is generated from within a package.
+
+The following is a sample output:
+
+_ABCDEF1234_01 -> ABCDEF1234_02 -> ABCDEF1234_03_
+
+Searching for a Correlation ID will result in a time sorted list of runs which were triggered. By stepping through the results of each stage you can easily identify the outgoing events and the results at each stage.
+
+
+## Leveraging Event Mapping
+
+Event mapping is the mechanism of converting one event type into another.
+
+This is useful for triggering cross-flows without altering the code of the target service.
+
+Event mapping is done within the platform by using a configuration of event maps. Event maps describe the mapping of the attributes between the source and the target event. An event map must also choose one of 2 strategies:
+
+* Map ONLY Mapped Fields
+* Map ALL Fields
+
+### Strategy Map only Mapped Fields
+
+When this strategy is applied, only the attributes present in the mapping file will be available in the output event.
+
+*Please note that you cannot map events to the same event type, to avoid loopbacks.*
+
+**Map**
+
+```json
+{
+    "ferris.sample.event_a": "ferris.sample.event_b",
+    "name": "first_name",
+    "role": "designation"
+}
+```
+
+**Source Event**
+
+```json
+{
+    "type": "ferris.sample.event_a",
+    "name": "Bal",
+    "role": "developer",
+    "mobile": "1234567"
+}
+```
+
+**Output Event**
+
+When the above map is combined with the event it will result in the name and role attributes being available as first_name and designation in the output event, while the mobile number will be stripped.
+
+```json
+{
+    "type": "ferris.sample.event_b",
+    "first_name": "Bal",
+    "designation": "developer"
+}
+```
+
+### Strategy Map All Fields
+
+When this strategy is applied, all attributes of the source event will be available in the output event: the attributes listed in the mapping file are renamed, while all remaining attributes are passed through unchanged.
+
+*Please note that you cannot map events to the same event type, to avoid loopbacks.*
+
+**Map**
+
+```json
+{
+    "ferris.sample.event_a": "ferris.sample.event_b",
+    "name": "first_name",
+    "role": "designation"
+}
+```
+
+**Source Event**
+
+```json
+{
+    "type": "ferris.sample.event_a",
+    "name": "Bal",
+    "role": "developer",
+    "mobile": "1234567"
+}
+```
+
+**Output Event**
+
+When the above map is combined with the event it will result in the name and role attributes being available as first_name and designation in the output event, while the mobile number will be passed through unchanged.
+ +```json +{ + "type": "ferris.sample.event_b", + "first_name":"Bal", + "designation": "developer", + "mobile": "1234567" +} +``` + + + diff --git a/content/en/docs/fx/Developer_Guide/Project_And_Code_Structure.md b/content/en/docs/fx/Developer_Guide/Project_And_Code_Structure.md new file mode 100644 index 00000000..222a9e42 --- /dev/null +++ b/content/en/docs/fx/Developer_Guide/Project_And_Code_Structure.md @@ -0,0 +1,65 @@ +--- +title: "Project and Code Structure" +linkTitle: "Project and Code Structure" +tags: [quickstart, projects] +categories: ["Knowledge Base"] +weight: 205 +description: >- + Understanding the organization of services, repositories, and the various artefacts involved is pivotal for efficient development within the FX platform. +--- + + +## Understanding Projects + +Within the FX Platform, a Project serves as a container for multiple Services. Projects don't play a functional role; they primarily aid in organizing services based on functional relationships, solution domains, or user access groups. + +A project can be associated with multiple git repositories, each containing a collection of services. + + +## Repository Structure + +In the FX platform, every directory within a repository represents a distinct service. Files located in the root directory of a repository are disregarded. + + +## Service Artefacts + +A service encompasses an assortment of scripts, modules, and assets, including configuration files. The following are the supported types of artefacts along with their respective roles: + +Artefact Type | Description +--- | --- +`*.py` | Python scripts form the core of a service. You can include multiple python scripts, and they are executed in the order defined in the `manifest.json` file. These scripts can define classes, static methods, and more. +`*.sql` | SQL files containing SQL statements. They are executed against the default database defined in the platform. 
These files support a _'jinja'-like_ notation for parameter extraction and embedding program logic within the SQL.
+`manifest.json` | The `manifest.json` file serves to describe the service to the platform and other users. It adheres to a predefined structure and is detailed further in the [Your First FX Service](/docs/fx/developer_guide/Creating_and_Configuring_Your_First_FX_Service) section.
+`config.json` | This JSON file defines the service's configuration. These values are stored in Consul once imported into the platform. Configuration values can be accessed using the service's 'context' with the `fx_ef` module.
+`secrets.json` | This file outlines the secrets accessible within a specific service. The `secrets.json` file is uploaded via the UI and should not be committed to Git.
+`*.txt`, `*.json`, `*.jinja`, etc. | Various assets utilized by the service.
+`parameters.json` | Optional. This file defines Micro UIs, which generate forms to trigger a service.
+
+Understanding the components that constitute a service, repository, and project sets the foundation for effective FX service development. With this knowledge, you can seamlessly create, organize, and manage your services within the platform.
+
+
+Sample Repository and Directory Structure:
+```plaintext
+Project
+│
+├── Repository
+│   ├── service_1
+│   │   ├── app.py
+│   │   ├── manifest.json
+│   │   ├── config.json
+│   │   ├── secrets.json
+│   │   ├── asset.txt
+│   │   └── ...
+│   ├── service_2
+│   │   ├── app.py
+│   │   ├── manifest.json
+│   │   ├── config.json
+│   │   ├── secrets.json
+│   │   ├── asset.txt
+│   │   └── ...
+│   └── ...
+│
+└── Repository ...
+
+```
+
diff --git a/content/en/docs/fx/Developer_Guide/Secrets.md b/content/en/docs/fx/Developer_Guide/Secrets.md
new file mode 100644
index 00000000..ca5a8859
--- /dev/null
+++ b/content/en/docs/fx/Developer_Guide/Secrets.md
@@ -0,0 +1,108 @@
+---
+title: "Secrets"
+linkTitle: "Secrets"
+tags: [quickstart, secrets]
+categories: ["Knowledge Base"]
+weight: 206
+description: >-
+  How to integrate Secrets (sensitive configurations) within your services
+
+---
+
+Secrets are sensitive configuration information which you wish to use within your service. They may be single-attribute values (such as a password) or structures with multiple attributes.
+
+Secrets provide a way to handle sensitive data (supplied via secrets.json) needed for package execution, such as:
+
+- Database Passwords
+- Secret Keys
+- API Keys
+- any other sensitive data
+
+Secrets aren't visible to any users and are passed encrypted to the actual script at package execution time. Once the secrets.json file is uploaded to the platform, the data is read and securely stored in the database with double encryption.
+
+## Secret Scopes
+
+The platform supports the following scopes for a secret.
+
+| Scope | Description |
+| --------------------- | ------------------------------------------------------------ |
+| Service secrets | Service scope secrets are only available to the specific service within which the secret was defined. They are managed by uploading a secrets.json file on the service management UI. While they can also be synced from Git, this is not the preferred approach, in order to avoid having secrets in Git. |
+| Project Secrets | Secrets that are accessible to any service within a specific project. These are created by uploading a JSON file on the project secrets tab on the UI. |
+| Platform Secrets | Secrets that are accessible to any service running on the platform. These are created by uploading a JSON file on the Vault->Secrets page.
|
+
+*When accessing secrets using `fx_ef.context.secrets.get('secret_name')`, the platform will first look for `secret_name` within service secrets, then project secrets, and finally platform secrets.*
+
+### The secrets.json File
+
+To add service scope secrets you can upload a `secrets.json` file.
+
+Those values are stored double encrypted in the database and can only be accessed within the executing script. A sample `secrets.json`:
+
+```json
+{
+    "DB_NAME": "test_db",
+    "DB_PASS": "supersecretpas$"
+}
+```
+
+### Accessing secrets
+
+With `fx_ef.context.secrets` you can access secrets stored at the platform, project or service scope.
+
+```python
+from fx_ef import context
+
+context.secrets.get('secret_name')
+```
+
+This command will first look for the secret named `secret_name` within package secrets (defined in the `secrets.json` file of the package). If no such key exists it will look for it within project secrets, and finally within the platform's secrets. If a secret with that name doesn't exist, `None` will be returned.
+
+Project secrets can be accessed using `fx_ef.context.secrets.get('secret_name')` and set using `context.secrets.set("secret_name", {"somekey": "someval"}, "project")`.
+
+Platform secrets can be accessed using `fx_ef.context.secrets.get('secret_name')` and set using `context.secrets.set("secret_name", {"somekey": "someval"}, "platform")`.
+
+### Setting secrets
+
+Using the `fx_ef.context.secrets.set(name, value, context)` method you can set secrets on project and platform level.
+
+```python
+from fx_ef import context
+
+context.secrets.set(name="platform_secret", value={"somekey": "someval"}, context="platform")
+```
+
+| Parameter | Description |
+|-----------|---------------------------------------------------------------------------------------------|
+| name | Name of the secret to be set. If a secret with the same name already exists it will be updated |
+| value | Value of the secret that should be set |
+| context | Context of the secret.
Possible values are `platform` and `project` |
+
+
+## Create a new package
+
+Note that package creation is presented in another submenu of the User Guide, so only the parameters needed to showcase the Secrets functionality are filled in here.
+
+1. Click on Executions in the left side menu and then on Packages
+
+2. Click on Add to create a new package
+
+{{< blocks/screenshot color="white" image="/streamzero/images/developer_guide/create_package_secrets.png">}}
+
+3. Name the package
+4. Click on choose file and add the Python script (test_secrets.py)
+5. Click on Add more scripts and click on choose file to add the JSON file (secrets.json)
+6. Click on Save to save the package
+
+{{< blocks/screenshot color="white" image="/streamzero/images/developer_guide/save_package_secrets.png">}}
+
+#### test_secrets.py script
+
+This is an example script that shows how secrets from the `secrets.json` file can be accessed from a script at execution time using the `context.secrets.get()` method of the `fx_ef` package.
+
+```python
+from fx_ef import context
+
+print(f"DB NAME: {context.secrets.get('DB_NAME')}")
+print(f"DB PASS: {context.secrets.get('DB_PASS')}")
+
+print(f"PACKAGE NAME: {context.params.get('package_name')}")
+```
\ No newline at end of file
diff --git a/content/en/docs/fx/Developer_Guide/State_Management.md b/content/en/docs/fx/Developer_Guide/State_Management.md
new file mode 100644
index 00000000..7757e758
--- /dev/null
+++ b/content/en/docs/fx/Developer_Guide/State_Management.md
@@ -0,0 +1,32 @@
+---
+title: "State Management"
+linkTitle: "State Management"
+tags: [state]
+categories: ["Knowledge Base"]
+weight: 207
+description: >-
+  A Guide to Managing State across Runs.
+---
+
+One key aspect of reactive applications is how to manage state between runs.
+
+With {{< param replacables.brand_name >}} FX this is simple. Each Service has a state object available at run time. All you need to do is the following.
+
+```python
+from fx_ef import context
+
+my_state = context.state.get()  # returns the state previously set
+some_value = my_state.get('key')
+context.state.put('key', 'value')
+
+```
+
+The state is stored across Service runs. A state log is also maintained and stored for reference and reload.
+
+
+## How it works
+
+When a Service is started the state is loaded from the Consul key store.
+
+When a state is stored it is placed in Consul as well as sent to Kafka. The Kafka stream maintains an audit log of the state and also serves to retrieve state after a system shutdown.
+
diff --git a/content/en/docs/fx/Developer_Guide/_index.md b/content/en/docs/fx/Developer_Guide/_index.md
new file mode 100644
index 00000000..b81f9130
--- /dev/null
+++ b/content/en/docs/fx/Developer_Guide/_index.md
@@ -0,0 +1,8 @@
+---
+title: "Developer Guide"
+linkTitle: "Developer Guide"
+tags: []
+weight: 201
+description: >-
+  This Developer Guide provides step-by-step guidance on setting up Ferris locally, building microservices and running code snippets as well as entire services.
+---
diff --git a/content/en/docs/fx/Developer_Guide/git_integration.md b/content/en/docs/fx/Developer_Guide/git_integration.md
new file mode 100644
index 00000000..df21b2ec
--- /dev/null
+++ b/content/en/docs/fx/Developer_Guide/git_integration.md
@@ -0,0 +1,123 @@
+---
+title: "Git Integration"
+linkTitle: "Git Integration"
+tags: [git, integrations]
+categories: ["Knowledge Base"]
+weight: 212
+description: >-
+  How to integrate a Git Repository with the {{< param replacables.brand_name >}} Platform.
+---
+
+The Git Integration provides the capability to connect a Git Repository to a Project and synchronise the contained Packages with the Executor, so that they can be executed through the {{< param replacables.brand_name >}} FX Platform.
It provides another, more fluent way of connecting scripts with the {{< param replacables.brand_name >}} FX Platform without the need to upload files directly to the platform.
+
+A new Project is created below to showcase the capability of the Git integration:
+
+
+## Create a new project
+
+1. Click on Projects in the left side menu to open the drop-down and then on List Projects
+2. Click on +Add to create a new project
+
+{{< blocks/screenshot color="white" image="/streamzero/images/developer_guide/create_project_git_int.png">}}
+
+3. Name the project
+4. Save the new project
+
+{{< blocks/screenshot color="white" image="/streamzero/images/developer_guide/git_int_created_project.png">}}
+
+## Check the created project
+
+1. Click on the magnifying glass to open the details page of the project
+
+{{< blocks/screenshot color="white" image="/streamzero/images/developer_guide/loupe_git_created_project.png">}}
+
+{{< blocks/screenshot color="white" image="/streamzero/images/developer_guide/git_project_details_page.png">}}
+
+## Add a GitHub Repository to the created project
+
+1. Click on the Git Repositories tab
+2. Click on +Add to add an SSH repository URL
+
+{{< blocks/screenshot color="white" image="/streamzero/images/developer_guide/add_git_repo.png">}}
+
+## Copy GitHub Repo
+
+*Note that before adding your GitHub Repository to the platform, a public SSH key needs to be generated.*
+
+1. Log in to your GitHub account
+2. Go to the Repository you want to add to the project, in this use case "ferris-packages"
+3. Click on the green Code button to show the repository URLs
+4. Copy the SSH URL
+
+{{< blocks/screenshot color="white" image="/streamzero/images/developer_guide/github_copy_ssh_url.png">}}
+
+## Paste SSH URL
+
+1. Paste the copied SSH URL from your repo
+2.
Click save to create the repository on the platform
+
+{{< blocks/screenshot color="white" image="/streamzero/images/developer_guide/create_git_repo.png">}}
+
+*Note that a pair of public and private keys is generated for each repository and saved on the {{< param replacables.brand_name >}} FX platform. The private key is encrypted and stored safely in the database and will never be presented to anyone, whereas the public key should be copied and added to the git repository in order to give {{< param replacables.brand_name >}} FX access to the repository and allow it to clone packages.*
+
+{{< blocks/screenshot color="white" image="/streamzero/images/developer_guide/public_key_repo.png">}}
+
+## Add the public key to GitHub
+
+1. Return to your GitHub account
+2. Click on Settings in the top menu bar
+3. Click on Deploy keys
+4. Click on Add deploy key
+
+{{< blocks/screenshot color="white" image="/streamzero/images/developer_guide/add_public_key_git.png">}}
+
+5. Paste the generated public key
+6. Name the public key
+7. Click on Add key
+
+{{< blocks/screenshot color="white" image="/streamzero/images/developer_guide/save_public_key_git.png">}}
+
+8. Check the saved public key
+
+{{< blocks/screenshot color="white" image="/streamzero/images/developer_guide/check_saved_key_git.png">}}
+
+## Synchronise the repository
+
+1. Return to the {{< param replacables.brand_name >}} FX platform
+2. Click the Sync Now button to synchronise the platform with the GitHub repository
+
+{{< blocks/screenshot color="white" image="/streamzero/images/developer_guide/sync_now_button.png">}}
+
+3. Check the synchronised details page
+
+*Note that the branches (main; dev) were added and the status has changed (synced).*
+
+{{< blocks/screenshot color="white" image="/streamzero/images/developer_guide/synced_repos.png">}}
+
+4.
Click on the List Packages tab to verify that the packages were imported
+
+{{< blocks/screenshot color="white" image="/streamzero/images/developer_guide/list_packages_git_import.png">}}
+
+## Change Git Branch on the platform
+
+*If users want to test or work on a specific branch, they can select the branch required to do so. The main branch is selected by default.*
+
+1. Click on the edit button to open the Edit Repository details page
+
+{{< blocks/screenshot color="white" image="/streamzero/images/developer_guide/edit_repo_branch.png">}}
+
+2. Click in the drop-down to select the branch, in this case "dev"
+3. Click on Save to save the selected branch
+
+{{< blocks/screenshot color="white" image="/streamzero/images/developer_guide/save_branch.png">}}
+
+4. Click on Sync to synchronise the packages from the dev branch
+
+{{< blocks/screenshot color="white" image="/streamzero/images/developer_guide/sync_dev_branch.png">}}
+
+5. Click on the List Packages tab to verify the packages have been synced from the dev branch
+
+{{< blocks/screenshot color="white" image="/streamzero/images/developer_guide/list_packages_dev.png">}}
+
+*Note that a public key can be regenerated at any moment if in doubt that someone has access to it. If a new key is generated, it needs to be added to GitHub again and synced on the platform afterwards. If the step of adding the key is missed, the synchronisation will fail.
Keys can also be deleted directly on GitHub if the platform access shouldn't be granted anymore.* + diff --git a/content/en/docs/fx/Developer_Guide/gitignore_sample.md b/content/en/docs/fx/Developer_Guide/gitignore_sample.md new file mode 100644 index 00000000..b5f5327d --- /dev/null +++ b/content/en/docs/fx/Developer_Guide/gitignore_sample.md @@ -0,0 +1,181 @@ +--- +title: "Gitignore File" +linkTitle: "A Sample GitIgnore File" +tags: [quickstart, connect, register] +weight: 250 +categories: ["Knowledge Base"] +description: >- + A sample .gitignore file +--- + +You can populate your .gitignore file with the contents below to avoid storing your local settings on git. + +```shell +# Byte-compiled / optimized / DLL files +__pycache__/ +*.py[cod] +*$py.class + +# C extensions +*.so + +# Distribution / packaging +.Python +build/ +develop-eggs/ +dist/ +downloads/ +eggs/ +.eggs/ +lib/ +lib64/ +parts/ +sdist/ +var/ +wheels/ +share/python-wheels/ +*.egg-info/ +.installed.cfg +*.egg +MANIFEST + +# PyInstaller +# Usually these files are written by a python script from a template +# before PyInstaller builds the exe, so as to inject date/other infos into it. 
+*.manifest +*.spec + +# Installer logs +pip-log.txt +pip-delete-this-directory.txt + +# Unit test / coverage reports +htmlcov/ +.tox/ +.nox/ +.coverage +.coverage.* +.cache +nosetests.xml +coverage.xml +*.cover +*.py,cover +.hypothesis/ +.pytest_cache/ +cover/ + +# Translations +*.mo +*.pot + +# Django stuff: +*.log +local_settings.py +db.sqlite3 +db.sqlite3-journal + +# Flask stuff: +instance/ +.webassets-cache + +# Scrapy stuff: +.scrapy + +# Sphinx documentation +docs/_build/ + +# PyBuilder +.pybuilder/ +target/ + +# Jupyter Notebook +.ipynb_checkpoints + +# IPython +profile_default/ +ipython_config.py + +# pyenv +# For a library or package, you might want to ignore these files since the code is +# intended to run in multiple environments; otherwise, check them in: +# .python-version + +# pipenv +# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control. +# However, in case of collaboration, if having platform-specific dependencies or dependencies +# having no cross-platform support, pipenv may install dependencies that don't work, or not +# install all needed dependencies. +#Pipfile.lock + +# poetry +# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control. +# This is especially recommended for binary packages to ensure reproducibility, and is more +# commonly ignored for libraries. +# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control +#poetry.lock + +# pdm +# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control. +#pdm.lock +# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it +# in version control. +# https://pdm.fming.dev/#use-with-ide +.pdm.toml + +# PEP 582; used by e.g. 
github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
+__pypackages__/
+
+# Celery stuff
+celerybeat-schedule
+celerybeat.pid
+
+# SageMath parsed files
+*.sage.py
+
+# Environments
+.env
+.venv
+env/
+venv/
+ENV/
+env.bak/
+venv.bak/
+
+# Spyder project settings
+.spyderproject
+.spyproject
+
+# Rope project settings
+.ropeproject
+
+# mkdocs documentation
+/site
+
+# mypy
+.mypy_cache/
+.dmypy.json
+dmypy.json
+
+# Pyre type checker
+.pyre/
+
+# pytype static type analyzer
+.pytype/
+
+# Cython debug symbols
+cython_debug/
+
+# PyCharm
+# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
+# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
+# and can be added to the global gitignore or merged into this file. For a more nuclear
+# option (not recommended) you can uncomment the following to ignore the entire idea folder.
+#.idea/
+
+secrets.json
+ef_env.json
+ef_package_state.json
+
+
+
+```
\ No newline at end of file
diff --git a/content/en/docs/fx/_index.md b/content/en/docs/fx/_index.md
new file mode 100644
index 00000000..51747271
--- /dev/null
+++ b/content/en/docs/fx/_index.md
@@ -0,0 +1,56 @@
+---
+title: "{{< param replacables.brand_name >}} FX"
+linkTitle: "{{< param replacables.brand_name >}} FX"
+tags: []
+weight: 100
+description: >
+  The following section provides a short overview of key features of {{< param replacables.brand_name >}} FX.
+---
+
+
+**{{< param replacables.brand_name >}} FX** is a platform for building highly scalable, cross-network sync or async microservices and agents.
+
+The unique low learning curve approach significantly reduces the cost of deploying enterprise-wide process and integration pipelines across disparate systems at speed, while at the same time creating a platform with practically unbound access and ease of integration.
+
+FX is a ground-up rethink of how both sync and async microservices are built in multi-cloud, highly volatile and fragmented software environments.
+
+**On FX you are effectively writing large applications by connecting “blocks” of code (Services) through Events.** This approach is highly intuitive and in line with iterative agile practices.
+
+The following is a brief review of some of the benefits and features of {{< param replacables.brand_name >}} FX. Upcoming features are shown in italics.
+
+| Quality | Description |
+| ---------------------------------------------------------- | ------------------------------------------------------------ |
+| **Low Learning Curve** | Developers can practically learn within an hour how to work with FX. |
+| **Highly Scalable** | Built from the ground up for scalability. The event messaging core is based on an Apache Kafka backbone that can transmit millions of jobs per day to any number of Services without interruption. |
+| **Resource Efficient** | FX Microservices are deployed in real time as Events come in. There are not 100s of microservice containers running on your platform, just three (3) components: the {{< param replacables.brand_name >}} Management UI, the Event Router and any number of Executors. |
+| **Plug into anything. Practically Limitless Integrations** | Utilize the extensive library functionalities offered by Python, with the potential addition of Java, DOTNET, or GoLang in the near future. Avoid being solely reliant on paid pre-packaged modules that follow rigid structures, involve complex building processes, and may result in vendor lock-in. |
+| **Combined Support for Sync and Async Microservices** | Manage both your Async and Sync Service Mesh in a single interface without any expensive and cumbersome 3rd party system. Reduce the complexity of your infrastructure and the number of components.
|
+| **Fully Containerised and Easy to Deploy** | Pre-packaged Kubernetes Templates with minimal customisation requirements fit straight into your enterprise Kubernetes (and if you don't have one we will build you one). Run in 1 command and scale as you wish. |
+| **All Ops Data in one (1) Secure place** | We record all events, logs and alerts in Kafka and store them in a daily index within Elasticsearch for easy search and loading into other systems such as Splunk, DataDog, LogTrail etc. A single scalable, fault tolerant system to transport events and operational data. |
+| **Monitor Performance** | All Service Executions are continuously tracked and recorded by FX, allowing you to easily identify bottlenecks in your processing. Execution details can be viewed on the UI or within Kibana/Elasticsearch. |
+| **Enterprise Friendly User and Project Management** | FX can be easily plugged into your identity infrastructure; OIDC, AD and SAML are all supported. Packages are organised by Projects, enabling users to have specific roles and simplifying oversight and governance of your platform. This is further enhanced by tagging support, promoting enterprise-wide shared semantics and a Taxonomy of packages. |
+| **Structured Service Documentation** | *Provide a readme.md file with your package to document it for users. Provide an OpenAPI document to automatically expose the endpoint and document it for users. Provide a manifest JSON file for describing the package.* |
+| **Developer Friendly GIT integration** | A seamless integration into GIT that fits straight into existing development and deployment flows. Push to git to activate code. No more, no less. |
+| **Simple Standard and Powerful Event Format** | Based on CloudEvents, the FX event format is simple JSON, familiar to every developer. If you know how to read JSON you can build a Microservice.
|
+| **Simple Easy to Understand and Use Conventions** | A Microservice consists of a set of scripts run in sequence; it receives events as JSON and sends events as JSON. Use a config.json to store configs, and use a secrets.json to store secrets. Store files in /tmp. You can use any Python libraries and also deploy your own packages with each service. |
+| **Selective Feature Support** | Our 'Everything is Optional' approach to the conventions supported by services means that developers can incrementally improve the quality of their code as they get more familiar with the system. A base service is just a single script; from there developers can build up to support configurations, UI, reusable packages, published interface packages or custom image execution. |
+| **Support for Enterprise Specific Library Distributions** | Package enterprise-specific libraries into the base executor for use by all executors within the enterprise, saving significant amounts of development time. |
+| **Real Time Code Updates** | FX's near-real-time deployment means code changes are immediately active. |
+| **Run A/B Testing with Ease** | Plug different code versions into the same event to evaluate and measure different outcomes. |
+| **Run Anything with K8X** | The unique RUN ANYTHING architecture further breaks the boundaries of running polyglot container systems. Simply tell the system on which image a piece of code is to execute. |
+| **Activate or Deactivate Services in Realtime** | Services can be activated or deactivated on demand. |
+| **Instant Scaling** | Simply increase the number of Router or Executor replicas to process faster (provided your underlying services can support it). |
+| **View Logs in Realtime** | View the logs of any execution in real time, directly on the Management UI. |
+| **View Event Dependencies Easily** | Have an error? Easily trace the events which led to it, with all the parameters used to run the event flow. 
|
+| **UI Support for Microservices** | Drop in a metadata.json file to auto-generate UIs for entering parameters for a specific package. |
+| **Easy Aggregated Logging** | All service logs are aggregated, searchable and viewable in real time. Developers can log easily. |
+| **Adaptive Event Schema** | FX continuously scans incoming events to populate the event catalog and their schemas, making it easier for developers to write services which react to platform events and continuously keeping you up to date on the Events within your platform. |
+| **Parallel Event Processing and Flows** | The same Event can be sent to multiple services for processing, enabling multiple flows to be triggered. |
+| **Anonymous and Published Interfaces** | Services can easily standardise and publish their interfaces, making them available in the 'No-Code' flows. |
+| **Event Mapping** | Easily map parameters of one event to another event, allowing you to easily link event flows together. |
+| **Event Tagging** | Tag events, enabling an easy organisation of event groups by domain. |
+| **Execution Prioritisation and Cancellation** | Granular queue management to prioritise or cancel specific executions if there is a backlog. |
+| **Modular Easily Extendible UI** | Add modular custom UIs to the management interface using FX extensions to the Flask App Builder. |
+
+
+
diff --git a/content/en/docs/fx/architecture-overview.md b/content/en/docs/fx/architecture-overview.md
new file mode 100644
index 00000000..b63f743e
--- /dev/null
+++ b/content/en/docs/fx/architecture-overview.md
@@ -0,0 +1,137 @@
+---
+title: "Architecture Overview"
+linkTitle: "Architecture"
+tags: []
+weight: 201
+description: >-
+  A comprehensive overview of the architecture of {{< param replacables.brand_name >}} FX. 
+---
+
+## Concepts
+
+{{< param replacables.brand_name >}} FX is based on two simple concepts: **Services** and **Events**.
+
+**On FX you are effectively writing large applications by connecting “blocks” of code through Events.**
+
+![image-20211024081829495](/images/image-20211024081829495.png)
+
+Each Service is a self-contained piece of functionality such as loading a file, running a database view rebuild or launching a container. You can link and re-link the blocks of code at any time. The source code can be as big or as tiny as you like.
+
+Each Service is triggered by an Event. Each Service also emits Events, thereby allowing other Services to be triggered following (or during) the execution of a Service.
+
+A Service can respond to multiple Event types, and a single Event type may trigger multiple Services, allowing you to extend your Application(s) on the fly with ease.
+
+![image-20211024080659941](/images/image-20211024080659941.png)
+
+You are not required to think in terms of pre-defined DAGs (Directed Acyclic Graphs) and can rapidly and iteratively build, test and deploy your applications.
+
+{{< blocks/screenshot color="white" image="/streamzero/images/user_guide/list_projects_add_roboto.png">}}
+
+
+### Services
+
+**SERVICES** are collections of scripts and modules which are executed in sequence by the **FX Executor**.
+
+Services are triggered by **EVENTS**, JSON messages that carry a header and a payload. A Service can be linked to one or more Events.
+
+Each script is provided with the Payload of the Event that triggered it. It is the job of the **FX Router** to send Events to the appropriate Service.
+
+The following is a basic Service which parses the event sent to it and prints the payload. 
+
+```python
+from fx_ef import context
+
+# The context.params object carries the payload of the incoming event
+param_value = context.params.get('param_name')
+print(param_value)
+
+# And this is how a service sends an event
+my_event_type = "com.test.my_event_type"
+data = {"my_attribute": "my_value"}
+
+context.events.send(my_event_type, data)
+```
+
+### Events
+
+Events are messages passed through the platform which are generated either by Services or by the {{< param replacables.brand_name >}} Manager (in the case of manually triggered runs and scheduled runs).
+
+Events are JSON formatted messages which adhere to the CloudEvents format.
+
+Events carry a Header, which indicates the event type, and a Payload (or Data section), which contains information about the event.
+
+The following is a sample Event.
+
+```json
+{
+    "specversion" : "1.0",
+    "type" : "com.example.someevent", // The Event Type
+    "source" : "/mycontext",
+    "subject": null,
+    "id" : "C234-1234-1234",
+    "time" : "2018-04-05T17:31:00Z",
+    "datacontenttype" : "application/json",
+    "data" : { // The event payload as JSON
+        "appinfoA" : "abc",
+        "appinfoB" : 123,
+        "appinfoC" : true
+    }
+}
+```
+
+### Service Triggering
+
+Services can be triggered in the following ways:
+
+- Manually: By clicking the 'Run' button on the {{< param replacables.brand_name >}} FX Management UI.
+- On Schedule: As a cron job, with the cron expression defined in the service manifest.
+- On Event: The service is triggered by the FX Router when a specific type of event (or events) is encountered on the platform; this mapping is also configured in the service manifest.
+
+Irrespective of how a Service is triggered, it is always triggered by an Event. In the case of Manual and Scheduled triggering it is the FX Platform that generates the trigger event.
+
+### Late Linking
+
+One of the most important features of the FX Platform is that you are not required to link the Service to an Event during the course of development. 
You can also change the Trigger Event(s) post-deployment.
+
+This approach gives you great flexibility:
+
+* You do not have to think in pre-defined flows; you can build the Flow as well as the Services iteratively.
+* You can maintain and test multiple versions of the same Service in parallel.
+
+
+
+## The {{< param replacables.brand_name >}} FX Flow
+
+At the core of the FX Platform, messages (Events) are passed through **Apache Kafka**. These 'events' are JSON formatted messages which adhere to the CloudEvents format.
+
+![image-20211024083411584](/images/image-20211024083411584.png)
+
+Each **Event** consists of what may be simplified as Headers and a Payload. The headers indicate the type of event and other attributes, whereas the payload contains the attributes or parameters that Services send out either to provide information about their state or for use by downstream Services.
+
+The **FX Router(s)** listens on the stream of Events passing through Kafka. Based on the platform configuration, which is managed in the **{{< param replacables.brand_name >}} Management UI**, the Router decides from the Event contents whether a Service needs to be executed. On finding a configured Handler, the Router sends a message to the Executor informing it which packages or scripts are to be run.
+
+The **FX Executor(s)** executes the **Service**. The Service may use any Python module that is embedded in the Executor and also uses the platform-internal configuration management database (at present Consul) for storing its configurations. The Executor sends a series of Events on Service execution. These are once again processed by the FX Router.
+
+![image-20211024084807506](/images/image-20211024084807506.png)
+
+The FX Executor provides infrastructure which tracks logs and maintains a record of service metrics and operational data. 
The operational information is first sent to the appropriate Kafka Topics, from where it is picked up by Ops-Data Sinks whose role is to store the data within a *log storage system*, such as Elasticsearch, Splunk or any other storage system of choice, and in some cases also filter the data for the purpose of alerting or anomaly tracking. All operational data may be viewed and queried through tools such as **Kibana** and is also viewable on the **FX Management UI**.
+
+
+
+## Required Infrastructure
+
+The following are the infrastructure components required for a {{< param replacables.brand_name >}} FX installation:
+
+| Component | Description |
+| ----------------- | ------------------------------------------------------------ |
+| Apache Kafka | Apache Kafka serves as the backbone to pass events and operational data within a {{< param replacables.brand_name >}} FX installation. |
+| PostgreSQL | Postgres is used as the database for the {{< param replacables.brand_name >}} FX Manager Application. |
+| Consul | Consul is the configuration store used by the {{< param replacables.brand_name >}} FX platform. It is also used by the services to store their configurations. |
+| MinIO | MinIO provides the platform-internal storage for scripts and assets used by the Services. |
+| Elasticsearch | Elasticsearch is used as a central store for all operational data, making the data easily searchable. |
+| Kibana | Kibana is used to view and query the data stored in Elasticsearch. |
+| {{< param replacables.brand_name >}} FX Management UI | {{< param replacables.brand_name >}} FX Management UI is where the setup, management, orchestration, execution, testing and monitoring of the {{< param replacables.brand_name >}} FX platform takes place. |
+| {{< param replacables.brand_name >}} FX Router | The Router container is responsible for listening to events flowing through the system and forwarding them to the appropriate micro-services that you create. 
|
+| {{< param replacables.brand_name >}} FX Executor | The Executor container(s) are where all code and services are orchestrated and executed. |
+
diff --git a/content/en/docs/k8x/_index.md b/content/en/docs/k8x/_index.md
new file mode 100644
index 00000000..3001f721
--- /dev/null
+++ b/content/en/docs/k8x/_index.md
@@ -0,0 +1,37 @@
+---
+title: "{{< param replacables.brand_name >}} K8X"
+linkTitle: "{{< param replacables.brand_name >}} K8X"
+weight: 102
+description: >
+  Overview and in-depth introduction to {{< param replacables.brand_name >}} Event Driven Kubernetes.
+---
+
+
+## What is {{< param replacables.brand_name >}} K8X?
+
+{{< param replacables.brand_name >}} K8X brings event-driven automation to Kubernetes.
+
+With K8X you can create service flows which span multiple containers written in different programming languages. K8X takes over the responsibility of launching the right container when an event arrives that is mapped to that container. It also injects the incoming parameters, the service-specific configurations and secrets into the container environment.
+
+Since each service container is invoked only upon an event trigger, services are otherwise dormant and require no compute resources.
+
+The event-driven nature of K8X makes it not only easy to use and fast to deploy; it also brings unprecedented levels of resource efficiency and decreased resource contention to any Kubernetes Cluster.
+
+## Benefits of K8X
+
+K8X shares the benefits provided by {{< param replacables.brand_name >}} FX in that it makes an event-driven microservices platform easy to build and operate. In contrast to FX it is no longer limited to services built in the Python programming language: the services (and containers) may be written in any language, and can still leverage the simple FX approach to retrieve event parameters, service configurations and secrets. 
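
From inside a container, consuming injected values amounts to reading environment variables. The following is a minimal Python sketch of that pattern; the variable names (`EVENT_TYPE`, `EVENT_DATA`) are hypothetical placeholders, as the actual names depend on your manifest and platform version.

```python
import json
import os

# Simulate what the platform would inject; in a real container these
# variables are already present. Names here are hypothetical.
os.environ.setdefault("EVENT_TYPE", "ferris.apps.minio.file_uploaded")
os.environ.setdefault("EVENT_DATA", json.dumps({"file_name": "report.csv"}))

def injected(name, default=None):
    """Read an injected value, decoding JSON payloads where present."""
    raw = os.environ.get(name)
    if raw is None:
        return default
    try:
        return json.loads(raw)  # nested payloads arrive JSON-encoded
    except ValueError:
        return raw  # plain string values pass through unchanged

event_type = injected("EVENT_TYPE")
payload = injected("EVENT_DATA", {})
```

The same pattern works in any language, which is the point of K8X's polyglot approach: the container only needs to read its environment.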
+
+* K8X's first and foremost benefit is that it significantly reduces the developer time needed to build event-driven microservices.
+* K8X has a very low learning curve.
+* K8X significantly decreases time spent on deployments and CI/CD by offering a built-in deployment mechanism.
+* K8X improves observability by allowing easy viewing of the status as well as the logs of the associated containers.
+
+
+## How it works
+The following is a brief explanation of how K8X works:
+
+* Edge Adapters are responsible for sourcing events from external systems, converting the incoming events into cloud events and forwarding them to the appropriate topic in Kafka.
+* These events are consumed by the K8X Hub, which looks up the mapping of the event to the target services.
+* The K8X Hub then deploys the appropriate service/container and injects the event parameters, service configs and secrets into the container environment.
+* The container executes the service.
+* The K8X Hub collects the logs from the container to monitor the container status.
\ No newline at end of file
diff --git a/content/en/docs/k8x/architecture.md b/content/en/docs/k8x/architecture.md
new file mode 100644
index 00000000..d9ba5526
--- /dev/null
+++ b/content/en/docs/k8x/architecture.md
@@ -0,0 +1,9 @@
+---
+title: "Architecture"
+linkTitle: "Architecture"
+weight: 6
+description: >
+  {{< param replacables.brand_name >}} K8X Architecture.
+---
+
+Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. 
\ No newline at end of file
diff --git a/content/en/docs/k8x/developer_guide.md b/content/en/docs/k8x/developer_guide.md
new file mode 100644
index 00000000..932bf8a1
--- /dev/null
+++ b/content/en/docs/k8x/developer_guide.md
@@ -0,0 +1,73 @@
+---
+title: "K8X Developer Guide"
+linkTitle: "Developer Guide"
+weight: 1
+description: >
+  {{< param replacables.brand_name >}} K8X Developer Guide.
+---
+
+
+{{< param replacables.brand_name >}} K8X aims to make it easy to build event-driven microservices in polyglot environments. As such it gives you complete freedom in selecting the language of your choice.
+
+In order to 'event-enable' a service, K8X requires the following artefacts to be created:
+* The manifest.json file: Describes your service to the platform.
+* The deployment.yaml: A standard Kubernetes deployment file which defines your Kubernetes deployment.
+
+Optional files:
+* The parameters.json file: Can be used to define UI forms attached to the service for manually triggered runs. Please read the section on parameters.json to understand the structure of this file.
+* The configs.json file: Defines configurations of the service.
+* The secrets.json file: Any secrets that are to be associated with the service. These will be injected into the container on launch.
+
+
+# The manifest.json
+The following is a sample manifest.json file.
+```json
+{
+    "name": "k8s_test_job",
+    "type": "k8s_job",
+    "description": "Deploying k8 job",
+    "allow_manual_triggering": true,
+    "active": true,
+    "trigger_events": ["ferris.apps.minio.file_uploaded"],
+    "tags": ["k8s"]
+}
+```
+
+
+The following table describes the attributes of the manifest.json file.
+
+
+| Attribute | Description |
+| ----------------------- | ------------------------------------------------------------ |
+| name | Name of the service. Spaces will be replaced by underscores. 
|
+| type | The type of the service must always be 'k8x_job'. |
+| description | Description of the service which will be displayed in the UI. |
+| allow_manual_triggering | Values are either 'true' or 'false'. Defines whether the service may be triggered manually from the UI, which normally means the service is either triggered from a micro-UI or does not expect any event parameters. |
+| active | Values are either 'true' or 'false'. Can be used to selectively deactivate the service. |
+| trigger_events | An array of trigger events. The service will be triggered when any of these events arrive on the platform. |
+| tags | An array of tags. Tags are used for organising and finding related services easily. |
+
+# The deployment.yaml file
+The following is a sample deployment.yaml file:
+
+```yaml
+apiVersion: batch/v1
+kind: Job
+metadata:
+  name: hello-world
+spec:
+  template:
+    spec:
+      containers:
+      - name: hello-alpine
+        image: frolvlad/alpine-bash
+        command: [ "/bin/bash", "-c" ]
+        args: [ "echo BEGIN; env; sleep 5; ls -la /usr/bin; echo DONE!" ]
+        env:
+        - name: SERVICE_PORT
+          value: "80"
+      restartPolicy: Never
+```
+
+The above is a standard Kubernetes Job deployment YAML file. As you will note, there is nothing special about it. When the file is processed by K8X, the incoming parameters, service secrets and configs are added to the container environment.
+
diff --git a/content/en/docs/k8x/user_guide.md b/content/en/docs/k8x/user_guide.md
new file mode 100644
index 00000000..27dd78d4
--- /dev/null
+++ b/content/en/docs/k8x/user_guide.md
@@ -0,0 +1,19 @@
+---
+title: "User Guide"
+linkTitle: "User Guide"
+weight: 3
+description: >
+  {{< param replacables.brand_name >}} K8X User Guide
+---
+
+{{< param replacables.brand_name >}} K8X is fully integrated with FX in the {{< param replacables.brand_name >}} Management UI. Hence K8X jobs are visible in the projects view along with FX-based jobs.
+
+FX and K8X jobs can also share the same GIT Repos. 
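
Since a K8X service needs only the manifest.json and deployment.yaml described in the Developer Guide, a shared repository can hold FX and K8X services side by side. A hypothetical layout (all service and file names below are illustrative, not prescribed by the platform):

```
my-project-repo/
├── fx_service_load_file/
│   ├── manifest.json
│   └── main.py            # FX service script (Python)
└── k8x_job_rebuild_view/
    ├── manifest.json      # type: k8x_job
    └── deployment.yaml    # standard Kubernetes Job spec
```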
+ +# List Services +The list services view of projects will display both FX and K8X jobs. K8X jobs are differentiated by the job type. + +![image-20211024081829495](/streamzero/images/k8x/userguide/list_services.png) + + + diff --git a/content/en/docs/privacy_policy.md b/content/en/docs/privacy_policy.md new file mode 100644 index 00000000..108ca7d9 --- /dev/null +++ b/content/en/docs/privacy_policy.md @@ -0,0 +1,292 @@ +--- +title: "Privacy Policy" +linkTitle: "Privacy Policy" +tags: [privacy] +categories: [] +weight: 109 +description: >- + {{< param replacables.company_name >}} Privacy Policy. + + +--- + +Last updated: February 12, 2021 + +This Privacy Policy describes Our policies and procedures on the collection, use and disclosure of Your information when You use the Service and tells You about Your privacy rights and how the law protects You. + +We use Your Personal data to provide and improve the Service. By using the Service, You agree to the collection and use of information in accordance with this Privacy Policy. + +### Interpretation and Definitions + +#### Interpretation + +The words of which the initial letter is capitalized have meanings defined under the following conditions. The following definitions shall have the same meaning regardless of whether they appear in singular or in plural. + +#### Definitions + +For the purposes of this Privacy Policy: + +- **Account** means a unique account created for You to access our Service or parts of our Service. + +- **Company** (referred to as either "the Company", "We", "Us" or "Our" in this Agreement) refers to {{< param replacables.company_name >}}, {{< param replacables.company_address >}}. + + For the purpose of the GDPR, the Company is the Data Controller. + +- **Cookies** are small files that are placed on Your computer, mobile device or any other device by a website, containing the details of Your browsing history on that website among its many uses. 
+
+- **Country** refers to: Switzerland
+
+- **Data Controller**, for the purposes of the GDPR (General Data Protection Regulation), refers to the Company as the legal person which alone or jointly with others determines the purposes and means of the processing of Personal Data.
+
+- **Device** means any device that can access the Service such as a computer, a cellphone or a digital tablet.
+
+- **Personal Data** is any information that relates to an identified or identifiable individual.
+
+  For the purposes of GDPR, Personal Data means any information relating to You such as a name, an identification number, location data, online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity.
+
+- **Service** refers to the Website.
+
+- **Service Provider** means any natural or legal person who processes the data on behalf of the Company. It refers to third-party companies or individuals employed by the Company to facilitate the Service, to provide the Service on behalf of the Company, to perform services related to the Service or to assist the Company in analyzing how the Service is used. For the purpose of the GDPR, Service Providers are considered Data Processors.
+
+- **Third-party Social Media Service** refers to any website or any social network website through which a User can log in or create an account to use the Service.
+
+- **Usage Data** refers to data collected automatically, either generated by the use of the Service or from the Service infrastructure itself (for example, the duration of a page visit). 
+ +- **Website** refers to {{< param replacables.company_name >}}, accessible from [{{< param replacables.company_website >}}](https://www.privacypolicies.com/live/{{< param replacables.company_website >}}) + +- **You** means the individual accessing or using the Service, or the company, or other legal entity on behalf of which such individual is accessing or using the Service, as applicable. + + Under GDPR (General Data Protection Regulation), You can be referred to as the Data Subject or as the User as you are the individual using the Service. + +### Collecting and Using Your Personal Data + +#### Types of Data Collected + +Personal Data + +While using Our Service, We may ask You to provide Us with certain personally identifiable information that can be used to contact or identify You. Personally identifiable information may include, but is not limited to: + +- Email address +- First name and last name +- Usage Data + +Usage Data + +Usage Data is collected automatically when using the Service. + +Usage Data may include information such as Your Device's Internet Protocol address (e.g. IP address), browser type, browser version, the pages of our Service that You visit, the time and date of Your visit, the time spent on those pages, unique device identifiers and other diagnostic data. + +When You access the Service by or through a mobile device, We may collect certain information automatically, including, but not limited to, the type of mobile device You use, Your mobile device unique ID, the IP address of Your mobile device, Your mobile operating system, the type of mobile Internet browser You use, unique device identifiers and other diagnostic data. + +We may also collect information that Your browser sends whenever You visit our Service or when You access the Service by or through a mobile device. + +Tracking Technologies and Cookies + +We use Cookies and similar tracking technologies to track the activity on Our Service and store certain information. 
Tracking technologies used are beacons, tags, and scripts to collect and track information and to improve and analyze Our Service. The technologies We use may include: + +- **Cookies or Browser Cookies.** A cookie is a small file placed on Your Device. You can instruct Your browser to refuse all Cookies or to indicate when a Cookie is being sent. However, if You do not accept Cookies, You may not be able to use some parts of our Service. Unless you have adjusted Your browser setting so that it will refuse Cookies, our Service may use Cookies. +- **Flash Cookies.** Certain features of our Service may use local stored objects (or Flash Cookies) to collect and store information about Your preferences or Your activity on our Service. Flash Cookies are not managed by the same browser settings as those used for Browser Cookies. For more information on how You can delete Flash Cookies, please read "Where can I change the settings for disabling, or deleting local shared objects?" available at https://helpx.adobe.com/flash-player/kb/disable-local-shared-objects-flash.html#main_Where_can_I_change_the_settings_for_disabling__or_deleting_local_shared_objects_ +- **Web Beacons.** Certain sections of our Service and our emails may contain small electronic files known as web beacons (also referred to as clear gifs, pixel tags, and single-pixel gifs) that permit the Company, for example, to count users who have visited those pages or opened an email and for other related website statistics (for example, recording the popularity of a certain section and verifying system and server integrity). + +Cookies can be "Persistent" or "Session" Cookies. Persistent Cookies remain on Your personal computer or mobile device when You go offline, while Session Cookies are deleted as soon as You close Your web browser. Learn more about cookies: [What Are Cookies?](https://www.privacypolicies.com/blog/cookies/). 
+ +We use both Session and Persistent Cookies for the purposes set out below: + +- **Necessary / Essential Cookies** + + Type: Session Cookies + + Administered by: Us + + Purpose: These Cookies are essential to provide You with services available through the Website and to enable You to use some of its features. They help to authenticate users and prevent fraudulent use of user accounts. Without these Cookies, the services that You have asked for cannot be provided, and We only use these Cookies to provide You with those services. + +- **Cookies Policy / Notice Acceptance Cookies** + + Type: Persistent Cookies + + Administered by: Us + + Purpose: These Cookies identify if users have accepted the use of cookies on the Website. + +- **Functionality Cookies** + + Type: Persistent Cookies + + Administered by: Us + + Purpose: These Cookies allow us to remember choices You make when You use the Website, such as remembering your login details or language preference. The purpose of these Cookies is to provide You with a more personal experience and to avoid You having to re-enter your preferences every time You use the Website. + +- **Tracking and Performance Cookies** + + Type: Persistent Cookies + + Administered by: Third-Parties + + Purpose: These Cookies are used to track information about traffic to the Website and how users use the Website. The information gathered via these Cookies may directly or indirectly identify you as an individual visitor. This is because the information collected is typically linked to a pseudonymous identifier associated with the device you use to access the Website. We may also use these Cookies to test new pages, features or new functionality of the Website to see how our users react to them. + +For more information about the cookies we use and your choices regarding cookies, please visit our Cookies Policy or the Cookies section of our Privacy Policy. 
+ +#### Use of Your Personal Data + +The Company may use Personal Data for the following purposes: + +- **To provide and maintain our Service**, including to monitor the usage of our Service. +- **To manage Your Account:** to manage Your registration as a user of the Service. The Personal Data You provide can give You access to different functionalities of the Service that are available to You as a registered user. +- **For the performance of a contract:** the development, compliance and undertaking of the purchase contract for the products, items or services You have purchased or of any other contract with Us through the Service. +- **To contact You:** To contact You by email, telephone calls, SMS, or other equivalent forms of electronic communication, such as a mobile application's push notifications regarding updates or informative communications related to the functionalities, products or contracted services, including the security updates, when necessary or reasonable for their implementation. +- **To provide You** with news, special offers and general information about other goods, services and events which we offer that are similar to those that you have already purchased or enquired about unless You have opted not to receive such information. +- **To manage Your requests:** To attend and manage Your requests to Us. +- **For business transfers:** We may use Your information to evaluate or conduct a merger, divestiture, restructuring, reorganization, dissolution, or other sale or transfer of some or all of Our assets, whether as a going concern or as part of bankruptcy, liquidation, or similar proceeding, in which Personal Data held by Us about our Service users is among the assets transferred. 
+- **For other purposes**: We may use Your information for other purposes, such as data analysis, identifying usage trends, determining the effectiveness of our promotional campaigns and to evaluate and improve our Service, products, services, marketing and your experience. + +We may share Your personal information in the following situations: + +- **With Service Providers:** We may share Your personal information with Service Providers to monitor and analyze the use of our Service, to contact You. +- **For business transfers:** We may share or transfer Your personal information in connection with, or during negotiations of, any merger, sale of Company assets, financing, or acquisition of all or a portion of Our business to another company. +- **With Affiliates:** We may share Your information with Our affiliates, in which case we will require those affiliates to honor this Privacy Policy. Affiliates include Our parent company and any other subsidiaries, joint venture partners or other companies that We control or that are under common control with Us. +- **With business partners:** We may share Your information with Our business partners to offer You certain products, services or promotions. +- **With other users:** when You share personal information or otherwise interact in the public areas with other users, such information may be viewed by all users and may be publicly distributed outside. If You interact with other users or register through a Third-Party Social Media Service, Your contacts on the Third-Party Social Media Service may see Your name, profile, pictures and description of Your activity. Similarly, other users will be able to view descriptions of Your activity, communicate with You and view Your profile. +- **With Your consent**: We may disclose Your personal information for any other purpose with Your consent. 
+
+#### Retention of Your Personal Data
+
+The Company will retain Your Personal Data only for as long as is necessary for the purposes set out in this Privacy Policy. We will retain and use Your Personal Data to the extent necessary to comply with our legal obligations (for example, if we are required to retain your data to comply with applicable laws), resolve disputes, and enforce our legal agreements and policies.
+
+The Company will also retain Usage Data for internal analysis purposes. Usage Data is generally retained for a shorter period of time, except when this data is used to strengthen the security or to improve the functionality of Our Service, or We are legally obligated to retain this data for longer time periods.
+
+#### Transfer of Your Personal Data
+
+Your information, including Personal Data, is processed at the Company's operating offices and in any other places where the parties involved in the processing are located. It means that this information may be transferred to — and maintained on — computers located outside of Your state, province, country or other governmental jurisdiction where the data protection laws may differ from those in Your jurisdiction.
+
+Your consent to this Privacy Policy followed by Your submission of such information represents Your agreement to that transfer.
+
+The Company will take all steps reasonably necessary to ensure that Your data is treated securely and in accordance with this Privacy Policy and no transfer of Your Personal Data will take place to an organization or a country unless there are adequate controls in place including the security of Your data and other personal information.
+
+#### Disclosure of Your Personal Data
+
+Business Transactions
+
+If the Company is involved in a merger, acquisition or asset sale, Your Personal Data may be transferred. We will provide notice before Your Personal Data is transferred and becomes subject to a different Privacy Policy.
+ +Law enforcement + +Under certain circumstances, the Company may be required to disclose Your Personal Data if required to do so by law or in response to valid requests by public authorities (e.g. a court or a government agency). + +Other legal requirements + +The Company may disclose Your Personal Data in the good faith belief that such action is necessary to: + +- Comply with a legal obligation +- Protect and defend the rights or property of the Company +- Prevent or investigate possible wrongdoing in connection with the Service +- Protect the personal safety of Users of the Service or the public +- Protect against legal liability + +#### Security of Your Personal Data + +The security of Your Personal Data is important to Us, but remember that no method of transmission over the Internet, or method of electronic storage is 100% secure. While We strive to use commercially acceptable means to protect Your Personal Data, We cannot guarantee its absolute security. + +### Detailed Information on the Processing of Your Personal Data + +The Service Providers We use may have access to Your Personal Data. These third-party vendors collect, store, use, process and transfer information about Your activity on Our Service in accordance with their Privacy Policies. + +#### Analytics + +We may use third-party Service providers to monitor and analyze the use of our Service. + +- **Google Analytics** + + Google Analytics is a web analytics service offered by Google that tracks and reports website traffic. Google uses the data collected to track and monitor the use of our Service. This data is shared with other Google services. Google may use the collected data to contextualize and personalize the ads of its own advertising network. + + You can opt-out of having made your activity on the Service available to Google Analytics by installing the Google Analytics opt-out browser add-on. 
The add-on prevents the Google Analytics JavaScript (ga.js, analytics.js and dc.js) from sharing information with Google Analytics about visit activity.
+
+  For more information on the privacy practices of Google, please visit the Google Privacy & Terms web page: https://policies.google.com/privacy
+
+#### Email Marketing
+
+We may use Your Personal Data to contact You with newsletters, marketing or promotional materials and other information that may be of interest to You. You may opt-out of receiving any, or all, of these communications from Us by following the unsubscribe link or instructions provided in any email We send or by contacting Us.
+
+We may use Email Marketing Service Providers to manage and send emails to You.
+
+- **Mailchimp**
+
+  Mailchimp is an email marketing sending service provided by The Rocket Science Group LLC.
+
+  For more information on the privacy practices of Mailchimp, please visit their Privacy policy: https://mailchimp.com/legal/privacy/
+
+#### Usage, Performance and Miscellaneous
+
+We may use third-party Service Providers to help improve our Service.
+
+- **Invisible reCAPTCHA**
+
+  We use an invisible captcha service named reCAPTCHA. reCAPTCHA is operated by Google.
+
+  The reCAPTCHA service may collect information from You and from Your Device for security purposes.
+
+  The information gathered by reCAPTCHA is held in accordance with the Privacy Policy of Google: https://www.google.com/intl/en/policies/privacy/
+
+### GDPR Privacy
+
+#### Legal Basis for Processing Personal Data under GDPR
+
+We may process Personal Data under the following conditions:
+
+- **Consent:** You have given Your consent for processing Personal Data for one or more specific purposes.
+- **Performance of a contract:** Provision of Personal Data is necessary for the performance of an agreement with You and/or for any pre-contractual obligations thereof.
+- **Legal obligations:** Processing Personal Data is necessary for compliance with a legal obligation to which the Company is subject.
+- **Vital interests:** Processing Personal Data is necessary in order to protect Your vital interests or those of another natural person.
+- **Public interests:** Processing Personal Data is related to a task that is carried out in the public interest or in the exercise of official authority vested in the Company.
+- **Legitimate interests:** Processing Personal Data is necessary for the purposes of the legitimate interests pursued by the Company.
+
+In any case, the Company will gladly help to clarify the specific legal basis that applies to the processing, and in particular whether the provision of Personal Data is a statutory or contractual requirement, or a requirement necessary to enter into a contract.
+
+#### Your Rights under the GDPR
+
+The Company undertakes to respect the confidentiality of Your Personal Data and to guarantee You can exercise Your rights.
+
+You have the right under this Privacy Policy, and by law if You are within the EU, to:
+
+- **Request access to Your Personal Data.** The right to access, update or delete the information We have on You. Whenever made possible, you can access, update or request deletion of Your Personal Data directly within Your account settings section. If you are unable to perform these actions yourself, please contact Us to assist You. This also enables You to receive a copy of the Personal Data We hold about You.
+- **Request correction of the Personal Data that We hold about You.** You have the right to have any incomplete or inaccurate information We hold about You corrected.
+- **Object to processing of Your Personal Data.** This right exists where We are relying on a legitimate interest as the legal basis for Our processing and there is something about Your particular situation, which makes You want to object to our processing of Your Personal Data on this ground.
You also have the right to object where We are processing Your Personal Data for direct marketing purposes. +- **Request erasure of Your Personal Data.** You have the right to ask Us to delete or remove Personal Data when there is no good reason for Us to continue processing it. +- **Request the transfer of Your Personal Data.** We will provide to You, or to a third-party You have chosen, Your Personal Data in a structured, commonly used, machine-readable format. Please note that this right only applies to automated information which You initially provided consent for Us to use or where We used the information to perform a contract with You. +- **Withdraw Your consent.** You have the right to withdraw Your consent on using your Personal Data. If You withdraw Your consent, We may not be able to provide You with access to certain specific functionalities of the Service. + +#### Exercising of Your GDPR Data Protection Rights + +You may exercise Your rights of access, rectification, cancellation and opposition by contacting Us. Please note that we may ask You to verify Your identity before responding to such requests. If You make a request, We will try our best to respond to You as soon as possible. + +You have the right to complain to a Data Protection Authority about Our collection and use of Your Personal Data. For more information, if You are in the European Economic Area (EEA), please contact Your local data protection authority in the EEA. + +### Children's Privacy + +Our Service does not address anyone under the age of 13. We do not knowingly collect personally identifiable information from anyone under the age of 13. If You are a parent or guardian and You are aware that Your child has provided Us with Personal Data, please contact Us. If We become aware that We have collected Personal Data from anyone under the age of 13 without verification of parental consent, We take steps to remove that information from Our servers. 
+
+If We need to rely on consent as a legal basis for processing Your information and Your country requires consent from a parent, We may require Your parent's consent before We collect and use that information.
+
+### Links to Other Websites
+
+Our Service may contain links to other websites that are not operated by Us. If You click on a third party link, You will be directed to that third party's site. We strongly advise You to review the Privacy Policy of every site You visit.
+
+We have no control over and assume no responsibility for the content, privacy policies or practices of any third party sites or services.
+
+### Changes to this Privacy Policy
+
+We may update Our Privacy Policy from time to time. We will notify You of any changes by posting the new Privacy Policy on this page.
+
+We will let You know via email and/or a prominent notice on Our Service, prior to the change becoming effective and update the "Last updated" date at the top of this Privacy Policy.
+
+You are advised to review this Privacy Policy periodically for any changes. Changes to this Privacy Policy are effective when they are posted on this page.
+
+### Contact Us
+
+If you have any questions about this Privacy Policy, You can contact us:
+
+- By email: info@gridmine.com
+- By visiting this page on our website: [www.gridmine.com](https://www.privacypolicies.com/live/www.gridmine.com)
+- By phone number: +41(0)41 561 0105
+- By mail: Baarerstrasse 5, 6300 Zug, Switzerland
+
+Privacy Policy for {{< param replacables.company_name >}}
\ No newline at end of file
diff --git a/content/en/docs/sx/DeveloperGuideSX/_index.md b/content/en/docs/sx/DeveloperGuideSX/_index.md
new file mode 100644
index 00000000..1fda3304
--- /dev/null
+++ b/content/en/docs/sx/DeveloperGuideSX/_index.md
@@ -0,0 +1,54 @@
+---
+title: "Developer Guide"
+linkTitle: "Developer Guide"
+weight: 2
+description: >-
+  {{< param replacables.brand_name >}} SX Developer Guide
+---
+
+## Overview
+
+**{{< param replacables.brand_name >}}** is a container-level solution for building highly scalable, cross-network sync or async applications.
+
+Using the {{< param replacables.brand_name >}} SX platform to run and manage stream processing containers utilizing {{< param replacables.brand_name >}} messaging infrastructure significantly reduces the cost of deploying enterprise applications and offers standardized data streaming between workflow steps. This simplifies development and, as a result, creates a platform with agile data processing and ease of integration.
+
+## Getting started with Stream Processors
+
+Take a look at this library for creating Stream Processors on top of Kafka and running them inside the {{< param replacables.brand_name >}} platform:
+[{{< param replacables.brand_name_lower >}}-SX](https://pypi.org/project/{{< param replacables.brand_name_lower >}}-sx/)
+
+## Example of a Stream Processor
+
+Below you can find an example application that uses the {{< param replacables.brand_name_lower >}}-sx python library functions to count the number of words in incoming messages and then sends the result to the *twitter_feed_wc* Kafka topic.
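The `process` function in the example below assumes each incoming message is a dict with a `text` field. The counting step can be sketched in isolation (the sample message here is illustrative, not taken from the Twitter feed):

```python
import json


def word_count(message: dict) -> dict:
    # Mirror the processor below: keep the text and count
    # words by splitting on single spaces.
    return {
        "text": message["text"],
        "word_count": len(message["text"].split(" ")),
    }


sample = {"text": "counting words in a stream"}
print(json.dumps(word_count(sample)))
# {"text": "counting words in a stream", "word_count": 5}
```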
+```python
+import json
+from {{< param replacables.brand_name_lower >}}_sx.core import app
+from {{< param replacables.brand_name_lower >}}_sx.utils import sx_producer
+
+
+def process(message):
+    # Build a new message containing the original text and its word count.
+    message_new = dict()
+    message_new['text'] = message['text']
+    message_new['word_count'] = len(message['text'].split(' '))
+    message_new_json = json.dumps(message_new)
+    print(message_new_json)
+    # Publish the result to the twitter_feed_wc Kafka topic.
+    sx_producer.send(topic="twitter_feed_wc", value=message_new_json.encode('utf-8'))
+
+
+# Register the handler with the SX application.
+app.process = process
+```
+
+## Creating a Docker Container
+
+Below is an example of a dockerfile to create a Docker image for the Twitter Word Count application shown in the previous section. The user is free to use whatever base Python image
+and then add the {{< param replacables.brand_name >}} module and other libraries.
+
+```
+FROM python:3.9-alpine
+#RUN pip install -i https://test.pypi.org/simple/ {{< param replacables.brand_name_lower >}}-sx==0.0.8 --extra-index-url https://pypi.org/simple/ {{< param replacables.brand_name_lower >}}-sx
+RUN pip install {{< param replacables.brand_name_lower >}}-sx
+COPY twitter_word_count.py app.py
+```
+
+After the user has built an image and pushed it to a Docker image registry, they can run it in the {{< param replacables.brand_name >}} SX UI.
diff --git a/content/en/docs/sx/_index.md b/content/en/docs/sx/_index.md
new file mode 100644
index 00000000..f76b4131
--- /dev/null
+++ b/content/en/docs/sx/_index.md
@@ -0,0 +1,57 @@
+---
+title: "{{< param replacables.brand_name >}} SX"
+linkTitle: "{{< param replacables.brand_name >}} SX"
+weight: 103
+description: >
+  The following section provides a short overview of key features, concepts and architecture of {{< param replacables.brand_name >}} SX.
+---
+
+## Overview
+
+{{< param replacables.brand_name >}} SX is a **streaming automation solution for the {{< param replacables.brand_name >}} Platform**.
It utilizes Apache Kafka, the distributed message broker used by thousands of companies, to enable data processing across the data mesh.
+
+**{{< param replacables.brand_name >}} SX drastically simplifies the creation of data pipelines and deployment of data streams**, reducing the time it takes to build stream processing applications.
+
+**It automates sourcing, streaming, and data management**, and greatly reduces the need for engineers' involvement in topics management, DevOps, and DataOps.
+
+### What is Stream-Processing
+
+Stream processing is a data management technique that involves ingesting a continuous data stream to quickly analyze, filter, transform or enhance the data in real time. Apache Kafka is the most popular open-source stream-processing software for collecting, processing, storing, and analyzing data at scale.
+
+Best known for its excellent performance and fault tolerance, Kafka is designed to deliver high throughput and at the same time maintain low latency for real-time data feeds. It can handle thousands of messages per second with an average latency of 5–15ms.
+
+Kafka serves as an ideal platform for building real-time streaming applications such as online streaming analytics or real-time message queues.
+
+Apache Kafka has several advantages over other tools. Some notable benefits are:
+
+- Building data pipelines.
+- Leveraging real-time data streams.
+- Enabling operational metrics.
+- Data integration across countless sources.
+
+
+### Common Struggles For Companies Trying to Implement Kafka as an Integration Pattern
+Now, while Kafka is great for building scalable and high-performance streaming applications, it's actually hard to implement and maintain.
+
+1. For one thing, the system is large and complex, which is why many companies fail to meet their goals.
+2.
On top of that, integrating client systems with Kafka brings additional challenges that can be difficult even for experienced teams, because there are many technical complexities that can cause hiccups in your integration strategy: data schemas, supported protocols and serialization are just some of the examples.
+3. As a result, Kafka requires a dedicated team with advanced knowledge and varying skill sets to handle its adoption — engineers, DevOps specialists, DataOps engineers, and GitOps experts.
+4. Moreover, due to the complexity of the applications, especially the concern of scalability, it can take significant time to build each application.
+
+There are many steps involved: from defining and writing business logic, setting up Kafka and integrating it with other services, to automating and deploying the applications.
+
+### How Does {{< param replacables.brand_name >}} SX Address And Solve These Issues?
+
+{{< param replacables.brand_name >}} SX takes streaming automation to a whole new level. And the way it works is simple. It removes the complexity of Kafka connections, integrations, setups, automation, and deployments, and gives the end user the opportunity to focus on building client applications instead of losing time learning how to manage Kafka.
+
+But how exactly does {{< param replacables.brand_name >}} SX solve the common issues and pitfalls mentioned above? By simplifying all processes:
+
+- It is easy to adopt and therefore has a low learning curve: users can start working with {{< param replacables.brand_name >}} SX and experience first results within an hour.
+- It removes all the complexities of Kafka: engineers focus strictly on business logic for processing messages. The {{< param replacables.brand_name >}} SX python package takes care of configuration, Kafka connections, error handling, logging, and functions to interact with other services inside {{< param replacables.brand_name >}}.
+- It is flexible.
{{< param replacables.brand_name >}} SX allows using different underlying images and installing additional components or pip modules.
+- It enables connecting service code automatically to Streams and Topics.
+- It helps you to quickly iterate on your service architecture. With {{< param replacables.brand_name >}} SX, once the images are deployed and the services are running, results are displayed right away.
+- It takes care of all the underlying core processes. This means that you don't need to worry about any technical or operational considerations.
+- It is highly scalable and provides flexibility to up- or down-scale at any time, adjusted to the user's needs and the number of topic partitions.
+
+With the experience and knowledge gained over the past 7 years, the {{< param replacables.brand_name >}} Labs team has built an out-of-the-box module that lets developers concentrate on coding while taking care of all the complex processes that come with stream-processing data integrations.
diff --git a/content/en/docs/sx/architecture.md b/content/en/docs/sx/architecture.md
new file mode 100644
index 00000000..c14ce568
--- /dev/null
+++ b/content/en/docs/sx/architecture.md
@@ -0,0 +1,57 @@
+---
+title: "Architecture"
+linkTitle: "Architecture"
+weight: 6
+description: >
+  {{< param replacables.brand_name >}} SX Architecture.
+---
+
+## {{< param replacables.brand_name >}} SX Architecture Principles
+
+Stream processing is a technique for processing large volumes of data
+in real-time as it is generated or received. One way to implement a
+stream processing architecture is to use Docker containers for
+individual workflow steps and Apache Kafka for the data pipeline.
+
+Docker is a platform for creating, deploying, and running containers,
+which are lightweight and portable units of software that can be run
+on any system with a compatible container runtime.
By using Docker +containers for each step in the stream processing workflow, developers +can easily package and deploy their code, along with any dependencies, +in a consistent and reproducible way. This can help to improve the +reliability and scalability of the stream processing system. + +Apache Kafka is a distributed streaming platform that can be used to +build real-time data pipelines and streaming applications. It provides +a publish-subscribe model for sending and receiving messages, and can +handle very high throughput and low latency. By using Kafka as the +backbone of the data pipeline, developers can easily scale their +stream processing system to handle large volumes of data and handle +failover scenarios. + +Overall, by using Docker containers for the individual workflow steps +and Apache Kafka for the data pipeline, developers can create a stream +processing architecture that is both scalable and reliable. This +architecture can be used for a wide range of use cases, including +real-time analytics, event-driven architectures, and data integration. + +Below is the high-level architecture diagram of {{< param replacables.brand_name >}} SX: + +![streamzero_sx_architecture](/images/streamzero_sx_architecture.png) + + +## Required Infrastructure + +The following are the infrastructure components required for a {{< param replacables.brand_name >}} SX installation + +| Component | Description | +|-------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| Apache Kafka | Apache Kafka serves as the backbone to pass events and operational data within a {{< param replacables.brand_name >}} SX Installation. | +| PostgreSQL | Postgres is used as the database for the {{< param replacables.brand_name >}} SX Management Application. 
|
+| Consul | Consul is the configuration store used by the {{< param replacables.brand_name >}} SX platform. It is also used by the services to store their configurations. |
+| MinIO | MinIO provides the platform-internal storage for scripts and assets used by the Services. |
+| Elasticsearch | Elasticsearch is used as a central store for all operational data, thereby making the data easily searchable. |
+| Kibana | Kibana is used to view and query the data stored in Elasticsearch. |
+| {{< param replacables.brand_name >}} Management UI | The {{< param replacables.brand_name >}} Management UI is the main UI used for all activities on the {{< param replacables.brand_name >}} FX platform. |
+| {{< param replacables.brand_name >}} FX-Router | The router container is responsible for listening to events flowing through the system and forwarding them to the appropriate micro-services that you create. |
+| {{< param replacables.brand_name >}} FX-Executor | The executor container(s) is where the code gets executed. |
diff --git a/content/en/docs/sx/containers_purpose.md b/content/en/docs/sx/containers_purpose.md
new file mode 100644
index 00000000..86c7a4d9
--- /dev/null
+++ b/content/en/docs/sx/containers_purpose.md
@@ -0,0 +1,21 @@
+---
+title: "Containers + Purpose"
+linkTitle: "Containers & Purpose"
+weight: 5
+description: >
+  {{< param replacables.brand_name >}} SX Containers + Purpose.
+---
+
+## Creating a Docker Container
+
+Below is an example of a dockerfile to create a Docker image for a {{< param replacables.brand_name >}} SX application. The user is free to choose which base Python image
+to use and then add the {{< param replacables.brand_name >}} module and other libraries.
+
+```
+FROM python:3.9-alpine
+#RUN pip install -i https://test.pypi.org/simple/ {{< param replacables.brand_name_lower >}}-sx==0.0.8 --extra-index-url https://pypi.org/simple/ {{< param replacables.brand_name_lower >}}-sx
+RUN pip install {{< param replacables.brand_name_lower >}}-sx
+COPY app.py utils.py ./
+```
+
+After the user has built an image and pushed it to a Docker image registry, they can run it in the {{< param replacables.brand_name >}} SX Management UI.
diff --git a/content/en/docs/sx/integrations_guide.md b/content/en/docs/sx/integrations_guide.md
new file mode 100644
index 00000000..688465e8
--- /dev/null
+++ b/content/en/docs/sx/integrations_guide.md
@@ -0,0 +1,18 @@
+---
+title: "Integrations Guide"
+linkTitle: "Integrations Guide"
+weight: 2
+description: >
+  {{< param replacables.brand_name >}} SX Integrations Guide.
+---
+
+## How does it Work?
+
+There are two main approaches to implementing external notifications support:
+
+* Implementation within a {{< param replacables.brand_name >}} SX container
+* Implementation in an Exit Gateway
+
+The second option is used in platforms which sit behind a firewall and therefore require the gateway to be outside the firewall in order to access external services. In these cases the adapter runs as a separate container.
+
+Irrespective of the infrastructure implementation, the service-internal API does not change.
diff --git a/content/en/docs/sx/solutions.md b/content/en/docs/sx/solutions.md
new file mode 100644
index 00000000..3fbe6b66
--- /dev/null
+++ b/content/en/docs/sx/solutions.md
@@ -0,0 +1,29 @@
+---
+title: "Solutions snippets / explain problem solved / link to relevant use case"
+linkTitle: "Solutions"
+weight: 4
+description: >
+  The following section provides {{< param replacables.brand_name >}} SX solutions snippets / explain problem solved / link to relevant use case.
+---
+
+## Twitter message processing example
+
+The first example application uses the {{< param replacables.brand_name >}}-sx python library to implement a stream processor that counts the number of words in incoming messages. The messages are queried from the Twitter API with a specific filter condition and then fed to the processor. The results are sent to a Kafka topic.
+```python
+import json
+from {{< param replacables.brand_name_lower >}}_sx.core import app
+from {{< param replacables.brand_name_lower >}}_sx.utils import sx_producer
+
+
+def process(message):
+    # Build a new message containing the original text and its word count.
+    message_new = dict()
+    message_new['text'] = message['text']
+    message_new['word_count'] = len(message['text'].split(' '))
+    message_new_json = json.dumps(message_new)
+    print(message_new_json)
+    # Publish the result to the twitter_feed_wc Kafka topic.
+    sx_producer.send(topic="twitter_feed_wc", value=message_new_json.encode('utf-8'))
+
+
+# Register the handler with the SX application.
+app.process = process
+```
\ No newline at end of file
diff --git a/content/en/docs/sx/user_guide.md b/content/en/docs/sx/user_guide.md
new file mode 100644
index 00000000..8b40383a
--- /dev/null
+++ b/content/en/docs/sx/user_guide.md
@@ -0,0 +1,34 @@
+---
+title: "User Guide"
+linkTitle: "User Guide"
+weight: 3
+description: >
+  {{< param replacables.brand_name >}} SX User Guide.
+---
+
+# {{< param replacables.brand_name >}} SX Management UI
+
+## Create a Stream Adapter
+
+After a developer has built an image of a stream processing task and pushed it to a container registry, it can be configured and launched with the {{< param replacables.brand_name >}} Management UI.
+
+In the left-side menu, open the Stream Adapters menu and select "Stream Adapter Definition".
+Fill in the details.
+
+![create_stream_adapter_ui](/images/SX_create_stream_adapter.png)
+
+
+Go to the "List Stream Adapters" page. You should find the Stream Adapter you created in the list. You can start the container by clicking the "Run" button. The download and start-up of the image can take a few minutes.
+
+![list_stream_adapter_ui](/images/SX_list_stream_adapters.png)
+
+
+When the Stream Adapter is running you can find it in the list of running adapters.
+
+![list_running_adapters_ui](/images/SX_running_adapters.png)
+
+
+{{< param replacables.brand_name >}} also has a list of all the Kafka topics that are currently attached to or available to Stream Adapters.
+
+![list_topics_ui](/images/SX_list_topics.png)
+
diff --git a/content/en/docs/terms_conditions.md b/content/en/docs/terms_conditions.md
new file mode 100644
index 00000000..a21b7743
--- /dev/null
+++ b/content/en/docs/terms_conditions.md
@@ -0,0 +1,128 @@
+---
+title: "Terms & Conditions"
+linkTitle: "Terms & Conditions"
+tags: [terms and conditions]
+categories: []
+weight: 110
+description: >-
+  {{< param replacables.company_name >}} Terms and Conditions.
+---
+
+Last updated: February 12, 2021
+
+Please read these terms and conditions carefully before using Our Service.
+
+### Interpretations and Definitions
+
+#### Interpretation
+
+The words of which the initial letter is capitalized have meanings defined under the following conditions. The following definitions shall have the same meaning regardless of whether they appear in singular or in plural.
+
+#### Definitions
+
+For the purposes of these Terms and Conditions:
+
+- **Affiliate** means an entity that controls, is controlled by or is under common control with a party, where "control" means ownership of 50% or more of the shares, equity interest or other securities entitled to vote for election of directors or other managing authority.
+- **Country** refers to: Switzerland
+- **Company** (referred to as either "the Company", "We", "Us" or "Our" in this Agreement) refers to {{< param replacables.company_name >}}, {{< param replacables.company_address >}}.
+- **Device** means any device that can access the Service such as a computer, a cellphone or a digital tablet.
+- **Service** refers to the Website.
+- **Terms and Conditions** (also referred to as "Terms") mean these Terms and Conditions that form the entire agreement between You and the Company regarding the use of the Service.
+- **Third-party Social Media Service** means any services or content (including data, information, products or services) provided by a third-party that may be displayed, included or made available by the Service.
+- **Website** refers to {{< param replacables.company_name >}}, accessible from [{{< param replacables.company_website >}}](https://www.privacypolicies.com/live/{{< param replacables.company_website >}})
+- **You** means the individual accessing or using the Service, or the company, or other legal entity on behalf of which such individual is accessing or using the Service, as applicable.
+
+### Acknowledgment
+
+These are the Terms and Conditions governing the use of this Service and the agreement that operates between You and the Company. These Terms and Conditions set out the rights and obligations of all users regarding the use of the Service.
+
+Your access to and use of the Service is conditioned on Your acceptance of and compliance with these Terms and Conditions. These Terms and Conditions apply to all visitors, users and others who access or use the Service.
+
+By accessing or using the Service You agree to be bound by these Terms and Conditions. If You disagree with any part of these Terms and Conditions then You may not access the Service.
+
+You represent that you are over the age of 18. The Company does not permit those under 18 to use the Service.
+
+Your access to and use of the Service is also conditioned on Your acceptance of and compliance with the Privacy Policy of the Company. Our Privacy Policy describes Our policies and procedures on the collection, use and disclosure of Your personal information when You use the Application or the Website and tells You about Your privacy rights and how the law protects You.
Please read Our Privacy Policy carefully before using Our Service. + +### Intellectual Property + +The Service and its original content (excluding Content provided by You or other users), features and functionality are and will remain the exclusive property of the Company and its licensors. + +The Service is protected by copyright, trademark, and other laws of both the Country and foreign countries. + +Our trademarks and trade dress may not be used in connection with any product or service without the prior written consent of the Company. + +### Links to Other Websites + +Our Service may contain links to third-party web sites or services that are not owned or controlled by the Company. + +The Company has no control over, and assumes no responsibility for, the content, privacy policies, or practices of any third party web sites or services. You further acknowledge and agree that the Company shall not be responsible or liable, directly or indirectly, for any damage or loss caused or alleged to be caused by or in connection with the use of or reliance on any such content, goods or services available on or through any such web sites or services. + +We strongly advise You to read the terms and conditions and privacy policies of any third-party web sites or services that You visit. + +### Termination + +We may terminate or suspend Your access immediately, without prior notice or liability, for any reason whatsoever, including without limitation if You breach these Terms and Conditions. + +Upon termination, Your right to use the Service will cease immediately. + +### Limitation of Liability + +Notwithstanding any damages that You might incur, the entire liability of the Company and any of its suppliers under any provision of this Terms and Your exclusive remedy for all of the foregoing shall be limited to the amount actually paid by You through the Service or 100 USD if You haven't purchased anything through the Service. 
+ +To the maximum extent permitted by applicable law, in no event shall the Company or its suppliers be liable for any special, incidental, indirect, or consequential damages whatsoever (including, but not limited to, damages for loss of profits, loss of data or other information, for business interruption, for personal injury, or loss of privacy arising out of or in any way related to the use of or inability to use the Service, third-party software and/or third-party hardware used with the Service, or otherwise in connection with any provision of these Terms), even if the Company or any supplier has been advised of the possibility of such damages and even if the remedy fails of its essential purpose. + +Some states do not allow the exclusion of implied warranties or limitation of liability for incidental or consequential damages, which means that some of the above limitations may not apply. In these states, each party's liability will be limited to the greatest extent permitted by law. + +### "AS IS" and "AS AVAILABLE" Disclaimer + +The Service is provided to You "AS IS" and "AS AVAILABLE" and with all faults and defects without warranty of any kind. To the maximum extent permitted under applicable law, the Company, on its own behalf and on behalf of its Affiliates and its and their respective licensors and service providers, expressly disclaims all warranties, whether express, implied, statutory or otherwise, with respect to the Service, including all implied warranties of merchantability, fitness for a particular purpose, title and non-infringement, and warranties that may arise out of course of dealing, course of performance, usage or trade practice. 
Without limitation to the foregoing, the Company provides no warranty or undertaking, and makes no representation of any kind that the Service will meet Your requirements, achieve any intended results, be compatible or work with any other software, applications, systems or services, operate without interruption, meet any performance or reliability standards or be error-free, or that any errors or defects can or will be corrected. + +Without limiting the foregoing, neither the Company nor any of the Company's providers makes any representation or warranty of any kind, express or implied: (i) as to the operation or availability of the Service, or the information, content, and materials or products included thereon; (ii) that the Service will be uninterrupted or error-free; (iii) as to the accuracy, reliability, or currency of any information or content provided through the Service; or (iv) that the Service, its servers, the content, or e-mails sent from or on behalf of the Company are free of viruses, scripts, trojan horses, worms, malware, timebombs or other harmful components. + +Some jurisdictions do not allow the exclusion of certain types of warranties or limitations on applicable statutory rights of a consumer, so some or all of the above exclusions and limitations may not apply to You. But in such a case the exclusions and limitations set forth in this section shall be applied to the greatest extent enforceable under applicable law. + +### Governing Law + +The laws of the Country, excluding its conflicts of law rules, shall govern these Terms and Your use of the Service. Your use of the Application may also be subject to other local, state, national, or international laws. + +### Dispute Resolution + +If You have any concern or dispute about the Service, You agree to first try to resolve the dispute informally by contacting the Company. 
+ +### For European Union (EU) Users + +If You are a European Union consumer, You will benefit from any mandatory provisions of the law of the country in which You are resident. + +### United States Legal Compliance + +You represent and warrant that (i) You are not located in a country that is subject to a United States government embargo, or that has been designated by the United States government as a "terrorist supporting" country, and (ii) You are not listed on any United States government list of prohibited or restricted parties. + +### Severability and Waiver + +#### Severability + +If any provision of these Terms is held to be unenforceable or invalid, such provision will be changed and interpreted to accomplish the objectives of such provision to the greatest extent possible under applicable law, and the remaining provisions will continue in full force and effect. + +#### Waiver + +Except as provided herein, the failure to exercise a right or to require performance of an obligation under these Terms shall not affect a party's ability to exercise such right or require such performance at any time thereafter, nor shall the waiver of a breach constitute a waiver of any subsequent breach. + +### Translation Interpretation + +These Terms and Conditions may have been translated if We have made them available to You on our Service. You agree that the original English text shall prevail in the case of a dispute. + +### Changes to These Terms and Conditions + +We reserve the right, at Our sole discretion, to modify or replace these Terms at any time. If a revision is material, We will make reasonable efforts to provide at least 30 days' notice prior to any new terms taking effect. What constitutes a material change will be determined at Our sole discretion. + +By continuing to access or use Our Service after those revisions become effective, You agree to be bound by the revised terms. 
If You do not agree to the new terms, in whole or in part, please stop using the website and the Service. + +### Contact Us + +If You have any questions about these Terms and Conditions, You can contact us: + +- By email: {{< param replacables.company_email >}} +- By phone number: {{< param replacables.company_phone >}} + +Terms and Conditions for {{< param replacables.company_name >}}