Let's see how easy it is to use Eliot.
To install Eliot and the other tools we'll use in this example, run the following in your shell:
$ pip install eliot eliot-tree requests
You can also install it using Conda:
$ conda install -c conda-forge eliot eliot-tree requests
This will install:
- Eliot itself.
- ``eliot-tree``, a tool that lets you visualize Eliot logs easily.
- ``requests``, an HTTP client library we'll use in the example code below. You don't need it for real Eliot usage, though.
We're going to add logging code to the following script, which checks if a list of links are valid URLs:
import requests

def check_links(urls):
    for url in urls:
        try:
            response = requests.get(url)
            response.raise_for_status()
        except Exception as e:
            raise ValueError(str(e))

try:
    check_links(["http://eliot.readthedocs.io", "http://nosuchurl"])
except ValueError:
    print("Not all links were valid.")
To add logging to this program, we do two things:
- Tell Eliot to log messages to a file called "linkcheck.log" by using ``eliot.to_file()``.
- Create two actions using ``eliot.start_action()``. Actions succeed when the ``eliot.start_action()`` context manager finishes successfully, and fail when an exception is raised.
.. literalinclude:: ../../examples/linkcheck.py
   :emphasize-lines: 2,3,7,10
Let's run the code:
$ python linkcheck.py
Not all links were valid.
We can see the resulting log file is composed of JSON messages, one per line:
$ cat linkcheck.log
{"action_status": "started", "task_uuid": "b1cb58cf-2c2f-45c0-92b2-838ac00b20cc", "task_level": [1], "timestamp": 1509136967.2066844, "action_type": "check_links", "urls": ["http://eliot.readthedocs.io", "http://nosuchurl"]}
...
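Because each line is an ordinary JSON object, you can inspect messages with nothing but the standard library. A quick sketch, using the first log line shown above:

```python
import json

# The first line of linkcheck.log, copied from the output above:
line = (
    '{"action_status": "started", "task_uuid": "b1cb58cf-2c2f-45c0-92b2-838ac00b20cc", '
    '"task_level": [1], "timestamp": 1509136967.2066844, "action_type": "check_links", '
    '"urls": ["http://eliot.readthedocs.io", "http://nosuchurl"]}'
)
message = json.loads(line)
print(message["action_type"])    # check_links
print(message["action_status"])  # started
print(message["task_level"])     # [1] -- the message's position within the task
```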
So far these logs seem similar to the output of regular logging systems: individual isolated messages.
But unlike those logging systems, Eliot produces logs that can be reconstructed into a tree, for example using the ``eliot-tree`` utility:
$ eliot-tree linkcheck.log
b1cb58cf-2c2f-45c0-92b2-838ac00b20cc
└── check_links/1 ⇒ started
    ├── timestamp: 2017-10-27 20:42:47.206684
    ├── urls:
    │   ├── 0: http://eliot.readthedocs.io
    │   └── 1: http://nosuchurl
    ├── download/2/1 ⇒ started
    │   ├── timestamp: 2017-10-27 20:42:47.206933
    │   ├── url: http://eliot.readthedocs.io
    │   └── download/2/2 ⇒ succeeded
    │       └── timestamp: 2017-10-27 20:42:47.439203
    ├── download/3/1 ⇒ started
    │   ├── timestamp: 2017-10-27 20:42:47.439412
    │   ├── url: http://nosuchurl
    │   └── download/3/2 ⇒ failed
    │       ├── errno: None
    │       ├── exception: requests.exceptions.ConnectionError
    │       ├── reason: HTTPConnectionPool(host='nosuchurl', port=80): Max retries exceeded with url: / (Caused by NewConnec…
    │       └── timestamp: 2017-10-27 20:42:47.457133
    └── check_links/4 ⇒ failed
        ├── exception: builtins.ValueError
        ├── reason: HTTPConnectionPool(host='nosuchurl', port=80): Max retries exceeded with url: / (Caused by NewConnec…
        └── timestamp: 2017-10-27 20:42:47.457332
Notice how:
- Eliot tells you which actions succeeded and which failed.
- Failed actions record their exceptions.
- You can see just from the logs that the ``check_links`` action caused the ``download`` action.
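The tree can be reconstructed because every message shares a ``task_uuid`` and carries a ``task_level`` list giving its position within the task. A minimal sketch of grouping and ordering messages this way (using hand-written sample messages that mimic the log above, not the real log file):

```python
import json
from collections import defaultdict

# Hand-written sample messages mimicking the structure of linkcheck.log:
log_lines = [
    '{"task_uuid": "b1cb58cf", "task_level": [1], "action_type": "check_links", "action_status": "started"}',
    '{"task_uuid": "b1cb58cf", "task_level": [2, 1], "action_type": "download", "action_status": "started"}',
    '{"task_uuid": "b1cb58cf", "task_level": [2, 2], "action_type": "download", "action_status": "succeeded"}',
    '{"task_uuid": "b1cb58cf", "task_level": [4], "action_type": "check_links", "action_status": "failed"}',
]

# Group messages by task, then sort by task_level: lists sort
# lexicographically, so [1] < [2, 1] < [2, 2] < [4] gives tree order.
tasks = defaultdict(list)
for line in log_lines:
    message = json.loads(line)
    tasks[message["task_uuid"]].append(message)

for task_uuid, messages in tasks.items():
    print(task_uuid)
    for m in sorted(messages, key=lambda m: m["task_level"]):
        level = "/".join(str(part) for part in m["task_level"])
        indent = "    " * (len(m["task_level"]) - 1)
        print(f"{indent}{m['action_type']}/{level} => {m['action_status']}")
```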
You can learn more by reading the rest of the documentation, including:
- The :doc:`motivation behind Eliot <introduction>`.
- How to generate :doc:`actions <generating/actions>`, :doc:`standalone messages <generating/messages>`, and :doc:`handle errors <generating/errors>`.
- How to integrate or migrate your :doc:`existing stdlib logging messages <generating/migrating>`.
- How to output logs :doc:`to a file or elsewhere <outputting/output>`.
- Using :doc:`asyncio or Trio coroutines <generating/asyncio>`, :doc:`threads and processes <generating/threads>`, or :doc:`Twisted <generating/twisted>`.
- Using Eliot for :doc:`scientific computing <scientific-computing>`.