0.8.2 manual
AlexanderBand committed May 1, 2021
1 parent d9b598f commit 31f748e
Showing 21 changed files with 632 additions and 2,589 deletions.
109 changes: 109 additions & 0 deletions source/api.rst
.. _doc_krill_using_api:

Using the API
=============

The Krill API is a primarily JSON-based, REST-like HTTPS API with `bearer token
<https://swagger.io/docs/specification/authentication/bearer-authentication/>`_
based authentication.
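Concretely, every call is a plain HTTPS request carrying an
``Authorization: Bearer`` header. As a rough sketch using only the Python
standard library (the host name and token are placeholders, and
``/api/v1/cas`` is the CA listing endpoint shown in the CLI hints on this
page):

```python
# A minimal sketch of calling the Krill API directly, without any generated
# client library. Replace the host and token placeholders with your own.
import json
import urllib.request

def build_request(fqdn: str, token: str) -> urllib.request.Request:
    """Build an authenticated GET request for the list of CAs."""
    return urllib.request.Request(
        url=f"https://{fqdn}/api/v1/cas",
        headers={"Authorization": f"Bearer {token}"},
        method="GET",
    )

def list_cas(fqdn: str, token: str) -> dict:
    """Send the request and decode the JSON body of the response."""
    with urllib.request.urlopen(build_request(fqdn, token)) as response:
        return json.loads(response.read())

# Example (requires a reachable Krill instance):
# list_cas("krill.example.net", "correct-token-here")
```

The generated client library described further down wraps exactly this kind
of call behind typed helper methods.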

Getting Help
------------

- Consult the `Interactive API documentation <http://redocly.github.io/redoc/?url=https://raw.githubusercontent.com/NLnetLabs/krill/v0.8.1/doc/openapi.yaml>`_ (courtesy of `ReDoc <https://github.com/Redocly/redoc>`_)
- Follow the API links in the :ref:`Krill CLI documentation<doc_krill_cli>`, e.g. API Call: :krill_api:`GET /v1/cas <list_cas>`
- Check out the API hints built into the :ref:`Krill CLI<doc_krill_cli>`, e.g.

.. parsed-literal::

   $ :ref:`krillc list<cmd_krillc_list>` --api
   GET:
     https://<your.domain>/api/v1/cas
   Headers:
     Authorization: Bearer \*\*\*\*\*

Generating Client Code
----------------------

The `OpenAPI Generator <https://openapi-generator.tech/>`_ can generate Krill
API client code in many languages from the `Krill v0.7.3 OpenAPI specification
<https://github.com/NLnetLabs/krill/blob/v0.7.3/doc/openapi.yaml>`_.

Sample Application
------------------

Below is an example of how to write a small Krill client application in Python 3
using a Krill API client library produced by the OpenAPI Generator. To try out
this sample you'll need Docker and Python 3.

1. Save the following as :file:`/tmp/krill_test.py`, replacing the ``<YOUR XXX>``
values with the correct access token and domain name for your Krill server. This
example assumes that your Krill instance API endpoint is available on port 443
using a valid TLS certificate.

.. code-block:: python3

   # Import the OpenAPI generated Krill client library
   import krill_api
   from krill_api import *

   # Create a configuration for the client library telling it how to
   # connect to the Krill server
   krill_api_config = krill_api.Configuration()
   krill_api_config.access_token = '<YOUR KRILL API TOKEN>'
   krill_api_config.host = "https://{}/api/v1".format('<YOUR KRILL FQDN>')
   krill_api_config.verify_ssl = True
   krill_api_config.assert_hostname = False
   krill_api_config.cert_file = None

   # Create a Krill API client
   krill_api_client = krill_api.ApiClient(krill_api_config)

   # Get the client helper for the Certificate Authority set of Krill API
   # endpoints
   krill_ca_api = CertificateAuthoritiesApi(krill_api_client)

   # Query Krill for the list of configured CAs
   print(krill_ca_api.list_cas())

2. Run the following commands in a shell to generate a Krill client library:

.. code-block:: bash

   # prepare a working directory
   GENDIR=/tmp/gen
   VENVDIR=/tmp/venv
   mkdir -p $GENDIR

   # fetch the Krill OpenAPI specification document
   wget -O $GENDIR/openapi.yaml https://raw.githubusercontent.com/NLnetLabs/krill/v0.7.3/doc/openapi.yaml

   # use the OpenAPI Generator to generate a Krill client library from the
   # Krill OpenAPI specification
   docker run --rm -v $GENDIR:/local \
     openapitools/openapi-generator-cli generate \
     -i /local/openapi.yaml \
     -g python \
     -o /local/out \
     --additional-properties=packageName=krill_api

   # install the generated library where your Python 3 can find it
   python3 -m venv $VENVDIR
   source $VENVDIR/bin/activate
   pip3 install wheel
   pip3 install $GENDIR/out/

3. Run the sample application:

.. code-block:: bash

   $ python3 /tmp/krill_test.py
   {'cas': [{'handle': 'ca'}]}

.. Tip:: To learn more about using the generated client library, consult the
         documentation in ``$GENDIR/out/README.md``.

.. Warning::

Future improvements to the Krill OpenAPI specification may necessitate that
you re-generate your client library and possibly also alter your client
program to match any changed class and function names.
90 changes: 43 additions & 47 deletions source/architecture.rst
Used Disk Space
---------------

Krill stores all of its data under the ``DATA_DIR`` specified in the
configuration file. For users who will operate a CA under an RIR / NIR parent
the following sub-directories are relevant:

+-----------------+------------------------------------------------------------+
| Directory | Contents |
``data_dir/rfc8181`` and ``data_dir/rfc6492`` for storing all
protocol messages exchanged between your CAs and their parent
and repository. If they are still present on your system, you
can safely remove them and potentially save quite a bit of space.

Archiving
"""""""""
Saving State Changes
--------------------

You can skip this section if you're not interested in all the minute details. It
is intended to explain how backup and restore works in Krill, and why a standby
fail-over node can be used, but Krill's locking and storage mechanism needs to
be changed in order to make `multiple active nodes
<https://github.com/NLnetLabs/krill/issues/20>`_ work.

State changes in Krill are tracked using *events*. Krill CA(s) and Publication
Servers are versioned. They can only be changed by applying an *event* for a
specific version, meaning that their state can always be reconstituted by
applying all past events. This concept is called *event sourcing*, and the
versioned entities, your CAs and Publication Servers, are
so-called *aggregates*.

Events are not applied directly. Rather, users of Krill and background jobs will
send their intent to make a change through the API, which then translates this
into a so-called *command*. Krill will then lock the target aggregate and send
the command to it. This locking mechanism is not aware of any clustering, and
it's a primary reason why Krill cannot run as an active-active cluster yet.

Upon receiving a command the aggregate will do some work. In some cases a
command can have a side-effect. For example it may instruct your CA to create a
new key pair, after receiving entitlements from its parent. The key pair is
random — applying a command again would result in a new random key pair.
Remember that commands are not re-applied to aggregates, only their resulting
events are. Thus in this example there would be an event caused that contains
events are. Thus, in this example there would be an event caused that contains
the resulting key pair.
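The split between commands (processed once, with side effects) and events
(replayed deterministically) can be illustrated with a toy sketch. The names
and structure here are invented for illustration; Krill's real implementation
is in Rust and differs.

```python
# Toy illustration of the command/event flow described above: the side
# effect (key generation) happens once while processing the command, and
# only its *result* is recorded in the event that gets replayed.
import random

class CertAuth:
    """A versioned aggregate whose state is rebuilt by replaying events."""

    def __init__(self):
        self.version = 0
        self.key_pair = None

    def process(self, command):
        """Handle a command; any randomness happens here, exactly once."""
        if command == "roll-key":
            new_key = random.getrandbits(64)      # the side effect
            return [("key-rolled", new_key)]      # captured in the event
        return []

    def apply(self, event):
        """Apply a stored event; replaying events is deterministic."""
        kind, key = event
        if kind == "key-rolled":
            self.key_pair = key
        self.version += 1

# Replaying the same events always reconstitutes the same state.
events = CertAuth().process("roll-key")
a, b = CertAuth(), CertAuth()
for event in events:
    a.apply(event)
    b.apply(event)
assert a.key_pair == b.key_pair
```

Re-running ``process`` would produce a different random key, which is exactly
why only the resulting events, never the commands, are re-applied.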

After receiving the command, the aggregate will return one of the following:
1. An error

   For example the command is not allowed, or the aggregate it refers to does
   not exist.

   When Krill encounters such an error, it will store the command with some
   meta-information like the time the command was issued and a summary of the
   error, so that it can be seen in the history. It will then unlock the
   aggregate, so that the next command can be sent to it.
2. No error, zero events
3. No error, one or more events

   In this case Krill will now apply and persist the changes in the following
   order:

   * Each event is stored. If an event already exists for a version, then
     the update is aborted. Because Krill cannot run as a cluster, and
     it uses locking to ensure that updates are done in sequence, this will
     only fail on the first event if a user tried to issue concurrent updates
     to the same CA.
   * The command is stored with meta-information, such as
     the time that the command was executed. And when `multiple users
     <https://github.com/NLnetLabs/krill/issues/294>`_ will be supported,
     this will also include *who* made a change.
   * Finally, the version information file for the aggregate is updated to
     indicate its current version, and command sequence counter.
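The ordering above can be sketched as a small append-only store with one JSON
file per event version. The file names and directory layout here are invented
for illustration; Krill's actual on-disk format differs.

```python
# Sketch of the persistence order described above: store each event (abort
# on any version conflict), and only then update the version file.
import json
from pathlib import Path

def persist(aggregate_dir: Path, version: int, events: list) -> int:
    """Append events for an aggregate and return its new version."""
    aggregate_dir.mkdir(parents=True, exist_ok=True)
    for offset, event in enumerate(events):
        path = aggregate_dir / f"delta-{version + offset}.json"
        # An event already stored at this version means someone else
        # changed the aggregate concurrently: abort the whole update.
        if path.exists():
            raise RuntimeError(f"version {version + offset} already exists")
        path.write_text(json.dumps(event))
    # The version file is only written after every event is safely stored.
    new_version = version + len(events)
    (aggregate_dir / "version.json").write_text(json.dumps(new_version))
    return new_version
```

Because the version file is written last, a crash mid-update leaves surplus
event files but an older recorded version, which is exactly the situation the
startup checks described below can detect.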

.. Note:: Krill will crash, **by design**, if there is any failure in saving
          any of the above files to disk. If Krill cannot persist its state
          it should not try to carry on. It could lead to disjoints between
          in-memory and on-disk state that are impossible to fix. Therefore,
          crashing and forcing an operator to look at the system is the only
          sensible thing Krill can now do. Fortunately, this should not
          happen unless there is a serious system failure.

Loading State at Startup
------------------------

Krill will rebuild its internal state whenever it starts. If it finds that there
are surplus events or commands compared to the latest information state for any
of the aggregates, it will assume that they are present because either Krill
stopped in the middle of writing a transaction of changes to disk, or your
backup was taken in the middle of a transaction. Such surplus files are backed
up to a subdirectory called ``surplus`` under the relevant data directory, i.e.
``data_dir/pubd/0/surplus`` if you are using the Krill Publication Server and
``data_dir/cas/<your-ca-name>/surplus`` for each of your CAs.
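The startup sweep described above can be sketched as follows. As before, the
``delta-N.json``/``version.json`` file names are invented for this
illustration, not Krill's real layout.

```python
# Sketch of the startup check: any event files at or beyond the recorded
# version are treated as surplus and moved to a `surplus` subdirectory.
import json
from pathlib import Path

def sweep_surplus(aggregate_dir: Path) -> list:
    """Move events newer than the recorded version aside; return their names."""
    current = json.loads((aggregate_dir / "version.json").read_text())
    surplus_dir = aggregate_dir / "surplus"
    moved = []
    for path in sorted(aggregate_dir.glob("delta-*.json")):
        version = int(path.stem.split("-")[1])
        if version >= current:
            surplus_dir.mkdir(exist_ok=True)
            path.rename(surplus_dir / path.name)
            moved.append(path.name)
    return sorted(moved)
```

Nothing is deleted: the surplus files are kept aside so an operator can still
inspect what the interrupted transaction contained.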

.. _recover_state_startup:

Recover State at Startup
------------------------
Krill will perform extensive checks and attempt to recover its
state if:
* the environment variable: ``KRILL_FORCE_RECOVER`` is set
* the configuration file contains ``always_recover_data = true``

Under normal circumstances performing this recovery will not be necessary. It
can also take significant time due to all the checks performed. So, we do **not
recommend** forcing recovery when there is no data corruption.

Krill will try the following checks and recovery attempts:

You should always verify your ROAs and/or delegations to child CAs in such
cases.
Of course, it's best to avoid data corruption in the first place. Please monitor
available disk space, and make regular backups.

Backup and Restore
------------------

Backing up Krill is as simple as backing up its data directory. There is no need
to stop Krill during the backup. To restore, put back your data directory and
make sure that you refer to it in the configuration file that you use for your
Krill instance. As described above, if Krill finds that the backup contains an
incomplete transaction, it will fall back to the state prior to it.
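Since a backup is just a copy of the data directory, any archiving tool will
do. As a sketch (the paths here are examples, and ``backup_data_dir`` is a
helper invented for this illustration):

```python
# Snapshot the data directory into a gzipped tarball while Krill keeps
# running. If the copy catches a half-written transaction, Krill simply
# moves the surplus files aside on restore.
import shutil

def backup_data_dir(data_dir: str, dest: str) -> str:
    """Pack the data directory into <dest>.tar.gz and return its path."""
    return shutil.make_archive(dest, "gztar", root_dir=data_dir)

# Example:
# backup_data_dir("/var/lib/krill/data", "/backups/krill-data")
```

Remember that ``data_dir/ssl`` holds your private keys, so treat the
resulting archive with the same care as the live directory.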

.. Warning:: You may want to **encrypt** your backup, because the
             ``data_dir/ssl`` directory contains your private keys in clear
Krill will then perform the data migrations, rebuild its state, and then exit
before doing anything else.

.. Note:: Downgrading Krill data is not supported. Downgrading can only be
          achieved by installing a previous version of Krill and restoring a
          backup that matches this version.

.. _proxy_and_https:

Proxy and HTTPS
---------------

Krill uses HTTPS and refuses to do plain HTTP. By default Krill will generate a
2048 bit RSA key and self-signed certificate in ``/ssl`` in the data
directory when it is first started. Replacing the self-signed certificate with a
TLS certificate issued by a CA works, but has not been tested extensively. By
default Krill will only be available under ``https://localhost:3000``.
4 changes: 0 additions & 4 deletions source/docker.rst
.. _doc_krill_running_docker:

Running with Docker
===================

