Cleaned up format warnings when building sphinx-guides [ref #6997] #6998

Merged 5 commits on Jul 1, 2020
2 changes: 1 addition & 1 deletion doc/sphinx-guides/source/admin/troubleshooting.rst
@@ -20,7 +20,7 @@ There are several types of dataset locks. Locks can be managed using the locks A

It's normal for the ingest process described in the :doc:`/user/tabulardataingest/ingestprocess` section of the User Guide to take some time, but if hours or days have passed and the dataset is still locked, you might want to inspect the locks and consider deleting some or all of them. It is recommended to restart the application server if you are deleting an ingest lock, to make sure the ingest job is no longer running in the background. Ingest locks are identified by the label ``Ingest`` in the ``reason`` column of the ``DatasetLock`` table in the database.

A dataset is locked with a lock of type ``finalizePublication`` while the persistent identifiers for the datafiles in the dataset are registered or updated, and/or while the physical files are being validated by recalculating the checksums and verifying them against the values stored in the database, before the publication process can be completed (Note that either of the two tasks can be disabled via database options - see :doc:`config`). If a dataset has been in this state for a long period of time, for hours or longer, it is somewhat safe to assume that it is stuck (for example, the process may have been interrupted by an application server restart, or a system crash), so you may want to remove the lock (to be safe, do restart the application server, to ensure that the job is no longer running in the background) and advise the user to try publishing again. See :doc:`dataverses-datasets` for more information on publishing.
A dataset is locked with a lock of type ``finalizePublication`` while the persistent identifiers for the datafiles in the dataset are registered or updated, and/or while the physical files are being validated by recalculating the checksums and verifying them against the values stored in the database, before the publication process can be completed (Note that either of the two tasks can be disabled via database options - see :doc:`/installation/config`). If a dataset has been in this state for a long period of time, for hours or longer, it is somewhat safe to assume that it is stuck (for example, the process may have been interrupted by an application server restart, or a system crash), so you may want to remove the lock (to be safe, do restart the application server, to ensure that the job is no longer running in the background) and advise the user to try publishing again. See :doc:`dataverses-datasets` for more information on publishing.

If any files in the dataset fail the validation above the dataset will be left locked with a ``DatasetLock.Reason=FileValidationFailed``. The user will be notified that they need to contact their Dataverse support in order to address the issue before another attempt to publish can be made. The admin will have to address and fix the underlying problems (by either restoring the missing or corrupted files, or by purging the affected files from the dataset) before deleting the lock and advising the user to try to publish again. The goal of the validation framework is to catch these types of conditions while the dataset is still in DRAFT.
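For reference, locks can be inspected and removed through the native API; a minimal sketch, assuming a superuser API token and a hypothetical dataset database id of 42, and assuming the locks endpoint supports deletion by lock type as described in the API Guide:

.. code-block:: bash

    # list all locks currently held on the dataset
    curl -H "X-Dataverse-key: $SUPERUSER_API_TOKEN" http://localhost:8080/api/datasets/42/locks

    # remove only the Ingest lock, then restart the application server as advised above
    curl -X DELETE -H "X-Dataverse-key: $SUPERUSER_API_TOKEN" "http://localhost:8080/api/datasets/42/locks?type=Ingest"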

2 changes: 1 addition & 1 deletion doc/sphinx-guides/source/api/apps.rst
@@ -1,7 +1,7 @@
Apps
====

The introduction of Dataverse APIs has fostered the development of a variety of software applications that are listed in the :doc:`/admin/integrations`, :doc:`/admin/external-tools`, and :doc:`/admin/reporting-tools` sections of the Admin Guide.
The introduction of Dataverse APIs has fostered the development of a variety of software applications that are listed in the :doc:`/admin/integrations`, :doc:`/admin/external-tools`, and :doc:`/admin/reporting-tools-and-queries` sections of the Admin Guide.

The apps below are open source and demonstrate how to use Dataverse APIs. Some of these apps are built on :doc:`/api/client-libraries` that are available for Dataverse APIs in Python, Javascript, R, and Java.
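As a minimal illustration of the kind of call these apps build on, the Search API can be queried with plain curl (the host below is only an example installation):

.. code-block:: bash

    # search a Dataverse installation for datasets matching a keyword
    curl "https://demo.dataverse.org/api/search?q=trees&type=dataset"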

4 changes: 2 additions & 2 deletions doc/sphinx-guides/source/developers/testing.rst
@@ -125,7 +125,7 @@ different people. For our purposes, an integration test can have two flavors:
- Operate on an installation of Dataverse that is running and able to talk to both PostgreSQL and Solr.
- Written using REST Assured.

2. Be a `Testcontainers <https://testcontainers.org>`_ Test:
2. Be a `Testcontainers <https://testcontainers.org>`__ Test:

- Operates any dependencies via the Testcontainers API, using containers.
- Written as a JUnit test, using all things necessary to test.
@@ -258,7 +258,7 @@ Writing and Using a Testcontainers Test
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Most scenarios of integration testing involve having dependent services running.
This is where `Testcontainers <https://www.testcontainers.org>`_ kicks in by
This is where `Testcontainers <https://www.testcontainers.org>`__ kicks in by
providing a JUnit interface to drive them before and after executing your tests.

Test scenarios are endless. Some examples are migration scripts, persistence,
37 changes: 20 additions & 17 deletions doc/sphinx-guides/source/installation/config.rst
@@ -250,7 +250,7 @@ Dataverse can alternately store files in a Swift or S3-compatible object store,
The following sections describe how to set up various types of stores and how to configure for multiple stores.

Multi-store Basics
+++++++++++++++++
++++++++++++++++++

To support multiple stores, Dataverse now requires an id, type, and label for each store (even for a single store configuration). These are configured by defining two required jvm options:

@@ -259,16 +259,16 @@ To support multiple stores, Dataverse now requires an id, type, and label for ea
./asadmin $ASADMIN_OPTS create-jvm-options "\-Ddataverse.files.<id>.type=<type>"
./asadmin $ASADMIN_OPTS create-jvm-options "\-Ddataverse.files.<id>.label=<label>"

Out of the box, Dataverse is configured to use local file storage in the 'file' store by default. You can add additional stores and, as a superuser, configure specific dataverses to use them (by editing the 'General Information' for the dataverse as described in the :doc:`dataverses-datasets` section).
Out of the box, Dataverse is configured to use local file storage in the 'file' store by default. You can add additional stores and, as a superuser, configure specific dataverses to use them (by editing the 'General Information' for the dataverse as described in the :doc:`/admin/dataverses-datasets` section).

Note that the "\-Ddataverse.files.directory", if defined, continues to control where temporary files are stored (in the /temp subdir of that directory), independent of the location of any 'file' store defined above.

If you wish to change which store is used by default, you'll need to delete the existing default storage driver and set a new one using jvm options.

.. code-block::
.. code-block:: none

./asadmin $ASADMIN_OPTS delete-jvm-options "-Ddataverse.files.storage-driver-id=file"
./asadmin $ASADMIN_OPTS create-jvm-options "-Ddataverse.files.storage-driver-id=<id>"
./asadmin $ASADMIN_OPTS delete-jvm-options "-Ddataverse.files.storage-driver-id=file"
./asadmin $ASADMIN_OPTS create-jvm-options "-Ddataverse.files.storage-driver-id=<id>"

It is also possible to set maximum file upload size limits per store. See the :ref:`:MaxFileUploadSizeInBytes` setting below.
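Putting the pieces together, a hypothetical additional file store with id ``archival`` might be defined like this (the ``directory`` option is an assumption for the ``file`` store type, and the path is a placeholder to adjust):

.. code-block:: bash

    ./asadmin $ASADMIN_OPTS create-jvm-options "\-Ddataverse.files.archival.type=file"
    ./asadmin $ASADMIN_OPTS create-jvm-options "\-Ddataverse.files.archival.label=Archival"
    # directory for this store; assumed option name, adjust the path to your environment
    ./asadmin $ASADMIN_OPTS create-jvm-options "\-Ddataverse.files.archival.directory=/usr/local/dvn/archival"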

@@ -416,23 +416,23 @@ Please make note of the following details:

- **Endpoint URL** - consult the documentation of your service on how to find it.

* Example: https://play.minio.io:9000
* Example: https://play.minio.io:9000

- **Region:** Optional, but some services might use it. Consult your service documentation.

* Example: *us-east-1*
* Example: *us-east-1*

- **Access key ID and secret access key:** Usually you can generate access keys within the user profile of your service.

* Example:
* Example:

- ID: *Q3AM3UQ867SPQQA43P2F*
- ID: *Q3AM3UQ867SPQQA43P2F*

- Key: *zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG*
- Key: *zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG*

- **Bucket name:** Dataverse will fail opening and uploading files on S3 if you don't create one.

* Example: *dataverse*
* Example: *dataverse*
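As a sketch of where the example access key and secret above typically end up, the standard AWS credentials file (covered in the next section) could be written like this, in the home directory of the user running the application server:

.. code-block:: bash

    mkdir -p ~/.aws
    cat > ~/.aws/credentials <<EOF
    [default]
    aws_access_key_id = Q3AM3UQ867SPQQA43P2F
    aws_secret_access_key = zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG
    EOF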

Manually Set Up Credentials File
################################
@@ -494,7 +494,8 @@ Second: Configure Dataverse to use S3 Storage
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To set up an S3 store, you must define the id, type, and label as for any store:
.. code-block:: none

.. code-block:: bash

./asadmin $ASADMIN_OPTS create-jvm-options "\-Ddataverse.files.<id>.type=s3"
./asadmin $ASADMIN_OPTS create-jvm-options "\-Ddataverse.files.<id>.label=<label>"
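Beyond the id, type, and label, the store also needs to know its bucket; a sketch using the ``bucket-name`` option from the table below, with the example bucket name ``dataverse``:

.. code-block:: bash

    ./asadmin $ASADMIN_OPTS create-jvm-options "\-Ddataverse.files.<id>.bucket-name=dataverse"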
@@ -526,9 +527,9 @@ Lastly, go ahead and restart your Payara server. With Dataverse deployed and the
S3 Storage Options
##################

========================================= ================== ================================================================== =============
=========================================== ================== ================================================================== =============
JVM Option Value Description Default value
========================================= ================== ================================================================== =============
=========================================== ================== ================================================================== =============
dataverse.files.storage-driver-id <id> Enable <id> as the default storage driver. ``file``
dataverse.files.<id>.bucket-name <?> The bucket name. See above. (none)
dataverse.files.<id>.download-redirect ``true``/``false`` Enable direct download or proxy through Dataverse. ``false``
@@ -540,7 +541,7 @@ dataverse.files.<id>.custom-endpoint-region <?> Only used when
dataverse.files.<id>.path-style-access ``true``/``false`` Use path style buckets instead of subdomains. Optional. ``false``
dataverse.files.<id>.payload-signing ``true``/``false`` Enable payload signing. Optional ``false``
dataverse.files.<id>.chunked-encoding ``true``/``false`` Disable chunked encoding. Optional ``true``
========================================= ================== ================================================================== =============
=========================================== ================== ================================================================== =============
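As a usage sketch, a store pointed at an S3-compatible service such as the MinIO endpoint shown earlier would typically combine the custom endpoint and path-style options (the ``custom-endpoint-url`` option name is referenced by the region option above; all values are illustrative only):

.. code-block:: bash

    ./asadmin $ASADMIN_OPTS create-jvm-options "\-Ddataverse.files.<id>.custom-endpoint-url=https://play.minio.io:9000"
    ./asadmin $ASADMIN_OPTS create-jvm-options "\-Ddataverse.files.<id>.path-style-access=true"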

Reported Working S3-Compatible Storage
######################################
@@ -1068,7 +1069,7 @@ See also these related database settings below:
- :ref:`:Authority`
- :ref:`:Shoulder`

.. _doi.baseurlstringnext
.. _doi.baseurlstringnext:

doi.baseurlstringnext
+++++++++++++++++++++
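A hedged sketch of setting it, assuming it follows the same JVM-option pattern as ``doi.baseurlstring`` and using the DataCite test MDS endpoint purely as an example value:

.. code-block:: bash

    ./asadmin $ASADMIN_OPTS create-jvm-options "\-Ddoi.baseurlstringnext=https://mds.test.datacite.org"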
@@ -1579,7 +1580,7 @@ Note that this will override the default behaviour for the "Support" menu option
:MetricsUrl
+++++++++++

Make the metrics component on the root dataverse a clickable link to a website where you present metrics on your Dataverse installation, perhaps one of the community-supported tools mentioned in the :doc:`/admin/reporting-tools` section of the Admin Guide.
Make the metrics component on the root dataverse a clickable link to a website where you present metrics on your Dataverse installation, perhaps one of the community-supported tools mentioned in the :doc:`/admin/reporting-tools-and-queries` section of the Admin Guide.

``curl -X PUT -d http://metrics.dataverse.example.edu http://localhost:8080/api/admin/settings/:MetricsUrl``

@@ -1599,6 +1600,8 @@ Alongside the ``:StatusMessageHeader`` you need to add StatusMessageText for the

``curl -X PUT -d "This appears in a popup." http://localhost:8080/api/admin/settings/:StatusMessageText``
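For context, the companion ``:StatusMessageHeader`` setting referenced above follows the same settings API pattern (the message text is only an example):

``curl -X PUT -d "System maintenance this weekend" http://localhost:8080/api/admin/settings/:StatusMessageHeader``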

.. _:MaxFileUploadSizeInBytes:

:MaxFileUploadSizeInBytes
+++++++++++++++++++++++++
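As a sketch, this setting follows the same settings API pattern as the options above, e.g. a 2 GB limit (the value is only an example):

``curl -X PUT -d 2147483648 http://localhost:8080/api/admin/settings/:MaxFileUploadSizeInBytes``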

8 changes: 4 additions & 4 deletions doc/sphinx-guides/source/installation/prerequisites.rst
@@ -211,7 +211,7 @@ Solr launches asynchronously and attempts to use the ``lsof`` binary to watch fo

# yum install lsof

Finally, you need to tell Solr to create the core "collection1" on startup:
Finally, you need to tell Solr to create the core "collection1" on startup::

echo "name=collection1" > /usr/local/solr/solr-7.7.2/server/solr/collection1/core.properties

@@ -243,15 +243,15 @@ It is **very important** not to allow direct access to the Solr API from outside

If you're running your Dataverse instance across multiple service hosts you'll want to remove the jetty.host argument (``-j jetty.host=127.0.0.1``) from the startup command line, but make sure Solr is behind a firewall and only accessible by the Dataverse web application host(s), by specific ip address(es).
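A hedged sketch of such a firewall rule with iptables, assuming Solr's default port 8983 and a single Dataverse application host at 10.0.0.5 (both placeholders to adjust):

.. code-block:: bash

    # allow only the Dataverse application host to reach Solr, drop everything else
    iptables -A INPUT -p tcp --dport 8983 -s 10.0.0.5 -j ACCEPT
    iptables -A INPUT -p tcp --dport 8983 -j DROP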

We additionally recommend that the Solr service account's shell be disabled, as it isn't necessary for daily operation:
We additionally recommend that the Solr service account's shell be disabled, as it isn't necessary for daily operation::

# usermod -s /sbin/nologin solr

For Solr upgrades or further configuration you may temporarily re-enable the service account shell:
For Solr upgrades or further configuration you may temporarily re-enable the service account shell::

# usermod -s /bin/bash solr

or simply prepend each command you would run as the Solr user with "sudo -u solr":
or simply prepend each command you would run as the Solr user with "sudo -u solr"::

# sudo -u solr command
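As a usage example of the ``sudo -u solr`` pattern, checking Solr's status with the installation path used earlier in this guide:

.. code-block:: bash

    sudo -u solr /usr/local/solr/solr-7.7.2/bin/solr status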
