updated documentation
matthewhegarty committed Jan 10, 2024
1 parent 4467dd9 commit 71fce64
Showing 4 changed files with 16 additions and 11 deletions.
docs/advanced_usage.rst: 8 additions & 6 deletions
@@ -243,17 +243,18 @@ to ``True``, which will mean the process will exit at the first row which has er
     resource = BookResource()
     self.resource.import_data(self.dataset, raise_errors=True)
 
-The above process will exit with a row number and error::
+The above process will exit with a row number and error (formatted for clarity)::
 
-    import_export.exceptions.RowError: 2: {'published': ['Value could not be parsed using defined date formats.']}
+    ImportError: 2: {'published': ['Value could not be parsed using defined date formats.']}
+    (OrderedDict({'id': 2, 'name': 'The Hobbit', 'published': 'x'}))
 
 To iterate over all validation errors produced from an import, pass ``False`` to ``raise_errors``::
 
     result = self.resource.import_data(self.dataset, raise_errors=False)
     for row in result.invalid_rows:
         print(f"--- row {row.number} ---")
         for field, error in row.error.error_dict.items():
-            print(f"{field}: {error}")
+            print(f"{field}: {error} ({row.values})")
 
 If using the :ref:`Admin UI<admin-integration>`, errors are presented to the user during import (see below).
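A minimal sketch of catching the raised error programmatically, assuming the ``ImportError`` shown in the output above is ``import_export.exceptions.ImportError`` (whose ``error``, ``number`` and ``row`` attributes appear in the ``exceptions.py`` change below), and that ``dataset`` is a tablib dataset already loaded with book rows::

    from import_export.exceptions import ImportError as RowImportError

    resource = BookResource()
    try:
        resource.import_data(dataset, raise_errors=True)
    except RowImportError as e:
        # number identifies the failing row, error holds the underlying
        # validation error, and row contains the offending row data
        print(f"row {e.number} failed: {e.error}")
        print(f"row values: {e.row}")

The alias avoids shadowing Python's built-in ``ImportError``.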

@@ -281,17 +282,18 @@ The ``raise_errors`` parameter can be used during programmatic import to halt th
         raise_errors=True
     )
 
-The above process will exit with a row number and error::
+The above process will exit with a row number and error (formatted for clarity)::
 
-    import_export.exceptions.RowError: 2: [<class 'decimal.ConversionSyntax'>]
+    ImportError: 1: [<class 'decimal.ConversionSyntax'>]
+    (OrderedDict({'id': 1, 'name': 'Lord of the Rings', 'price': '1x'}))
 
 To iterate over all generic errors produced from an import, pass ``False`` to ``raise_errors``::
 
     result = self.resource.import_data(self.dataset, raise_errors=False)
     for row in result.error_rows:
         print(f"--- row {row.number} ---")
         for field, error in row.error.error_dict.items():
-            print(f"{field}: {error}")
+            print(f"{field}: {error} ({error.row})")
 
 Field level validation
 ----------------------
import_export/exceptions.py: 1 addition & 1 deletion
@@ -23,4 +23,4 @@ def __init__(self, error, number=None, row=None):
         self.row = row
 
     def __str__(self):
-        return f"{self.number}: {self.error}"
+        return f"{self.number}: {self.error} ({self.row})"
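The effect of the ``__str__`` change can be seen by constructing the exception directly, using the ``__init__`` signature shown in the hunk above; a minimal sketch (the Django ``ValidationError`` is used purely for illustration)::

    from collections import OrderedDict

    from django.core.exceptions import ValidationError
    from import_export.exceptions import ImportError as RowImportError

    err = RowImportError(
        ValidationError({"published": ["Value could not be parsed using defined date formats."]}),
        number=2,
        row=OrderedDict({"id": 2, "name": "The Hobbit", "published": "x"}),
    )
    # __str__ now appends the offending row after the row number and error
    print(err)
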
import_export/resources.py: 6 additions & 3 deletions
@@ -793,9 +793,12 @@ def import_data(
         :param use_transactions: If ``True`` the import process will be processed
             inside a transaction.
 
-        :param collect_failed_rows: If ``True`` the import process will collect
-            failed rows. This can be useful for debugging purposes but will cause
-            higher memory usage for larger datasets.
+        :param collect_failed_rows:
+            If ``True`` the import process will create a new dataset object comprising
+            failed rows and errors.
+            This can be useful for debugging purposes but will cause higher memory usage
+            for larger datasets.
+            See :attr:`~import_export.results.Result.failed_dataset`.
 
         :param rollback_on_validation_errors: If both ``use_transactions`` and
             ``rollback_on_validation_errors`` are set to ``True``, the import process will
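A minimal usage sketch for ``collect_failed_rows``, assuming ``BookResource`` and ``dataset`` as in the documentation above, and that ``Result.failed_dataset`` is the tablib ``Dataset`` initialised in ``results.py`` below (so it can be exported like any other tablib dataset)::

    resource = BookResource()
    result = resource.import_data(
        dataset,
        collect_failed_rows=True,
        raise_errors=False,
    )
    if result.has_errors():
        # failed rows plus their associated errors, as a tablib Dataset
        print(result.failed_dataset.export("csv"))
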
import_export/results.py: 1 addition & 1 deletion
@@ -170,7 +170,7 @@ def __init__(self, *args, **kwargs):
         self.invalid_rows = []
         #: The collection of rows which had generic errors.
         self.error_rows = []
-        #: A custom Dataset containing only failed rows.
+        #: A custom Dataset containing only failed rows and associated errors.
         self.failed_dataset = Dataset()
         self.totals = OrderedDict(
             [
