Import data workflow
====================

This document describes the import data workflow in detail, with hooks that enable customization of the import process. The central aspect of the import process is a resource's ~import_export.resources.Resource.import_data method, which is explained below.

import_data(dataset, dry_run=False, raise_errors=False)
--------------------------------------------------------

The ~import_export.resources.Resource.import_data method of ~import_export.resources.Resource is responsible for importing data from a given dataset.

dataset is required and expected to be a tablib.Dataset with a header row.

dry_run is a Boolean which determines if changes to the database are made or if the import is only simulated. It defaults to False.

raise_errors is a Boolean. If True, errors encountered during the import are raised. The default is False, which means that any errors and their tracebacks are saved in the Result instance instead.
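
For illustration, here is a minimal sketch of a simulated import followed by a real one. BookResource, myapp, and the column names are assumptions made for this example:

.. code-block:: python

    import tablib

    from myapp.resources import BookResource  # hypothetical resource

    # The first row of a tablib.Dataset acts as the header row.
    dataset = tablib.Dataset(headers=['id', 'name'])
    dataset.append(['1', 'Some book'])

    resource = BookResource()

    # Simulate the import first; errors end up on the Result instance.
    result = resource.import_data(dataset, dry_run=True)

    if not result.has_errors():
        # Run the import for real.
        resource.import_data(dataset, dry_run=False)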

This is what happens when the method is invoked:

  1. First, a new ~import_export.results.Result instance, which holds errors and other information gathered during the import, is initialized.

    Then, an ~import_export.instance_loaders.InstanceLoader responsible for loading existing instances is initialized. A different ~import_export.instance_loaders.BaseInstanceLoader can be specified via the instance_loader_class attribute of ~import_export.resources.ResourceOptions. A ~import_export.instance_loaders.CachedInstanceLoader can be used to reduce the number of database queries (see the sketch after this list). See the source for available implementations.

  2. The ~import_export.resources.Resource.before_import hook is called. By implementing this method in your resource, you can customize the import process.
  3. Each row of the to-be-imported dataset is processed according to the following steps:

    1. The ~import_export.resources.Resource.before_import_row hook is called to allow row data to be modified before it is imported.

    2. ~import_export.resources.Resource.get_or_init_instance is called with the current ~import_export.instance_loaders.BaseInstanceLoader and the current row of the dataset, returning an instance and a Boolean indicating whether the instance is newly created.

      If no object can be found for the current row, ~import_export.resources.Resource.init_instance is invoked to initialize an object.

      As always, you can override the implementation of ~import_export.resources.Resource.init_instance to customize how the new object is created (e.g. to set default values).

    3. ~import_export.resources.Resource.for_delete is called to determine whether the passed instance should be deleted. If so, the import process for the current row stops at this point.
    4. If the instance was not deleted in the previous step, ~import_export.resources.Resource.import_obj is called with the instance as the current object, the row as the current row, and the dry_run flag.

      ~import_export.resources.Resource.import_field is called for each field of the ~import_export.resources.Resource, skipping many-to-many fields. Many-to-many fields are skipped because they require the instance to have a primary key, so their assignment is postponed until the object has been saved.

      ~import_export.resources.Resource.import_field in turn calls ~import_export.fields.Field.save if Field.attribute is set and Field.column_name exists in the given row.

    5. It is then determined whether the newly imported object differs from the one already present and, therefore, whether the given row should be skipped. This is handled by calling ~import_export.resources.Resource.skip_row with original as the original object and instance as the current object from the dataset.

      If the current row is to be skipped, row_result.import_type is set to IMPORT_TYPE_SKIP.

    6. If the current row is not to be skipped, ~import_export.resources.Resource.save_instance is called and actually saves the instance when dry_run is not set.

      There are two hook methods (that by default do nothing) giving you the option to customize the import process (see the sketch after this list):

      • ~import_export.resources.Resource.before_save_instance
      • ~import_export.resources.Resource.after_save_instance

      Both methods receive instance and dry_run arguments.

    7. ~import_export.resources.Resource.save_m2m is called to save many-to-many fields.
    8. ~import_export.results.RowResult is assigned a diff between the original and the imported object fields, as well as an import_type attribute which states whether the row is new, updated, skipped or deleted.

      If an exception is raised during row processing and ~import_export.resources.Resource.import_data was invoked with raise_errors=False (which is the default), the traceback is appended to the ~import_export.results.RowResult as well.

      If the row was not skipped, or if the ~import_export.resources.Resource is configured to report skipped rows, the ~import_export.results.RowResult is appended to the ~import_export.results.Result.

    9. The ~import_export.resources.Resource.after_import_row hook is called.
  4. The ~import_export.results.Result is returned.
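
Pulling these hooks together, here is a minimal sketch of a customized resource, as referenced in the steps above. The Book model, its name field, and the imported flag are assumptions made for this example; the hook signatures follow the steps described above:

.. code-block:: python

    from import_export import resources
    from import_export.instance_loaders import CachedInstanceLoader

    from myapp.models import Book  # hypothetical model


    class BookResource(resources.ModelResource):

        class Meta:
            model = Book
            # Cache existing instances up front instead of issuing one
            # database query per row (step 1).
            instance_loader_class = CachedInstanceLoader

        def before_import_row(self, row, **kwargs):
            # Modify row data before it is imported (step 3.1).
            row['name'] = row['name'].strip()

        def init_instance(self, row=None):
            # Customize how new objects are created (step 3.2),
            # e.g. by setting default values.
            instance = super().init_instance(row)
            instance.imported = True  # hypothetical field
            return instance

        def skip_row(self, instance, original):
            # Skip rows whose name is unchanged (step 3.5).
            return instance.name == original.name

        def before_save_instance(self, instance, dry_run):
            # Called just before each instance is saved (step 3.6);
            # does nothing by default.
            pass

Note that if skip_unchanged = True is set in Meta, the default skip_row implementation already compares field values, so an override like the one above is only needed for custom skip logic.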

Transaction support
-------------------

If transaction support is enabled, the whole import process is wrapped in a transaction, which is rolled back or committed as appropriate. All methods called from inside import_data (create / delete / update) receive False for the dry_run argument.
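
As a sketch of the above, transaction use can typically be enabled per resource via the use_transactions option of ~import_export.resources.ResourceOptions (the Book model is again an assumption):

.. code-block:: python

    from import_export import resources

    from myapp.models import Book  # hypothetical model


    class BookResource(resources.ModelResource):

        class Meta:
            model = Book
            # Wrap the whole import in a database transaction; a dry run
            # is then realized by rolling the transaction back.
            use_transactions = True

Because the rollback itself undoes a dry run, the inner create / delete / update calls can safely run with dry_run set to False.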