These APIs return as soon as possible, deferring any blocking wait until the last possible moment. Even so, they can block for a long time while awaiting results.
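A minimal, self-contained sketch of this deferred-wait pattern (LazyJob is an illustrative stand-in written for this example, not an SDK class):

```python
class LazyJob:
    """Illustrative stand-in (not an SDK class) for the deferred-wait
    pattern: creating the job returns immediately; the blocking wait
    happens only when a result is first accessed."""

    def __init__(self, compute):
        self._compute = compute  # the potentially slow call
        self._result = None
        self._done = False

    @property
    def result(self):
        if not self._done:  # block only at the last moment
            self._result = self._compute()
            self._done = True
        return self._result


job = LazyJob(lambda: sum(range(10)))  # returns immediately; no work yet
print(job.result)  # first access triggers the (possibly long) wait → 45
print(job.result)  # cached; no further blocking
```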
cognite.client._api.entity_matching.EntityMatchingAPI.fit
cognite.client._api.entity_matching.EntityMatchingAPI.refit
cognite.client._api.entity_matching.EntityMatchingAPI.retrieve
cognite.client._api.entity_matching.EntityMatchingAPI.retrieve_multiple
cognite.client._api.entity_matching.EntityMatchingAPI.list
cognite.client._api.entity_matching.EntityMatchingAPI.delete
cognite.client._api.entity_matching.EntityMatchingAPI.update
cognite.client._api.entity_matching.EntityMatchingAPI.predict
cognite.client._api.diagrams.DiagramsAPI.detect
cognite.client._api.diagrams.DiagramsAPI.convert
The Vision API enables extraction of information from imagery data based on its visual content. For example, you can extract features such as text, asset tags, or industrial objects from images using this service.
Quickstart
Start an asynchronous job to extract information from image files stored in CDF:
from cognite.client import CogniteClient
from cognite.client.data_classes.contextualization import VisionFeature

c = CogniteClient()
extract_job = c.vision.extract(
    features=[VisionFeature.ASSET_TAG_DETECTION, VisionFeature.PEOPLE_DETECTION],
    file_ids=[1, 2],
)
The returned job object, extract_job, can be used to retrieve the status of the job and the prediction results once the job is completed. Wait for job completion and get the parsed results:
extract_job.wait_for_completion()
for item in extract_job.items:
    predictions = item.predictions
    # do something with the predictions
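As a hypothetical example of "doing something" with the results, you might filter detections by confidence (the dict shapes below are illustrative assumptions for this sketch, not the SDK's prediction classes):

```python
# Hypothetical prediction payloads, mimicking the general shape of
# detection results: each detection carries a confidence score.
predictions = [
    {"text": "P-1001", "confidence": 0.97},
    {"text": "valve", "confidence": 0.41},
    {"text": "T-2002", "confidence": 0.88},
]

# Keep only confident detections, e.g. at or above a 0.8 threshold.
confident = [p for p in predictions if p["confidence"] >= 0.8]
print([p["text"] for p in confident])  # → ['P-1001', 'T-2002']
```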
Save the prediction results in CDF as Annotations:
extract_job.save_predictions()
Note
Prediction results are stored in CDF as Annotations using the images.* annotation types. In particular, text detections are stored as images.TextRegion, asset tag detections are stored as images.AssetLink, and other detections are stored as images.ObjectDetection.
Tweaking the parameters of a feature extractor:
from cognite.client.data_classes.contextualization import FeatureParameters, TextDetectionParameters

extract_job = c.vision.extract(
    features=VisionFeature.TEXT_DETECTION,
    file_ids=[1, 2],
    parameters=FeatureParameters(text_detection_parameters=TextDetectionParameters(threshold=0.9)),
    # or, equivalently:
    # parameters={"textDetectionParameters": {"threshold": 0.9}},
)
cognite.client._api.vision.VisionAPI.extract
cognite.client._api.vision.VisionAPI.get_extract_job
cognite.client.data_classes.contextualization
cognite.client.data_classes.annotation_types.images
cognite.client.data_classes.annotation_types.primitives