diff --git a/genindex.html b/genindex.html
index 83d3b8a16..002b06d5a 100644
--- a/genindex.html
+++ b/genindex.html
@@ -222,90 +222,6 @@
coremltools.converters.keras._keras_converter._convert_to_spec(model, input_names=None, output_names=None, image_input_names=None, input_name_shape_dict={}, is_bgr=False, red_bias=0.0, green_bias=0.0, blue_bias=0.0, gray_bias=0.0, image_scale=1.0, class_labels=None, predicted_feature_name=None, model_precision='float32', predicted_probabilities_output='', add_custom_layers=False, custom_conversion_functions=None, custom_objects=None, input_shapes=None, output_shapes=None, respect_trainable=False, use_float_arraytype=False)¶Convert a Keras model to Core ML protobuf specification (.mlmodel).
-model: A trained Keras neural network model, which can be one of the following:
-a Keras model object
-a string with the path to a Keras model file (.h5)
-a tuple of strings, where the first is the path to a Keras model architecture (.json file) and the second is the path to its weights stored in an .h5 file.
-input_names: Optional name(s) that can be given to the inputs of the Keras model. These names will be used in the interface of the Core ML model to refer to the inputs of the Keras model. If not provided, the Keras inputs are named [input1, input2, …, inputN] in the Core ML model. When multiple inputs are present, the input feature names are in the same order as the Keras inputs.
-output_names: Optional name(s) that can be given to the outputs of the Keras model. These names will be used in the interface of the Core ML model to refer to the outputs of the Keras model. If not provided, the Keras outputs are named [output1, output2, …, outputN] in the Core ML model. When multiple outputs are present, the output feature names are in the same order as the Keras outputs.
-image_input_names: Input names to the Keras model (a subset of the input_names parameter) that can be treated as images by Core ML. All other inputs are treated as MultiArrays (N-D arrays).
-input_name_shape_dict: Optional dictionary of input tensor names and their corresponding shapes, each expressed as a list of ints.
-is_bgr: Flag indicating the channel order the model internally uses to represent color images. Set to True if the internal channel order is BGR, otherwise it will be assumed RGB. This flag is applicable only if image_input_names is specified. To specify a different value for each image input, provide a dictionary with input names as keys. Note that this flag is about the model’s internal channel order. An input image can be passed to the model in any color pixel layout containing red, green, and blue values (e.g. 32BGRA or 32ARGB). This flag determines how those pixel values get mapped to the internal multiarray representation.
-red_bias: Bias value to be added to the red channel of the input image. Defaults to 0.0. Applicable only if image_input_names is specified. To specify different values for each image input, provide a dictionary with input names as keys.
-blue_bias: Bias value to be added to the blue channel of the input image. Defaults to 0.0. Applicable only if image_input_names is specified. To specify different values for each image input, provide a dictionary with input names as keys.
-green_bias: Bias value to be added to the green channel of the input image. Defaults to 0.0. Applicable only if image_input_names is specified. To specify different values for each image input, provide a dictionary with input names as keys.
-gray_bias: Bias value to be added to the input image (in grayscale). Defaults to 0.0. Applicable only if image_input_names is specified. To specify different values for each image input, provide a dictionary with input names as keys.
-image_scale: Value by which input images will be scaled before the bias is added and the Core ML model makes a prediction. Defaults to 1.0. Applicable only if image_input_names is specified. To specify different values for each image input, provide a dictionary with input names as keys.
-class_labels: Class labels (applies to classifiers only) that map the index of the output of a neural network to labels in a classifier.
-If the provided class_labels is a string, it is assumed to be a filepath where classes are parsed as a list of newline-separated strings.
-predicted_feature_name: Name of the output feature for the class labels exposed in the Core ML model (applies to classifiers only). Defaults to ‘classLabel’.
-model_precision: Precision at which the model will be saved. Currently full precision (float) and half precision (float16) models are supported. Defaults to ‘_MLMODEL_FULL_PRECISION’ (full precision).
-predicted_probabilities_output: Name of the neural network output to be interpreted as the predicted probabilities of the resulting classes. Typically the output of a softmax function. Defaults to the first output blob.
-add_custom_layers: If True, then unknown Keras layer types will be added to the model as ‘custom’ layers, which must then be filled in as a postprocessing step.
-custom_conversion_functions: A dictionary with keys corresponding to names of custom layers and values as functions taking a Keras custom layer and returning a parameter dictionary and a list of weights.
-custom_objects: Dictionary of {‘<function name>’: <function>} pairs for custom objects, such as a custom loss function used by the Keras model. The key is the name of the custom function as a string; the value is the function itself.
-respect_trainable: If True, then Keras layers that are marked ‘trainable’ will automatically be marked updatable in the Core ML model.
-use_float_arraytype: If True, the datatype of input/output multiarrays is set to float32 instead of double.
-Returns: Model in Core ML format.
-Examples
-# Make a Keras model
->>> from keras.models import Sequential
->>> from keras.layers import Dense
->>> num_channels, input_dim = 32, 16    # example sizes
->>> model = Sequential()
->>> model.add(Dense(num_channels, input_dim=input_dim))
-
-# Convert it with default input and output names
->>> import coremltools
->>> coreml_model = coremltools.converters.keras.convert(model)
-
-# Saving the Core ML model to a file.
->>> coreml_model.save('my_model.mlmodel')
-Converting a model with a single image input.
->>> coreml_model = coremltools.converters.keras.convert(model, input_names='image', image_input_names='image')
-Core ML also lets you add class labels to models to expose them as classifiers.
->>> coreml_model = coremltools.converters.keras.convert(model, input_names='image', image_input_names='image', class_labels=['cat', 'dog', 'rat'])
-Class labels for classifiers can also come from a file on disk.
->>> coreml_model = coremltools.converters.keras.convert(model, input_names='image', image_input_names='image', class_labels='labels.txt')
-Provide customized input and output names to the Keras inputs and outputs while exposing them to Core ML.
->>> coreml_model = coremltools.converters.keras.convert(model, input_names=['my_input_1', 'my_input_2'], output_names=['my_output'])
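-The image preprocessing arguments and model_precision described above can be combined in the same call. A minimal sketch, assuming a model whose internal channel order is BGR; the bias, scale, and precision values here are illustrative only:
->>> coreml_model = coremltools.converters.keras.convert(model, input_names='image',
-...     image_input_names='image', is_bgr=True, red_bias=-123.0, green_bias=-117.0,
-...     blue_bias=-104.0, image_scale=1.0, model_precision='float16')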
-coremltools.converters.keras._keras_converter._get_layer_converter_fn(layer)¶Get the right converter function for the given Keras layer.
-coremltools.converters.keras._keras_converter._load_keras_model(model_network_path, model_weight_path, custom_objects=None)¶Load a Keras model from disk.
-model_network_path: Path to the model architecture (.json file).
-model_weight_path: Path to the model weights (.h5 file).
-custom_objects: A dictionary of custom layers or other custom classes or functions used by the model.
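-For reference, loading the (architecture .json, weights .h5) pair described above looks like this in plain Keras; a minimal sketch with hypothetical file names:
->>> from keras.models import model_from_json
->>> with open('model.json') as f:
-...     model = model_from_json(f.read(), custom_objects=None)
->>> model.load_weights('weights.h5')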
-coremltools.converters.keras._keras_converter.convert(model, input_names=None, output_names=None, image_input_names=None, input_name_shape_dict={}, is_bgr=False, red_bias=0.0, green_bias=0.0, blue_bias=0.0, gray_bias=0.0, image_scale=1.0, class_labels=None, predicted_feature_name=None, model_precision='float32', predicted_probabilities_output='', add_custom_layers=False, custom_conversion_functions=None, input_shapes=None, output_shapes=None, respect_trainable=False, use_float_arraytype=False)
-coremltools.converters.keras._keras2_converter._convert_training_info(model, builder, output_features)¶Convert the training information from the given Keras ‘model’ into the Core ML model in ‘builder’.
-model – keras.models.Sequential. The source Keras model.
-builder – NeuralNetworkBuilder. The target builder that will gain the loss and optimizer.
-output_features – list of (str, datatype) tuples. The set of tensor names that are output from the layers in the Keras model.
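-Per the respect_trainable parameter documented above, this training information is typically carried over when the converter is asked to respect trainable layers. A minimal sketch, assuming the Keras model has been compiled with a loss and an optimizer:
->>> model.compile(loss='categorical_crossentropy', optimizer='sgd')
->>> updatable_model = coremltools.converters.keras.convert(model, respect_trainable=True)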
coremltools.converters.keras._keras2_converter._get_layer_converter_fn(layer, add_custom_layers=False)¶Get the right converter function for the given Keras layer.
-coremltools.converters.keras._keras2_converter._load_keras_model(model_network_path, model_weight_path, custom_objects=None)¶Load a Keras model from disk.
-model_network_path: Path to the model architecture (.json file).
-model_weight_path: Path to the model weights (.h5 file).
-custom_objects: A dictionary of custom layers or other custom classes or functions used by the model.
-coremltools.converters._converters_entry._determine_source(model, source, outputs)¶Resolve source (which can be ‘auto’) to the precise source framework.
-coremltools.converters._converters_entry._validate_inputs(model, exact_source, inputs, outputs, classifier_config, **kwargs)¶Validate and process model, inputs, outputs, and classifier_config based on exact_source (which cannot be ‘auto’).
-coremltools.converters._converters_entry.convert(model, source='auto', inputs=None, outputs=None, classifier_config=None, minimum_deployment_target=None, convert_to='nn_proto', **kwargs)
-coremltools.converters.onnx._converter._make_coreml_input_features(graph, onnx_coreml_input_shape_map, disable_coreml_rank5_mapping=False)¶If “disable_coreml_rank5_mapping” is False, then:
-ONNX shapes are mapped to CoreML static shapes as follows:
-length==1: [C]
-length==2: [B,C]
-length==3: [C,H,W] or [Seq,B,C]
-length==4: [B,C,H,W]
-If “disable_coreml_rank5_mapping” is True, then ONNX shapes are mapped “as is” to CoreML.
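-A minimal sketch of an ONNX conversion using the convert entry below; the file name is a placeholder, and the assumption here is that targeting iOS 13 or later uses the “as is” shape mapping rather than the rank-5 mapping described above:
->>> import coremltools
->>> mlmodel = coremltools.converters.onnx.convert(model='my_model.onnx',
-...     minimum_ios_deployment_target='13')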
-coremltools.converters.onnx._converter._transform_coreml_dtypes(builder, inputs, outputs)¶Make sure ONNX input/output data types are mapped to the equivalent CoreML types
-coremltools.converters.onnx._converter.convert(model, mode=None, image_input_names=[], preprocessing_args={}, image_output_names=[], deprocessing_args={}, class_labels=None, predicted_feature_name='classLabel', add_custom_layers=False, custom_conversion_functions={}, onnx_coreml_input_shape_map={}, minimum_ios_deployment_target='12')
-_is_valid_number_type(obj)¶Checks if the object is a valid number type.
-The object to check.
-_is_valid_text_type(obj)¶Checks if the object is a valid text type.
-The object to check.
-_validate_label_types(labels)¶Ensure the label types match the expected types.
-The spec.
-The list of labels.
-add_samples(data_points, labels)
-Utilities for the entire package.
-coremltools.models.utils._convert_neural_network_weights_to_fp16(full_precision_model)¶Utility function to convert a full precision (float) MLModel to a half precision MLModel (float16).
-full_precision_model: Model which will be converted to half precision. Currently, conversion is supported only for neural network models. If a pipeline model is passed in, then all neural network models embedded within it will be converted.
-Returns: The converted half-precision MLModel.
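-A minimal usage sketch for the half-precision conversion above; the model path is a placeholder:
->>> import coremltools
->>> model_fp32 = coremltools.models.MLModel('my_model.mlmodel')
->>> model_fp16 = coremltools.models.utils._convert_neural_network_weights_to_fp16(model_fp32)
->>> model_fp16.save('my_model_fp16.mlmodel')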
-coremltools.models.utils._element_equal(x, y)¶Performs a robust equality test between elements.
-coremltools.models.utils._get_custom_layer_names(spec)¶Returns a list of className fields which appear in the given protobuf spec
-coremltools.models.utils._get_custom_layers(spec)¶Returns a list of all neural network custom layers in the spec.
-coremltools.models.utils._get_input_names(spec)¶Returns a list of the names of the inputs to this model.
-:param spec: The model protobuf specification
-:return: list of str. A list of input feature names.
-coremltools.models.utils._get_model(spec)¶Utility to get the model and the data.
-coremltools.models.utils._get_nn_layers(spec)¶Returns a list of neural network layers if the model contains any.
-A model protobuf specification.
-Returns: A list of all layers (including layers from elements of a pipeline).
-coremltools.models.utils._has_custom_layer(spec)¶Returns true if the given protobuf specification has a custom layer, and false otherwise.
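-The custom-layer helpers above can be combined to inspect a model specification; a minimal sketch with a placeholder model path:
->>> import coremltools
->>> from coremltools.models.utils import _has_custom_layer, _get_custom_layer_names
->>> spec = coremltools.models.MLModel('my_model.mlmodel').get_spec()
->>> if _has_custom_layer(spec):
-...     print(_get_custom_layer_names(spec))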
-coremltools.models.utils._is_macos()¶Returns True if the current platform is macOS, False otherwise.
-coremltools.models.utils._macos_version()¶Returns macOS version as a tuple of integers, making it easy to do proper version comparisons. On non-Macs, it returns an empty tuple.
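-Because the version is returned as a tuple, ordinary tuple comparison works; a minimal sketch:
->>> from coremltools.models.utils import _is_macos, _macos_version
->>> if _is_macos() and _macos_version() >= (10, 15):
-...     pass  # e.g. code paths that require macOS 10.15 or newer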
-coremltools.models.utils._python_version()¶Returns the Python version as a tuple of integers.
-coremltools.models.utils._replace_custom_layer_name(spec, oldname, newname)¶Substitutes newname for oldname in the className field of custom layers. If there are no custom layers, or no layers with className=oldname, then the spec is unchanged.
-coremltools.models.utils._sanitize_value(x)¶Performs cleaning steps on the data so various type comparisons can be performed correctly.
-coremltools.models.utils.convert_double_to_float_multiarray_type(spec)
-_check_fp16_weight_param_exists(layers)¶Checks if the network has at least one weight_param which is in FP16 format.
-List of layers.
-_check_fp16_weight_params_lstms(lstm_wp, has_peephole=True)¶Checks if an LSTM layer has at least one weight_param which is in FP16 format.
-add_acos(name, input_name, output_name)
-coremltools.models.neural_network.builder._fill_tensor_fields(tensor_field, ranks=None, shapes=None)¶Fill the tensor fields.
-ranks – None, or a list of integers with the same length as the number of inputs/outputs.
-shapes – None, or a list of shapes with the same length as the number of inputs/outputs. Each shape is a list or tuple.
-coremltools.models.neural_network.builder._get_lstm_weight_fields(lstm_wp)¶Get LSTM weight fields.
-lstm_wp: _NeuralNetwork_pb2.LSTMWeightParams
-coremltools.models.neural_network.quantization_utils._convert_1bit_array_to_byte_array(arr)¶Convert bit array to byte array.
-arr: Bits as a list where each element is an integer of 0 or 1.
-Returns: 1D numpy array of type uint8.
-coremltools.models.neural_network.quantization_utils._decompose_bytes_to_bit_arr(arr)¶Unpack bytes to bits
-arr: Byte stream, as a list of uint8 values.
-Returns: Decomposed bit stream as a list of 0/1s of length len(arr) * 8.
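-Conceptually, these two helpers pack a stream of 0/1 values into bytes and unpack it again. A minimal numpy sketch of the same idea (an illustration, not the library’s implementation):
->>> import numpy as np
->>> bits = [1, 0, 1, 1, 0, 0, 1, 0]
->>> byte_arr = np.packbits(np.array(bits, dtype=np.uint8))   # 1D uint8 array
->>> np.unpackbits(byte_arr)[:len(bits)].tolist()
-[1, 0, 1, 1, 0, 0, 1, 0]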
-coremltools.models.neural_network.quantization_utils._dequantize_nn_spec(spec)¶Dequantize weights in NeuralNetwork type mlmodel specifications.
-coremltools.models.neural_network.quantization_utils._get_kmeans_lookup_table_and_weight(nbits, w, init='k-means++', tol=0.01, n_init=1, rand_seed=0)¶Generate K-Means lookup table given a weight parameter field
-nbits: Number of bits for quantization.
-w: Weight as a numpy array.
-Returns: Lookup table, a numpy array of shape (1 << nbits,), and the quantized weight of type numpy.uint8.
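-The k-means lookup table idea can be sketched with scikit-learn; this is an illustration of the approach, not the library’s exact implementation, and the weight values are made up:
->>> import numpy as np
->>> from sklearn.cluster import KMeans
->>> nbits = 4
->>> w = np.random.rand(64, 1)                        # hypothetical weight values
->>> km = KMeans(n_clusters=1 << nbits, n_init=1, random_state=0).fit(w)
->>> lut = km.cluster_centers_.ravel()                # lookup table, shape (1 << nbits,)
->>> qw = km.labels_.astype(np.uint8)                 # quantized weight, dtype uint8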
-coremltools.models.neural_network.quantization_utils._get_linear_lookup_table_and_weight(nbits, wp)¶Generate a linear lookup table.
-nbits: Number of bits to represent a quantized weight value.
-wp: Weight blob to be quantized.
-Returns: Lookup table of shape (2^nbits,) and the quantized weight.
-coremltools.models.neural_network.quantization_utils._quantize_channelwise_linear(weight, nbits, axis=0, symmetric=False)¶Linearly quantize weight blob.
-weight: Weight to be quantized.
-nbits: Number of bits per weight element.
-axis: Axis of the weight blob along which channel-wise quantization is computed; can be 0 or 1.
-symmetric: If True, set the quantization range to be symmetric around 0. Otherwise, set the quantization range to be the minimum and maximum of the weight parameters.
-Returns: quantized weight as a float numpy array with the same shape as weight, the per-channel scale, and the per-channel bias.
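-The asymmetric (symmetric=False) case can be sketched in plain numpy, where with axis=0 each row of a 2-D weight blob is one channel; this is an illustration of the approach, not the library’s exact implementation:
->>> import numpy as np
->>> nbits = 8
->>> weight = np.random.rand(4, 16).astype(np.float32)    # hypothetical weight blob, channels on axis 0
->>> w_min = weight.min(axis=1, keepdims=True)
->>> w_max = weight.max(axis=1, keepdims=True)
->>> scale = (w_max - w_min) / ((1 << nbits) - 1)          # per-channel scale
->>> bias = w_min                                          # per-channel bias
->>> qweight = np.round((weight - bias) / scale)           # float array, same shape as weight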
-coremltools.models.neural_network.quantization_utils._quantize_nn_spec(nn_spec, nbits, qm, **kwargs)¶Quantize weights in NeuralNetwork type mlmodel specifications.
-coremltools.models.neural_network.quantization_utils._quantize_wp(wp, nbits, qm, axis=0, **kwargs)¶Quantize the weight blob
-wp: Weight parameters.
-nbits: Number of bits.
-qm: Quantization mode.
-lut_function: (callable function) Python callable representing a look-up table.
-Returns: per-channel scale, per-channel bias, lookup table, and the quantized weight of the same shape as wp, with dtype numpy.uint8.
-coremltools.models.neural_network.quantization_utils._quantize_wp_field(wp, nbits, qm, shape, axis=0, **kwargs)¶Quantize WeightParam field in Neural Network Protobuf
-wp: WeightParam field.
-nbits: Number of bits to be quantized.
-qm: Quantization mode.
-shape: Tensor shape held by wp.
-axis: Axis over which quantization is performed; can be either 0 or 1.
-lut_function: (callable function) Python callable representing a look-up table function.
-coremltools.models.neural_network.quantization_utils.activate_int8_int8_matrix_multiplications(spec, selector=None)¶