feat(aiplatform): update the api
#### aiplatform:v1

The following keys were added:
- resources.projects.resources.locations.resources.pipelineJobs.methods.create.parameters.preflightValidations (Total Keys: 2)
- schemas.CloudAiNlLlmProtoServiceCandidate.properties.groundingMetadata.$ref (Total Keys: 1)
- schemas.CloudAiNlLlmProtoServiceFact (Total Keys: 4)
- schemas.CloudAiNlLlmProtoServiceGenerateMultiModalResponse.properties.facts (Total Keys: 2)
- schemas.CloudAiNlLlmProtoServicePartVideoMetadata.properties.modelLevelMetaData.$ref (Total Keys: 1)
- schemas.CloudAiNlLlmProtoServicePartVideoMetadataModelLevelMetadata (Total Keys: 6)
- schemas.GoogleCloudAiplatformV1CreatePipelineJobRequest.properties.preflightValidations.type (Total Keys: 1)
- schemas.GoogleCloudAiplatformV1IndexPrivateEndpoints.properties.pscAutomatedEndpoints (Total Keys: 3)
- schemas.GoogleCloudAiplatformV1PscAutomatedEndpoints (Total Keys: 5)
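Of the v1 additions above, the new `preflightValidations` query parameter on `pipelineJobs.create` is the most directly user-facing. A minimal sketch with the discovery-based client, assuming the parameter surfaces as a boolean keyword argument on the generated `create()` method (the project, region, and job body below are placeholders, not values from this commit):

```python
from googleapiclient.discovery import build

# Vertex AI generally requires a regional endpoint for data-plane calls.
aiplatform = build(
    "aiplatform",
    "v1",
    client_options={"api_endpoint": "https://us-central1-aiplatform.googleapis.com"},
)

parent = "projects/my-project/locations/us-central1"  # placeholder
pipeline_job = {
    "displayName": "my-pipeline",  # placeholder; a real request also needs
    "runtimeConfig": {},           # a full pipelineSpec or templateUri
}

# preflightValidations is the newly added query parameter: when true, the
# service is expected to validate the job before creating any resources.
request = aiplatform.projects().locations().pipelineJobs().create(
    parent=parent,
    body=pipeline_job,
    preflightValidations=True,
)
print(request.execute().get("name"))
```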

#### aiplatform:v1beta1

The following keys were deleted:
- schemas.GoogleCloudAiplatformV1beta1CreateExtensionDeploymentOperationMetadata (Total Keys: 3)

The following keys were added:
- resources.projects.resources.locations.resources.pipelineJobs.methods.create.parameters.preflightValidations (Total Keys: 2)
- schemas.CloudAiNlLlmProtoServiceCandidate.properties.groundingMetadata.$ref (Total Keys: 1)
- schemas.CloudAiNlLlmProtoServiceFact (Total Keys: 4)
- schemas.CloudAiNlLlmProtoServiceGenerateMultiModalResponse.properties.facts (Total Keys: 2)
- schemas.CloudAiNlLlmProtoServicePartVideoMetadata.properties.modelLevelMetaData.$ref (Total Keys: 1)
- schemas.CloudAiNlLlmProtoServicePartVideoMetadataModelLevelMetadata (Total Keys: 6)
- schemas.GoogleCloudAiplatformV1beta1CreatePipelineJobRequest.properties.preflightValidations.type (Total Keys: 1)
- schemas.GoogleCloudAiplatformV1beta1IndexPrivateEndpoints.properties.pscAutomatedEndpoints (Total Keys: 3)
- schemas.GoogleCloudAiplatformV1beta1PscAutomatedEndpoints (Total Keys: 5)
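The new `pscAutomatedEndpoints` field lands on `IndexPrivateEndpoints`, which is exposed under each `deployedIndexes[].privateEndpoints` of an index endpoint. A minimal read-side sketch against the v1beta1 surface (the endpoint name is a placeholder, and the per-endpoint keys are assumptions inferred from the `PscAutomatedEndpoints` schema name, not confirmed by this commit):

```python
from googleapiclient.discovery import build

aiplatform = build(
    "aiplatform",
    "v1beta1",
    client_options={"api_endpoint": "https://us-central1-aiplatform.googleapis.com"},
)

name = "projects/my-project/locations/us-central1/indexEndpoints/123"  # placeholder

index_endpoint = (
    aiplatform.projects().locations().indexEndpoints().get(name=name).execute()
)

for deployed in index_endpoint.get("deployedIndexes", []):
    private = deployed.get("privateEndpoints", {})
    # pscAutomatedEndpoints is the newly added repeated field; the
    # projectId/network/matchAddress keys below are assumptions.
    for psc in private.get("pscAutomatedEndpoints", []):
        print(deployed.get("id"), psc.get("projectId"),
              psc.get("network"), psc.get("matchAddress"))
```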
yoshi-automation committed Jan 30, 2024
1 parent cde60cc commit 901407b
Showing 20 changed files with 334 additions and 51 deletions.
2 changes: 1 addition & 1 deletion docs/dyn/aiplatform_v1.projects.locations.endpoints.html
@@ -1017,7 +1017,7 @@ <h3>Method Details</h3>
 "deployedModelId": "A String", # ID of the Endpoint's DeployedModel that served this explanation.
 "explanations": [ # The explanations of the Model's PredictResponse.predictions. It has the same number of elements as instances to be explained.
 { # Explanation of a prediction (provided in PredictResponse.predictions) produced by the Model on a given instance.
-"attributions": [ # Output only. Feature attributions grouped by predicted outputs. For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific item. Attribution.output_index can be used to identify which output this attribution is explaining. If users set ExplanationParameters.top_k, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index in the same order as they appear in the output_indices.
+"attributions": [ # Output only. Feature attributions grouped by predicted outputs. For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific item. Attribution.output_index can be used to identify which output this attribution is explaining. By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of `0.4` for approving a loan application, the model's decision is to reject the application since `p(reject) = 0.6 > p(approve) = 0.4`, and the default Shapley values would be computed for the rejection decision and not approval, even though the latter might be the positive class. If users set ExplanationParameters.top_k, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index in the same order as they appear in the output_indices.
 { # Attribution that explains a particular prediction output.
 "approximationError": 3.14, # Output only. Error of feature_attributions caused by approximation used in the explanation method. Lower value means more precise attributions. * For Sampled Shapley attribution, increasing path_count might reduce the error. * For Integrated Gradients attribution, increasing step_count might reduce the error. * For XRAI attribution, increasing step_count might reduce the error. See [this introduction](/vertex-ai/docs/explainable-ai/overview) for more information.
 "baselineOutputValue": 3.14, # Output only. Model predicted output if the input instance is constructed from the baselines of all the features defined in ExplanationMetadata.inputs. The field name of the output is determined by the key in ExplanationMetadata.outputs. If the Model's predicted output has multiple dimensions (rank > 1), this is the value in the output located by output_index. If there are multiple baselines, their output values are averaged.
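The reworded docstring above concerns which outputs get Shapley attributions by default. As it notes, the explain request can be configured to attribute other classes via `ExplanationParameters.top_k` or `output_indices`. A hedged sketch of that override (the endpoint name, instance shape, and output index are placeholders, not values from this commit):

```python
from googleapiclient.discovery import build

aiplatform = build(
    "aiplatform",
    "v1",
    client_options={"api_endpoint": "https://us-central1-aiplatform.googleapis.com"},
)

endpoint = "projects/my-project/locations/us-central1/endpoints/456"  # placeholder

body = {
    "instances": [{"feature_a": 1.0, "feature_b": 0.0}],  # placeholder instance
    "explanationSpecOverride": {
        # Request attributions for specific output classes instead of only
        # the top predicted one; index 0 here stands in for the "approve"
        # class from the docstring's loan example.
        "parameters": {"outputIndices": [0]},
    },
}

response = aiplatform.projects().locations().endpoints().explain(
    endpoint=endpoint, body=body
).execute()

for explanation in response.get("explanations", []):
    for attribution in explanation.get("attributions", []):
        print(attribution.get("outputIndex"), attribution.get("featureAttributions"))
```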


6 changes: 3 additions & 3 deletions docs/dyn/aiplatform_v1.projects.locations.featurestores.html


49 changes: 49 additions & 0 deletions docs/dyn/aiplatform_v1.projects.locations.indexEndpoints.html


@@ -120,7 +120,7 @@ <h3>Method Details</h3>
 "explanations": [ # Explanations of predictions. Each element of the explanations indicates the explanation for one explanation Method. The attributions list in the EvaluatedAnnotationExplanation.explanation object corresponds to the predictions list. For example, the second element in the attributions list explains the second element in the predictions list.
 { # Explanation result of the prediction produced by the Model.
 "explanation": { # Explanation of a prediction (provided in PredictResponse.predictions) produced by the Model on a given instance. # Explanation attribution response details.
-"attributions": [ # Output only. Feature attributions grouped by predicted outputs. For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific item. Attribution.output_index can be used to identify which output this attribution is explaining. If users set ExplanationParameters.top_k, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index in the same order as they appear in the output_indices.
+"attributions": [ # Output only. Feature attributions grouped by predicted outputs. For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific item. Attribution.output_index can be used to identify which output this attribution is explaining. By default, we provide Shapley values for the predicted class. However, you can configure the explanation request to generate Shapley values for any other classes too. For example, if a model predicts a probability of `0.4` for approving a loan application, the model's decision is to reject the application since `p(reject) = 0.6 > p(approve) = 0.4`, and the default Shapley values would be computed for the rejection decision and not approval, even though the latter might be the positive class. If users set ExplanationParameters.top_k, the attributions are sorted by instance_output_value in descending order. If ExplanationParameters.output_indices is specified, the attributions are stored by Attribution.output_index in the same order as they appear in the output_indices.
 { # Attribution that explains a particular prediction output.
 "approximationError": 3.14, # Output only. Error of feature_attributions caused by approximation used in the explanation method. Lower value means more precise attributions. * For Sampled Shapley attribution, increasing path_count might reduce the error. * For Integrated Gradients attribution, increasing step_count might reduce the error. * For XRAI attribution, increasing step_count might reduce the error. See [this introduction](/vertex-ai/docs/explainable-ai/overview) for more information.
 "baselineOutputValue": 3.14, # Output only. Model predicted output if the input instance is constructed from the baselines of all the features defined in ExplanationMetadata.inputs. The field name of the output is determined by the key in ExplanationMetadata.outputs. If the Model's predicted output has multiple dimensions (rank > 1), this is the value in the output located by output_index. If there are multiple baselines, their output values are averaged.
