
Releases: googleapis/python-aiplatform

v1.36.4 (03f787c)

1.36.4 (2023-11-16)

Features

  • Add numeric_restricts to MatchingEngineIndex find_neighbors() for querying (6c1f2cc)
  • Add remove_datapoints() to MatchingEngineIndex. (b86a404)
  • Add upsert_datapoints() to MatchingEngineIndex to support streaming index updates (see the sketch after this list). (7ca484d)
  • LLM - Include the error code in blocked responses from TextGenerationModel, ChatModel, CodeChatModel, and CodeGenerationModel. (1f81cf2)
  • Populate Ray Cluster dashboard_address from proto field (dd4b852)
  • Add CountTokens API, ComputeTokens API, and ModelContainerSpec features (ba2fb39)
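
A minimal usage sketch for the new MatchingEngineIndex datapoint methods above; the project, index resource name, datapoint id, and vector values are placeholders, not from the release notes.

```python
# Hedged sketch: streaming updates against an existing Matching Engine index.
from google.cloud import aiplatform
from google.cloud.aiplatform_v1.types import IndexDatapoint

aiplatform.init(project="my-project", location="us-central1")

index = aiplatform.MatchingEngineIndex(
    index_name="projects/my-project/locations/us-central1/indexes/1234567890"
)

# Stream new or updated datapoints into the index.
index.upsert_datapoints(
    datapoints=[IndexDatapoint(datapoint_id="dp-1", feature_vector=[0.1, 0.2, 0.3])]
)

# Delete datapoints by id.
index.remove_datapoints(datapoint_ids=["dp-1"])
```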

Bug Fixes

  • Add check for empty encryption_spec_key_name for MatchingEngineIndexEndpoint create. (7740132)
  • Fix server error when encryption_spec_key_name is not set in MatchingEngineIndex create_tree_ah_index and create_brute_force_index (595b580)

Miscellaneous Chores

v1.36.3 (caf044d)

1.36.3 (2023-11-14)

Features

  • Add option to not use default tensorboard (a25c669)
  • Add preview HyperparameterTuningJob that can run on a persistent resource (see the sketch after this list) (0da8c53)
  • Add Featurestore Bigtable Serving, Feature Registry v1, November bulk GAPIC release (9f46f7c)
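
A rough sketch of the preview HyperparameterTuningJob on a persistent resource; the preview import path and the persistent_resource_id argument are assumptions based on this note, and the container image, metric, and parameter values are placeholders.

```python
# Hedged sketch: the preview jobs path and persistent_resource_id are assumptions.
from google.cloud import aiplatform
from google.cloud.aiplatform import hyperparameter_tuning as hpt
from google.cloud.aiplatform.preview import jobs as preview_jobs  # assumed path

aiplatform.init(project="my-project", location="us-central1")

custom_job = preview_jobs.CustomJob(
    display_name="trial-job",
    worker_pool_specs=[{
        "machine_spec": {"machine_type": "n1-standard-4"},
        "replica_count": 1,
        "container_spec": {"image_uri": "gcr.io/my-project/trainer:latest"},
    }],
)

hpt_job = preview_jobs.HyperparameterTuningJob(
    display_name="hpt-on-persistent-resource",
    custom_job=custom_job,
    metric_spec={"accuracy": "maximize"},
    parameter_spec={
        "learning_rate": hpt.DoubleParameterSpec(min=1e-4, max=1e-1, scale="log"),
    },
    max_trial_count=8,
    parallel_trial_count=2,
)

# Assumed keyword: reuse an existing persistent resource for the trials.
hpt_job.run(persistent_resource_id="my-persistent-cluster")
```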

Documentation

  • Fix obsolete link to GCS formatting in the documentation (95184de)

Miscellaneous Chores

v1.36.2 (45d599b)

1.36.2 (2023-11-10)

Features

  • Add encryption_spec_key_name to MatchingEngineIndex create_tree_ah_index and create_brute_force_index (see the sketch after this list) (1a9e36f)
  • Add encryption_spec_key_name, enable_private_service_connect, and project_allowlist to MatchingEngineIndexEndpoint create. (750e17b)
  • Add index_update_method to MatchingEngineIndex create() (dcb6205)
  • Expose max_retry_cnt parameter for Ray on Vertex BigQuery write (568907c)
  • LLM - Grounding - Added support for the disable_attribution grounding parameter (91e985a)
  • LLM - Support model evaluation when tuning chat models (ChatModel, CodeChatModel) (755c3f9)
  • LVM - Added multi-language support for ImageGenerationModel (791eff5)
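
A minimal sketch of index creation with the parameters named above; the bucket, CMEK key, and sizing values are placeholders, and the "STREAM_UPDATE" string is an assumption about the accepted value.

```python
# Hedged sketch: create a tree-AH index that accepts streaming updates and
# uses a customer-managed encryption key.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

index = aiplatform.MatchingEngineIndex.create_tree_ah_index(
    display_name="my-streaming-index",
    contents_delta_uri="gs://my-bucket/embeddings/",
    dimensions=128,
    approximate_neighbors_count=150,
    index_update_method="STREAM_UPDATE",  # new in this release; value assumed
    encryption_spec_key_name=(
        "projects/my-project/locations/us-central1/"
        "keyRings/my-ring/cryptoKeys/my-key"  # new in this release
    ),
)
```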

Bug Fixes

  • Async call bug in CodeChatModel.send_message_async method (fcf05cb)

Documentation

  • Add Bigframes remote training example to vertexai README (8b993b3)
  • Update the documentation for the tabular_dataset class (6f40f1b)

Miscellaneous Chores

v1.36.1 (9c4decc)

1.36.1 (2023-11-07)

Features

  • Add per_crowding_attribute_neighbor_count, approx_num_neighbors, fraction_leaf_nodes_to_search_override, and return_full_datapoint to MatchingEngineIndexEndpoint find_neighbors (see the sketch after this list) (33c551e)
  • Add profiler support to tensorboard uploader sdk (be1df7f)
  • Add support for per_crowding_attribute_num_neighbors and approx_num_neighbors to MatchingEngineIndexEndpoint match() (e5c20c3)
  • Add support for per_crowding_attribute_num_neighbors and approx_num_neighbors to MatchingEngineIndexEndpoint match() (53d31b5)
  • Add support for per_crowding_attribute_num_neighbors and approx_num_neighbors to MatchingEngineIndexEndpoint match() (4e357d5)
  • Enable grounding in ChatModel send_message and send_message_async methods (d4667f2)
  • Enable grounding in TextGenerationModel predict and predict_async methods (b0b4e6b)
  • LLM - Added support for the enable_checkpoint_selection tuning evaluation parameter (eaf4420)
  • LLM - Added tuning support for the *-bison-32k models (9eba18f)
  • LLM - Released CodeChatModel tuning to GA (621af52)
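
A minimal sketch of a query using the new find_neighbors parameters above; the endpoint resource name, deployed index id, and query vector are placeholders.

```python
# Hedged sketch: query a deployed index with the new tuning/return options.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

endpoint = aiplatform.MatchingEngineIndexEndpoint(
    index_endpoint_name=(
        "projects/my-project/locations/us-central1/indexEndpoints/1234567890"
    )
)

neighbors = endpoint.find_neighbors(
    deployed_index_id="my_deployed_index",
    queries=[[0.1, 0.2, 0.3]],
    num_neighbors=5,
    approx_num_neighbors=50,      # named in this release
    return_full_datapoint=True,   # return vectors, not just ids and distances
)
```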

Bug Fixes

  • Correct class name in system test (b822b57)

Documentation

  • Clean up RoV create_ray_cluster docstring (1473e19)

Miscellaneous Chores

v1.36.0 (f9feda7)

1.36.0 (2023-10-31)

Features

  • Add preview count_tokens method to CodeGenerationModel (96e7f7d)
  • Allow users to pass extra serialization arguments for objects. (ffbd872)
  • Also support serializing unhashable objects with extra args (77a741e)
  • LLM - Added count_tokens support to ChatModel (preview) (01989b1)
  • LLM - Added new regions for tuning and tuned model inference (3d43497)
  • LLM - Added support for async streaming (see the sketch after this list) (760a025)
  • LLM - Added support for multiple response candidates in code chat models (598d57d)
  • LLM - Added support for multiple response candidates in code generation models (0c371a4)
  • LLM - Enable tuning eval TensorBoard without evaluation data (eaf5d81)
  • LLM - Released CodeGenerationModel tuning to GA (87dfe40)
  • LLM - Support accelerator_type in tuning (98ab2f9)
  • Support experiment autologging when using persistent cluster as executor (c19b6c3)
  • Upgrade BigQuery Datasource to use write() interface (7944348)
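
A minimal sketch of the async streaming support added above; the model name and prompt are placeholders, and the predict_streaming_async name is assumed to mirror the existing predict_streaming method.

```python
# Hedged sketch: stream a text generation response without blocking the event loop.
import asyncio

from vertexai.language_models import TextGenerationModel


async def main() -> None:
    model = TextGenerationModel.from_pretrained("text-bison")
    # The async streaming method name is assumed from the sync naming convention.
    async for chunk in model.predict_streaming_async(
        "Summarize what Vertex AI Matching Engine does.",
        max_output_tokens=256,
    ):
        print(chunk.text, end="")


asyncio.run(main())
```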

Bug Fixes

  • Add setuptools to dependencies for Python 3.12 and above. (afd540d)
  • Fix Bigframes tensorflow serializer dependencies (b4cdb05)
  • LLM - Fixed the async streaming (41bfcb6)
  • LLM - Make tuning use the global staging bucket if specified (d9ced10)
  • LVM - Fixed negative prompt in ImageGenerationModel (cbe3a0d)
  • Made the Endpoint prediction client initialization lazy (eb6071f)
  • Make sure PipelineRuntimeConfigBuilder is created with the right arguments (ad19838)
  • Make sure the models list is populated before indexing (f1659e8)
  • Raise an exception in RoV BigQuery write when the rate limit is exceeded too many times (7e09529)
  • Rollback BigQuery Datasource to use do_write() interface (dc1b82a)

v1.35.0 (83224d0)

1.35.0 (2023-10-10)

Features

  • Add serializer.register_custom_command() (639cf10)
  • Install Bigframes sklearn dependencies automatically (7aaffe5)
  • Install Bigframes tensorflow dependencies automatically (e58689b)
  • Install Bigframes torch dependencies automatically (1d65347)
  • LLM - Added support for multiple chat response candidates (587df74)
  • LLM - Added support for multiple text generation response candidates (see the sketch after this list) (c3ae475)
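
A minimal sketch of requesting multiple response candidates, per the entries above; the model name and prompt are placeholders, and depending on the SDK version the parameter may only be available in the preview namespace.

```python
# Hedged sketch: ask for several alternative completions in a single call.
from vertexai.language_models import TextGenerationModel

model = TextGenerationModel.from_pretrained("text-bison")

response = model.predict(
    "Suggest a name for a flower shop that sells dried flowers.",
    candidate_count=3,
)

for candidate in response.candidates:
    print(candidate.text)
```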

Bug Fixes

  • Duplicate logs in Colab (9b75259)
  • LLM - Fixed tuning and evaluation when explicit credentials are specified (188dffe)
  • Resolve Artifact Registry tags when creating PipelineJob (f04ca35)
  • Resolve Artifact Registry tags when creating PipelineJob (06bf487)

Documentation

  • Add probabilistic inference to TiDE and L2L model code samples. (efe88f9)

v1.34.0 (a36daa7)

1.34.0 (2023-10-02)

Features

  • Add Model Garden support to vertexai.preview.from_pretrained (f978200)
  • Enable vertexai preview persistent cluster executor (0ae969d)
  • LLM - Added the count_tokens method to the preview TextGenerationModel and TextEmbeddingModel classes (see the sketch after this list) (6a2f2aa)
  • LLM - Improved representation for blocked responses (222f222)
  • LLM - Released ChatModel tuning to GA (7d667f9)
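
A minimal sketch of the preview count_tokens method noted above; the model name and prompt are placeholders, and the list-of-prompts argument and response field names are assumptions about the preview surface.

```python
# Hedged sketch: count tokens before sending a prompt for prediction.
from vertexai.preview.language_models import TextGenerationModel

model = TextGenerationModel.from_pretrained("text-bison")

token_info = model.count_tokens(["How many tokens is this prompt?"])
print(token_info.total_tokens, token_info.total_billable_characters)
```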

Bug Fixes

  • Create PipelineJobSchedule in same project and location as associated PipelineJob by default (c22220e)

Documentation

  • Add documentation for the preview namespace (69a67f2)

v1.33.1 (afd0461)

1.33.1 (2023-09-20)

Bug Fixes

  • Lightning trainer fails to be unwrapped in remote training (8271301)

v1.33.0 (910c22a)

1.33.0 (2023-09-18)

Features

  • Add Custom Job support to from_pretrained (8b0add1)
  • Added async prediction and explanation support to the Endpoint class (e9eb159)
  • LLM - Added support for async prediction methods (see the sketch after this list) (c9c9f10)
  • LLM - CodeChat - Added support for context (f7feeca)
  • Release Ray on Vertex SDK Preview (3be36e6)
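
A minimal sketch of the async prediction methods noted above, using the chat models; the model name and message are placeholders.

```python
# Hedged sketch: await a chat prediction instead of blocking on send_message.
import asyncio

from vertexai.language_models import ChatModel


async def main() -> None:
    chat_model = ChatModel.from_pretrained("chat-bison")
    chat = chat_model.start_chat()
    response = await chat.send_message_async("What does Vertex AI Matching Engine do?")
    print(response.text)


asyncio.run(main())
```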

Bug Fixes

  • Handle Ray image parsing error (41a3a83)
  • Vizier - Fixed field existence checks for child params in to_proto(). (d516931)

v1.32.0 (5dba09b)

1.32.0 (2023-09-05)

Features

  • LLM - Added stop_sequences parameter to streaming methods and CodeChatModel (see the sketch after this list) (d62bb1b)
  • LLM - Improved the handling of temperature and top_p in streaming (6566529)
  • Support bigframes sharded parquet ingestion at remote deserialization (Tensorflow) (a8f85ec)
  • Release Vertex SDK Preview (c60b9ca)
  • Allow setting default service account (d11b8e6)
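
A minimal sketch of stop_sequences on a streaming call, per the entry above; the model name, prompt, and stop string are placeholders.

```python
# Hedged sketch: stop a streamed generation once a stop sequence is produced.
from vertexai.language_models import TextGenerationModel

model = TextGenerationModel.from_pretrained("text-bison")

for chunk in model.predict_streaming(
    "List the planets of the solar system, one per line.",
    max_output_tokens=256,
    stop_sequences=["Mars"],
):
    print(chunk.text, end="")
```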

Bug Fixes

  • Fix feature update since no LRO is created (468e6e7)
  • LLM - CodeGenerationModel now supports safety attributes (c2c8a5e)
  • LLM - Fixed batch prediction on tuned models (2a08535)
  • LLM - Fixed the handling of the TextEmbeddingInput.task_type parameter. (2e3090b)
  • Make statistics Optional for TextEmbedding. (7eaa1d4)