
feat: support async deploy and undeploy model instance #192

Merged
5 commits merged into main on Dec 19, 2022

Conversation

Phelan164 (Contributor) commented Nov 27, 2022

Because

  • deploying and undeploying a model instance takes a long time, so these methods should run asynchronously

This commit

  • support asynchronous deploy and undeploy of model instances using Temporal

codecov bot commented Nov 28, 2022

Codecov Report

Base: 4.26% // Head: 3.59% // Project coverage decreases by 0.67% ⚠️

Coverage data is based on head (a1794c9) compared to base (f22262c).
Patch coverage: 0.63% of modified lines in pull request are covered.

Additional details and impacted files
@@           Coverage Diff            @@
##            main    #192      +/-   ##
========================================
- Coverage   4.26%   3.59%   -0.68%     
========================================
  Files          5       6       +1     
  Lines       3542    3758     +216     
========================================
- Hits         151     135      -16     
- Misses      3359    3597     +238     
+ Partials      32      26       -6     
Flag Coverage Δ
unittests 3.59% <0.63%> (-0.68%) ⬇️

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
pkg/handler/handler.go 0.00% <0.00%> (ø)
pkg/service/worker.go 0.00% <0.00%> (ø)
pkg/service/service.go 16.59% <50.00%> (-3.58%) ⬇️


Phelan164 changed the title from "feat: support async deploy and underlay model" to "feat: support async deploy and undeploy model" Nov 28, 2022
Phelan164 changed the title from "feat: support async deploy and undeploy model" to "feat: support async deploy and undeploy model instance" Nov 28, 2022
Phelan164 force-pushed the add-long-run-operation-task branch 2 times, most recently from 6e2ba2b to e73bca0 on December 16, 2022 05:51
Phelan164 merged commit ed36dc7 into main Dec 19, 2022
Phelan164 deleted the add-long-run-operation-task branch December 19, 2022 05:45
Phelan164 pushed a commit that referenced this pull request Dec 23, 2022
Features

- support async deploy and undeploy model instance (#192) (ed36dc7)
- support semantic segmentation (#203) (f22262c)

Bug Fixes

- model instance state update to unspecified state (#206) (14c87d5)
- panic error with nil object (#208) (a342113)
xiaofei-du pushed a commit to instill-ai/instill-core that referenced this pull request Dec 25, 2022
🤖 I have created a release *beep* *boop*
---

## Product Updates

### Announcement 📣

* VDP is officially renamed to `Versatile Data Pipeline`.

We realise that, as a general ETL infrastructure, VDP can process all kinds of unstructured data, so we should not limit its scope to visual data only. That's why we replaced the word Visual with Versatile. Moreover, the term Data Preparation was misleading: users often assumed it involved data labelling or cleaning. The term Data Pipeline captures the core concept of VDP much more precisely.

### Features ✨
* support new task Instance segmentation. Check out the [Streamlit example](https://github.com/instill-ai/vdp/tree/main/examples/streamlit/instance_segmentation) 

## VDP ([0.3.0-alpha](v0.2.6-alpha...v0.3.0-alpha))


### Features

* support Instance segmentation task [0476f59](0476f59) 
* add console e2e test into vdp ([#148](#148)) ([a779a11](a779a11))
* add instance segmentation example ([#167](#167))


### Bug Fixes

* fix wrong triton environment when deploying HuggingFace models ([#150](#150)) ([b2fda36](b2fda36))
* use COCO RLE format for instance segmentation ([4d10e46](4d10e46))
* update model output protocol ([e6ea88d](e6ea88d))

## Pipeline-backend ([0.9.3-alpha](https://github.com/instill-ai/pipeline-backend/releases/tag/v0.9.3-alpha))

### Bug Fixes

* fix pipeline trigger model hanging (instill-ai/pipeline-backend#80) ([7ba58e5](instill-ai/pipeline-backend@7ba58e5))

## Connector-backend ([0.7.2-alpha](https://github.com/instill-ai/connector-backend/releases/tag/v0.7.2-alpha))

### Bug Fixes
* fix connector empty description update ([0bc3086](instill-ai/connector-backend@0bc3086))

## Model-backend ([0.10.0-alpha](https://github.com/instill-ai/model-backend/releases/tag/v0.10.0-alpha))

### Features
* support instance segmentation task (instill-ai/model-backend#183) ([d28cfdc](instill-ai/model-backend@d28cfdc))
* support async deploy and undeploy model instance (instill-ai/model-backend#192) ([ed36dc7](instill-ai/model-backend@ed36dc7))
* support semantic segmentation (instill-ai/model-backend#203) ([f22262c](instill-ai/model-backend@f22262c))

### Bug Fixes

* allow updating empty description for a model (instill-ai/model-backend#177) ([100ec84](instill-ai/model-backend@100ec84))
* HuggingFace batching bug in preprocess model ([b1582e8](instill-ai/model-backend@b1582e8))
* model instance state update to unspecified state (instill-ai/model-backend#206) ([14c87d5](instill-ai/model-backend@14c87d5))
* panic error with nil object (instill-ai/model-backend#208) ([a342113](instill-ai/model-backend@a342113))


## Console

### Features
* extend the time span of our user cookie (instill-ai/console#289) ([76a6f99](instill-ai/console@76a6f99))
* finish integration test and make it stable (instill-ai/console#281) ([3fd8d21](instill-ai/console@3fd8d21))
* replace prism.js with code-hike (instill-ai/console#292) ([cb61708](instill-ai/console@cb61708))
* unify the gap between elements in every table (instill-ai/console#291) ([e743820](instill-ai/console@e743820))
* update console request URL according to new protobuf (instill-ai/console#287) ([fa7ecc3](instill-ai/console@fa7ecc3))
* add hg model id field at model_instance page (instill-ai/console#300) ([31a6eab](instill-ai/console@31a6eab))
* cleanup connector after test (instill-ai/console#295) ([f9c8e4c](instill-ai/console@f9c8e4c))
* disable html report (instill-ai/console#297) ([689f50d](instill-ai/console@689f50d))
* enhance the warning of the resource id field (instill-ai/console#303) ([6c4aa4f](instill-ai/console@6c4aa4f))
* make playwright output dot on CI (instill-ai/console#293) ([e5c2958](instill-ai/console@e5c2958))
* support model-backend async long run operation (instill-ai/console#309) ([f795ce8](instill-ai/console@f795ce8))
* update e2e test (instill-ai/console#313) ([88bf0cd](instill-ai/console@88bf0cd))
* update how we test model detail page (instill-ai/console#310) ([04c83a1](instill-ai/console@04c83a1))
* wipe out all data after test (instill-ai/console#296) ([e4085dd](instill-ai/console@e4085dd))

### Bug Fixes
* fix pipeline e2e not stable (instill-ai/console#285) ([a26e599](instill-ai/console@a26e599))
* fix set-cookie api route issue due to wrong domain name (instill-ai/console#284) ([c3efcdd](instill-ai/console@c3efcdd))

---
This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please).
Phelan164 pushed a commit that referenced this pull request Apr 24, 2023
🤖 I have created a release *beep* *boop*
---


## [0.11.0-alpha](v0.16.0-alpha...v0.11.0-alpha) (2023-04-24)


### Features

* add codebase for model grpc service
([4defa3e](4defa3e))
* add confidence score for ocr output
([#167](#167))
([e915452](e915452))
* add credential definition
([#109](#109))
([92d3391](92d3391))
* add gRPC Gateway and GetModel API
([#7](#7))
([bff6fc9](bff6fc9))
* add model initialization module
([#332](#332))
([aa753a5](aa753a5))
* add private endpoint and gRPC test cases
([#306](#306))
([bb3c193](bb3c193))
* add release stage for model definition
([#153](#153))
([4e13ba5](4e13ba5))
* add support for text generation tasks
([#252](#252))
([767ec45](767ec45))
* add text to image task
([#239](#239))
([421eb1a](421eb1a))
* **controller:** add model state monitoring with controller
([#323](#323))
([4397826](4397826))
* create model from GitHub
([#61](#61))
([cf763cb](cf763cb))
* handle oom
([#163](#163))
([4db1c45](4db1c45))
* remove model instance
([#320](#320))
([15e1b62](15e1b62))
* support artivc
([#102](#102))
([b8e21a4](b8e21a4))
* support async deploy and undeploy model instance
([#192](#192))
([ed36dc7](ed36dc7))
* support creating a HuggingFace model
([#113](#113))
([1577d87](1577d87))
* support instance segmentation task
([#183](#183))
([d28cfdc](d28cfdc))
* support model caching
([#317](#317))
([d15ffba](d15ffba))
* support model name when creating model
([#25](#25))
([7d799b7](7d799b7))
* support ocr task
([#150](#150))
([7766c6f](7766c6f))
* support semantic segmentation
([#203](#203))
([f22262c](f22262c))
* support url/base64 content prediction
([#34](#34))
([a88ddfd](a88ddfd))


### Bug Fixes

* add link for guideline create Conda environment file
([7ee8e06](7ee8e06))
* add writeonly to description
([f59d98f](f59d98f))
* allow updating empty description for a model
([#177](#177))
([100ec84](100ec84))
* bug usage storage
([#103](#103))
([975fdc1](975fdc1))
* clone repository and make folder
([ac79386](ac79386))
* **config:** use private port for mgmt-backend
([#307](#307))
([3264e2b](3264e2b))
* correct version when making inference
([#31](#31))
([c918e77](c918e77))
* create a subfolder in model-repository if needed
([#290](#290))
([7f8d78b](7f8d78b))
* fix build and go version
([#9](#9))
([f8d4346](f8d4346))
* fix client stream server recv wrong file length interval
([#143](#143))
([0e06f7c](0e06f7c))
* fix config path
([a8cf2c0](a8cf2c0))
* fix creating subfolder
([105a11a](105a11a))
* fix duration configuration bug
([ee4a310](ee4a310))
* fix keypoint model payload parser
([#249](#249))
([461d54a](461d54a))
* fix list long-run operation error
([#220](#220))
([472696d](472696d))
* fix subfolder creation
([#292](#292))
([0b6ec3f](0b6ec3f))
* fix unload model issue causing Triton server OOM
([#42](#42))
([fb4d1d1](fb4d1d1))
* fix usage client nil issue when mgmt-backend not ready
([#241](#241))
([4290159](4290159))
* fix variable name
([#293](#293))
([a7995dd](a7995dd))
* HuggingFace batching bug in preprocess model
([b1582e8](b1582e8))
* init config before logger
([9d3fb4a](9d3fb4a))
* keep format for empty inference output
([#258](#258))
([e2a2e48](e2a2e48))
* list models and model instances pagination
([#304](#304))
([1f19ed4](1f19ed4))
* logic when ensemble or not
([ab8e7c1](ab8e7c1))
* model configuration response in integration test
([0225c1e](0225c1e))
* model definition in list model and missing zero in output
([#121](#121))
([a90072d](a90072d))
* model instance state update to unspecified state
([#206](#206))
([14c87d5](14c87d5))
* panic error with nil object
([#208](#208))
([a342113](a342113))
* pass the context between package layers
([#345](#345))
([e6e7f2f](e6e7f2f))
* post process for unspecified task output
([ad88068](ad88068))
* post process ocr task
([e387154](e387154))
* postgres host
([a322165](a322165))
* refactor JSON schema
([f24db48](f24db48))
* refactor model definition and model JSON schema
([#73](#73))
([0cce154](0cce154))
* regexp zap logger with new protobuf package
([8b9c463](8b9c463))
* return list of models in list method
([b88ebd7](b88ebd7))
* status code when deploy model error
([#111](#111))
([31d3f11](31d3f11))
* trigger image with 4 channel
([#141](#141))
([7445f5f](7445f5f))
* update db schema, protobuf generated files and create model, version
in upload api
([7573e54](7573e54))
* update description for GitHub model from user input
([#173](#173))
([821dab3](821dab3))
* update docker compose file for building dev image
([#29](#29))
([83cba09](83cba09))
* update model definitions and tasks in usage collection
([#100](#100))
([c593087](c593087))
* update predict for ensemble model
([016f11c](016f11c))
* update version order when get model version list
([#38](#38))
([83c054a](83c054a))
* wrong logic when checking user account and service account
([7058db6](7058db6))


### Miscellaneous Chores

* release 0.11.0-alpha
([d592acb](d592acb))
* release 0.3.2-alpha
([9f8cd91](9f8cd91))
* release 0.4.2-alpha
([fc5a14a](fc5a14a))
* release 0.7.2-alpha
([17529d6](17529d6))
* release 0.7.3-alpha
([9033c50](9033c50))
* release v0.5.1-alpha
([895056d](895056d))
* release v0.6.1-alpha
([f18dc30](f18dc30))
* release v0.6.2-alpha
([4365f32](4365f32))

---
This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please).