fix: fix pipeline trigger model hanging #80

Merged: pinglin merged 7 commits into main from fix-hangout on Oct 23, 2022

Conversation

Phelan164 (Contributor)

Because

  • The 5-second trigger model timeout is not enough for model inference
  • The goroutines use sync.WaitGroup, but the counter is not decremented when a goroutine exits early on an error, so the pipeline trigger hangs

This commit

  • Increases the timeout to 30 seconds
  • Defers wg.Done() and ends the goroutine when any error occurs (see the sketch below)
  • Closes #79
  • Closes #77
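
A minimal Go sketch of the hang and the two fixes, using hypothetical names (triggerModelInstance, the model list); the actual change lives in pkg/service/service.go and differs in detail:

```go
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

// triggerModelInstance stands in for the real model-inference call.
func triggerModelInstance(ctx context.Context, name string) error {
	select {
	case <-time.After(time.Second): // pretend inference latency
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	// Fix 1: 5 seconds was too short for model inference; allow 30 seconds.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	models := []string{"model-a", "model-b"}
	errs := make(chan error, len(models)) // buffered so workers never block on send

	var wg sync.WaitGroup
	for _, m := range models {
		wg.Add(1)
		go func(m string) {
			// Fix 2: defer wg.Done() so the counter is decremented even when the
			// goroutine returns early on an error; otherwise one failure leaves
			// wg.Wait() blocked forever and the pipeline trigger hangs.
			defer wg.Done()
			if err := triggerModelInstance(ctx, m); err != nil {
				errs <- err
				return
			}
		}(m)
	}

	wg.Wait()
	close(errs)
	for err := range errs {
		fmt.Println("trigger error:", err)
	}
}
```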


codecov bot commented Oct 23, 2022

Codecov Report

Base: 16.75% // Head: 16.21% // Decreases project coverage by 0.54% ⚠️

Coverage data is based on head (4cd08a9) compared to base (8307ac7).
Patch coverage: 0.00% of modified lines in pull request are covered.

Additional details and impacted files
@@            Coverage Diff             @@
##             main      #80      +/-   ##
==========================================
- Coverage   16.75%   16.21%   -0.54%     
==========================================
  Files           4        4              
  Lines         752      777      +25     
==========================================
  Hits          126      126              
- Misses        604      629      +25     
  Partials       22       22              
| Flag | Coverage Δ |
|------|------------|
| unittests | 16.21% <0.00%> (-0.54%) ⬇️ |

Flags with carried forward coverage won't be shown.

| Impacted Files | Coverage Δ |
|----------------|------------|
| pkg/service/service.go | 5.94% <0.00%> (-0.34%) ⬇️ |


}()
for err := range errors {
	if err != nil {
		logger.Error(fmt.Sprintf("[connector-backend] Error trigger model instance got error %v", err.Error()))
Suggested change
logger.Error(fmt.Sprintf("[connector-backend] Error trigger model instance got error %v", err.Error()))
logger.Error(fmt.Sprintf("[model-backend] Error trigger model instance got error %v", err.Error()))

pinglin merged commit a692d2b into main on Oct 23, 2022
pinglin deleted the fix-hangout branch on October 23, 2022 at 16:08
pinglin changed the title from "fix: pipeline trigger model hangout" to "fix: fix pipeline trigger model hanging" on Oct 25, 2022
pinglin pushed a commit that referenced this pull request Oct 25, 2022
Because

- The 5-second trigger model timeout is not enough for model inference
- The goroutines use sync.WaitGroup, but the counter is not decremented when a goroutine exits early on an error

This commit

- Increase the timeout to 30 seconds
- defer wg.Done() and end the goroutine when any error occurs
- close #79
- close #77
xiaofei-du pushed a commit to instill-ai/instill-core that referenced this pull request Dec 25, 2022
🤖 I have created a release *beep* *boop*
---

## Product Updates

### Announcement 📣

* VDP is officially renamed to `Versatile Data Pipeline`.

We realise that, as a general ETL infrastructure, VDP is capable of processing all kinds of unstructured data, and we should not limit its usage to visual data only. That's why we have replaced the word Visual with Versatile. Besides, the term Data Preparation is a bit misleading: users often think it has something to do with data labelling or cleaning. The term Data Pipeline captures the core concept of VDP more precisely.

### Features ✨
* support the new Instance Segmentation task. Check out the [Streamlit example](https://github.com/instill-ai/vdp/tree/main/examples/streamlit/instance_segmentation)

## VDP ([0.3.0-alpha](v0.2.6-alpha...v0.3.0-alpha))


### Features

* support Instance Segmentation task ([0476f59](0476f59))
* add console e2e test into vdp ([#148](#148)) ([a779a11](a779a11))
* add instance segmentation example ([#167](#167))


### Bug Fixes

* fix wrong triton environment when deploying HuggingFace models ([#150](#150)) ([b2fda36](b2fda36))
* use COCO RLE format for instance segmentation ([4d10e46](4d10e46))
* update model output protocol ([e6ea88d](e6ea88d))

## Pipeline-backend ([0.9.3-alpha](https://github.com/instill-ai/pipeline-backend/releases/tag/v0.9.3-alpha))

### Bug Fixes

* fix pipeline trigger model hanging (instill-ai/pipeline-backend#80) ([7ba58e5](instill-ai/pipeline-backend@7ba58e5))

## Connector-backend ([0.7.2-alpha](https://github.com/instill-ai/connector-backend/releases/tag/v0.7.2-alpha))

### Bug Fixes
* fix connector empty description update ([0bc3086](instill-ai/connector-backend@0bc3086))

## Model-backend ([0.10.0-alpha](https://github.com/instill-ai/model-backend/releases/tag/v0.10.0-alpha))

### Features
* support instance segmentation task (instill-ai/model-backend#183) ([d28cfdc](instill-ai/model-backend@d28cfdc))
* support async deploy and undeploy model instance (instill-ai/model-backend#192) ([ed36dc7](instill-ai/model-backend@ed36dc7))
* support semantic segmentation (instill-ai/model-backend#203) ([f22262c](instill-ai/model-backend@f22262c))

### Bug Fixes

* allow updating empty description for a model (instill-ai/model-backend#177) ([100ec84](instill-ai/model-backend@100ec84))
* HuggingFace batching bug in preprocess model ([b1582e8](instill-ai/model-backend@b1582e8))
* model instance state update to unspecified state (instill-ai/model-backend#206) ([14c87d5](instill-ai/model-backend@14c87d5))
* panic error with nil object (instill-ai/model-backend#208) ([a342113](instill-ai/model-backend@a342113))


## Console

### Features
* extend the time span of our user cookie (instill-ai/console#289) ([76a6f99](instill-ai/console@76a6f99))
* finish integration test and make it stable (instill-ai/console#281) ([3fd8d21](instill-ai/console@3fd8d21))
* replace prism.js with code-hike (instill-ai/console#292) ([cb61708](instill-ai/console@cb61708))
* unify the gap between elements in every table (instill-ai/console#291) ([e743820](instill-ai/console@e743820))
* update console request URL according to new protobuf (instill-ai/console#287) ([fa7ecc3](instill-ai/console@fa7ecc3))
* add hg model id field at model_instance page (instill-ai/console#300) ([31a6eab](instill-ai/console@31a6eab))
* cleanup connector after test (instill-ai/console#295) ([f9c8e4c](instill-ai/console@f9c8e4c))
* disable html report (instill-ai/console#297) ([689f50d](instill-ai/console@689f50d))
* enhance the warning of the resource id field (instill-ai/console#303) ([6c4aa4f](instill-ai/console@6c4aa4f))
* make playwright output dot on CI (instill-ai/console#293) ([e5c2958](instill-ai/console@e5c2958))
* support model-backend async long run operation (instill-ai/console#309) ([f795ce8](instill-ai/console@f795ce8))
* update e2e test (instill-ai/console#313) ([88bf0cd](instill-ai/console@88bf0cd))
* update how we test model detail page (instill-ai/console#310) ([04c83a1](instill-ai/console@04c83a1))
* wipe out all data after test (instill-ai/console#296) ([e4085dd](instill-ai/console@e4085dd))

### Bug Fixes
* fix pipeline e2e not stable (instill-ai/console#285) ([a26e599](instill-ai/console@a26e599))
* fix set-cookie api route issue due to wrong domain name (instill-ai/console#284) ([c3efcdd](instill-ai/console@c3efcdd))

---
This PR was generated with [Release Please](https://github.com/googleapis/release-please). See [documentation](https://github.com/googleapis/release-please#release-please).