
Conversation

@robotrapta (Member) commented Nov 11, 2022

New `wait` argument to `submit_image_query` which makes it wait (poll) for a confident result.

Uses the detector's default confidence threshold, looking it up if necessary.
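
For illustration, a minimal usage sketch of the new argument (assuming `submit_image_query` accepts a file path for `image`; the detector name and file name are placeholders):

```Python
from groundlight import Groundlight

gl = Groundlight()
d = gl.create_detector("door", query="Is the door open?")

# Poll for up to 30 seconds, returning early once the answer's confidence
# reaches the detector's confidence threshold.
image_query = gl.submit_image_query(detector=d, image="door.jpg", wait=30)
print(f"The answer is {image_query.result}")
```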

Is "wait" the best parameter name? Maybe delay or wait_sec or wait_until? block?
Also would this be better as its own method? submit_image_query is likely to explode into a complex method.
But maybe that's okay?

@robotrapta marked this pull request as ready for review November 12, 2022 01:44
@robotrapta changed the title from "Blocking submit" to "Wait for confident result (Blocking submit)" Nov 12, 2022
```Python
from groundlight import Groundlight
gl = Groundlight()
d = gl.create_detector("door", query="Is the door open?")  # define with natural language
```

Contributor:
Technically it's just one line of code to build the model :) The other line is to use the model you already built.

UserGuide.md Outdated
## Managing confidence levels and latency

How to build a computer vision system in 5 lines of Python code:
Groundlight gives you a simple way to control the trade-off of latency against accuracy. The longer you can wait for an answer to your image query, the better accuracy you will get. In particular, if the ML models are unsure of the best response, they will escalate the image query to a real-time human monitor to review it. Your code can easily wait for this delayed response.

Contributor:
Talk about escalating to more advanced and larger models too, and eventually to a real-time human monitor or a task-specific expert?

UserGuide.md Outdated

The desired confidence level is set as the escalation threshold on your detector. This determines the minimum confidence score the ML system must provide before the image query is escalated to a human monitor.

Contributor:
Just "query is escalated." (no additional detail required).
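
For context, a rough sketch of how the threshold might be set when creating a detector; the `confidence_threshold` parameter name is an assumption, not something this PR confirms:

```Python
from groundlight import Groundlight

gl = Groundlight()

# Assumed parameter: answers below 90% confidence get escalated
# rather than being returned as final ML results.
d = gl.create_detector(
    "door",
    query="Is the door open?",
    confidence_threshold=0.9,
)
```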

```Python
while time.time() - start_time < wait:
    current_confidence = img_query.result.confidence
    if current_confidence is None:
        logging.debug(f"Image query with None confidence implies human label (for now)")
```

Contributor:

This is really confusing and needs to be documented further up as well.

@robotrapta (Member, Author):
Completely agree. In fact, we should write a wrapper which hides this.
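
A rough sketch of what such a wrapper could look like, assuming a `get_image_query(id=...)` lookup exists and treating a `None` confidence as a final, human-provided answer; the helper name and polling interval are made up for illustration:

```Python
import time

def wait_for_confident_result(gl, img_query, threshold, timeout=30.0, poll_delay=0.5):
    """Poll an image query until its confidence reaches `threshold`,
    a human label arrives (confidence is None), or `timeout` expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        confidence = img_query.result.confidence
        # A None confidence currently means a human provided the label,
        # which we treat as final.
        if confidence is None or confidence >= threshold:
            break
        time.sleep(poll_delay)
        img_query = gl.get_image_query(id=img_query.id)  # assumed lookup by id
    return img_query
```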

You can see the confidence score returned for the image query:

```Python
print(f"The confidence is {image_query.result.confidence}")

Contributor:

The None result here will be confusing...
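
Until that is documented, a small sketch of how a caller might interpret the value today, based on the behavior noted above (None means a human provided the label):

```Python
confidence = image_query.result.confidence
if confidence is None:
    # Current behavior: no confidence score means the answer came from
    # a human reviewer, so it can be treated as final.
    print("The answer was provided by a human reviewer")
else:
    print(f"The ML confidence is {confidence:.2f}")
```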

UserGuide.md Outdated
print(f"The answer is {image_query.result}")
```

Or if you want to run as fast as possible, set `wait=0`. This way you will only get the ML results, without waiting for human review. Image queries which are below the desired confidence level still get escalated to human review, and the results are incorporated as training data to improve your ML model, but your code will not wait for that to happen.

Contributor:

Escalation isn't always humans...
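
A quick sketch of that `wait=0` fast path, reusing the `gl` client and detector `d` from the earlier sketch (the image path is a placeholder):

```Python
# Return immediately with the current ML answer; escalation and retraining
# still happen asynchronously in the background, but the code doesn't wait.
image_query = gl.submit_image_query(detector=d, image="door.jpg", wait=0)
print(f"The answer is {image_query.result}")
```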

@positavi (Contributor) left a comment:

A scattering of comments, but really just two things:

1. Escalation isn't always about going straight to a human. It could be to additional models, edge to cloud, etc. Even though today it's just one level, let's sell the dream here.
2. We need to either explain `confidence=None` or just start sending 100%. Or 99%. Or something like that.

updated verbiage
@robotrapta merged commit 714c786 into main Nov 14, 2022
@mjvogelsong deleted the blocking-submit branch April 20, 2023 16:21