
Chat addition #17

Merged: 14 commits, Oct 27, 2023

Conversation

@comath (Contributor) commented Aug 31, 2023

Todo

  • Turn limits in the Challenge creation UI
  • Multiturn front end
  • Grading updates to support multi-turn
  • View updates to support multi-turn

@comath comath mentioned this pull request Aug 31, 2023
@@ -16,6 +17,22 @@

log = getLogger(__name__)

class LLMVChatPair(db.Model):
__tablename__ = 'llmv_chat_pair'
@canyon289 (Collaborator) commented Sep 3, 2023

What's pair mean in this context? Prompt generation pair?

history = llmv_generation.history
history = [h.json() for h in llmv_generation.history]
log.debug(f'Found history "{history}" ')
else:
Collaborator:

This is the block that handles multi-model setups?

@llm_verifications.route('/chat_limit/<challenge_id>', methods=['GET'])
@authed_only
def chat_limit(challenge_id):
"""Define a route for for showing users their answer submissions."""
Collaborator:

Suggested change
"""Define a route for for showing users their answer submissions."""
"""Define a route for showing users their answer submissions."""

@@ -37,8 +38,6 @@ def upgrade(op=None):
sa.Column("model_id", sa.Integer(), nullable=False),
sa.Column("user_id", sa.Integer(), nullable=True),
sa.Column("team_id", sa.Integer(), nullable=True),
sa.Column("text", sa.Text(), nullable=False),
Collaborator:

Moved to the table that supports multi-turn above, if I understand correctly?

@@ -51,16 +88,10 @@ def generate_for_challenge():
# Combine the pre-prompt and user-provided prompt with a space between them.
prompt = request.json["prompt"]
log.debug(f'pre-prompt {preprompt} and user-provided-prompt: "{prompt}"')
idempotency_uuid = str(uuid4())
Collaborator:

Just curious why is this needed?

llmv_models.py Outdated
id = db.Column(db.Integer, primary_key=True)
generation_id = db.Column(db.Integer, db.ForeignKey('llmv_generation.id', ondelete='CASCADE'))
prompt = db.Column(db.Text)
generation = db.Column(db.Text)
Collaborator:

Consider making these nullable in case the model fails for some reason and a generation is not returned
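A minimal sketch of what that suggestion might look like, using plain SQLAlchemy declarative syntax (the plugin itself uses Flask-SQLAlchemy's `db.Model`, but the `nullable=` keyword is the same; the comment on `generation` states the reviewer's rationale):

```python
from sqlalchemy import Column, ForeignKey, Integer, Text
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class LLMVChatPair(Base):
    """One prompt/response turn of a chat. Sketch only, not the PR's final code."""
    __tablename__ = "llmv_chat_pair"

    id = Column(Integer, primary_key=True)
    generation_id = Column(
        Integer, ForeignKey("llmv_generation.id", ondelete="CASCADE")
    )
    prompt = Column(Text, nullable=True)
    # nullable=True: if the model call fails and no generation comes back,
    # the row can still be stored with generation left as NULL.
    generation = Column(Text, nullable=True)
```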

@@ -16,6 +17,22 @@

log = getLogger(__name__)

class LLMVChatPair(db.Model):
__tablename__ = 'llmv_chat_pair'
__table_args__ = {'extend_existing': True}

llmv_routes.py (outdated; resolved)
llmv_routes.py Outdated
if 'generation_id' in request.json:
log.info("Found old generation id, using that")
llmv_generation = LLMVGeneration.query.filter_by(id=request.json['generation_id']).first_or_404()
if llmv_generation.status != "pending":
Collaborator:

Style nit: all these conditions return the same response. Consider collapsing them unless the plan is to change the returned JSON.

            if (llmv_generation.status != "pending" or 
               llmv_generation.account_id != get_current_user().id or 
               llmv_generation.challenge_id != challenge.id):

               response = {'success': False, 'data': {'text': "This challenge is complete.", "id": -1}}
               return jsonify(response)

@comath (author):

It should have custom error messages.
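One way to keep the checks collapsed while still returning a distinct message per failure, as a hedged sketch (the function and argument names are illustrative, not the PR's actual code):

```python
def generation_error(llmv_generation, current_user_id, challenge_id):
    """Return an error message for an unusable generation, or None if it is usable.

    Sketch only: the messages and the calling convention are assumptions,
    not what the PR ultimately shipped.
    """
    if llmv_generation.status != "pending":
        return "This challenge is complete."
    if llmv_generation.account_id != current_user_id:
        return "This conversation belongs to another account."
    if llmv_generation.challenge_id != challenge_id:
        return "This conversation belongs to a different challenge."
    return None
```

The route would then call this once and wrap any non-None result in the same `{'success': False, ...}` response shape shown above.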

llmv_routes.py (outdated; resolved)
@@ -49,6 +48,16 @@ def upgrade(op=None):
sa.ForeignKeyConstraint(["user_id"], ["users.id"]),
sa.PrimaryKeyConstraint("id"),
)
op.create_table(
Collaborator:

It's good to see migrations. Will a contest ever be migrated, though? It feels like a new db would be started each time an event is run, meaning no migration would need to take place?

Can't hurt to have it, just asking.

@comath (author):

This is where the tables are created, the llm_model file just reads the established tables.

remote_llm.py (outdated; resolved)
@canyon289 (Collaborator) commented Oct 25, 2023

@comath I fixed the conflict

comath and others added 6 commits October 26, 2023 14:00
Co-authored-by: Ravin Kumar <7213793+canyon289@users.noreply.github.com>
Co-authored-by: Ravin Kumar <7213793+canyon289@users.noreply.github.com>
Co-authored-by: Ravin Kumar <7213793+canyon289@users.noreply.github.com>
@canyon289 canyon289 changed the title [Draft] Chat addition Chat addition Oct 27, 2023
const result = await response.json();
this.chat_limit = result.data.chat_limit;
if (this.chat_limit > 1) {
this.is_single_turn = false;
Collaborator:

Why two bools? Can't single-turn vs multi-turn be differentiated with just one?

json={"uuid": idempotency_uuid,'prompt': prompt, "system" : preprompt, "model": model, "history": history})
except (requests.Timeout, requests.ConnectionError) as error:
log.debug(error)
# try again, with idempotency key
Collaborator:

Should something happen here? Connection error is just handled twice?
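One common shape for that retry path, sketched with a single idempotency key shared across all attempts (the function name, retry count, and payload fields are illustrative; the backend is assumed to deduplicate on "uuid", which is why a retry after a dropped connection cannot trigger a second generation):

```python
from uuid import uuid4

import requests


def generate_with_retry(url, payload, retries=2, timeout=30):
    """POST a generation request, retrying on connection problems.

    Sketch only: every attempt sends the same "uuid", so a server that
    keys work on that value can return the already-started generation
    instead of running the model twice.
    """
    payload = {**payload, "uuid": str(uuid4())}  # one key for all attempts
    last_error = None
    for _ in range(retries + 1):
        try:
            return requests.post(url, json=payload, timeout=timeout)
        except (requests.Timeout, requests.ConnectionError) as error:
            last_error = error  # transient; retry with the same key
    raise last_error
```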

@canyon289 (Collaborator) commented:

From a static read, LGTM. Merge it and fixups can happen on main.

@canyon289 (Collaborator) commented:

Merging so I can continue fixing things up on main.
