
feat(): remove llm chain for chatopenai #19

Merged
merged 10 commits into from
Jul 28, 2023
Conversation

mattzcarey
Contributor

No description provided.

github-actions bot commented Jul 28, 2023

LOGAF Level 3 - /home/runner/work/GenossGPT/GenossGPT/genoss/llm/hf_hub/base_hf_hub.py

  1. The generate_embedding method raises a NotImplementedError with a message indicating it's not used. If this is the case, consider removing the method or explaining why it's necessary in a comment.
  2. Add docstrings to the generate_answer and generate_embedding methods to explain their purpose and functionality.
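A minimal sketch of both suggestions combined, assuming illustrative names (`BaseHuggingFaceHubLLM` is a stand-in; the real class in `base_hf_hub.py` may differ):

```python
class BaseHuggingFaceHubLLM:  # illustrative stand-in for the real base class
    def generate_answer(self, question: str) -> dict:
        """Generate an answer to `question` using a Hugging Face Hub model."""
        raise NotImplementedError  # real subclasses would call the hosted model

    def generate_embedding(self, text: str) -> list[float]:
        """Embeddings are intentionally unsupported for HF Hub models.

        Raises:
            NotImplementedError: always; only text generation is exposed here.
        """
        raise NotImplementedError("generate_embedding is not used for HF Hub models")
```

Keeping the method with an explanatory docstring (rather than deleting it) makes the unsupported capability explicit to callers.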

LOGAF Level 3 - /home/runner/work/GenossGPT/GenossGPT/genoss/llm/fake_llm.py

  1. The generate_embedding method uses a hardcoded size of 128 for the FakeEmbeddings model. Consider making this a parameter or constant to increase flexibility.
  2. Add docstrings to the generate_answer and generate_embedding methods to explain their purpose and functionality.
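One way to apply the first suggestion, sketched with hypothetical names (`FAKE_EMBEDDING_SIZE` and `FakeLLM` are illustrative, not the project's actual API):

```python
# Module-level default instead of a magic number buried in the method.
FAKE_EMBEDDING_SIZE = 128

class FakeLLM:  # illustrative stand-in for the real fake_llm class
    def __init__(self, embedding_size: int = FAKE_EMBEDDING_SIZE) -> None:
        self.embedding_size = embedding_size

    def generate_embedding(self, text: str) -> list[float]:
        """Return a deterministic fake embedding of the configured size."""
        return [0.0] * self.embedding_size
```

Callers keep the old behavior by default but can now request a different size for tests.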

LOGAF Level 3 - /home/runner/work/GenossGPT/GenossGPT/genoss/llm/openai/openai_llm.py

  1. The api_key is exposed. Consider using a secure method to store and retrieve sensitive data like API keys.
  2. Add docstrings to the generate_answer and generate_embedding methods to explain their purpose and functionality.
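A common way to address the first point is to read the key from the environment at startup. This is a hedged sketch, not the project's actual code; `OPENAI_API_KEY` is the conventional variable name:

```python
import os

def load_openai_api_key() -> str:
    """Fetch the OpenAI API key from the environment instead of hardcoding it."""
    api_key = os.environ.get("OPENAI_API_KEY")
    if not api_key:
        raise RuntimeError("OPENAI_API_KEY is not set")
    return api_key
```

A secrets manager or `.env` file (kept out of version control) works the same way from the application's point of view.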

🔒📚🔧


Powered by Code Review GPT

demo/main.py Outdated
]

-for msg in st.session_state.messages:
+for msg in st.session_state.messages[1:]:  # Skip the system message when displaying
Collaborator

You should probably just add an if branch with a different display.

Something like:

if msg["role"] == "system":
    st.markdown(f"System: *{msg['content']}*")
    continue

If you really don't want to display it, that's fine, but use an if check anyway.

Comment on lines +13 to +16
if TYPE_CHECKING:
from langchain.schema import BaseMessage

from genoss.entities.chat.message import Message
Collaborator

It's not clear to me why you are doing this.

Contributor Author

fix-ruff did this; otherwise it errored. See the other comment.

Collaborator

Does it still do the same if you remove TYPE_CHECKING and go with a normal import?

genoss/llm/base_genoss.py Show resolved Hide resolved
Comment on lines 25 to 29
def _parseMessagesAsChatMessage(self, messages: list[Message]) -> list[BaseMessage]:
    new_messages: list[BaseMessage] = []
    for message in messages:
        new_messages.append(ChatMessage(content=message.content, role=message.role))
    return new_messages
Collaborator

I feel like it's a good move to leave LLMChain. This method should probably live outside this class, though, and be reused across the various LLMs.

Another thing: Python uses snake_case, not camelCase or PascalCase, for functions.

Finally, you may want to use a Python list comprehension:

  return [
    ChatMessage(content=message.content, role=message.role)
    for message in messages
  ]
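Putting the reviewer's suggestions together (snake_case name plus the comprehension) might look like the sketch below. `Message` and `ChatMessage` are minimal stand-ins here for the real `genoss`/`langchain` classes:

```python
from dataclasses import dataclass

@dataclass
class Message:  # stand-in for genoss.entities.chat.message.Message
    content: str
    role: str

@dataclass
class ChatMessage:  # stand-in for langchain's ChatMessage
    content: str
    role: str

def parse_messages_as_chat_messages(messages: list[Message]) -> list[ChatMessage]:
    """Convert internal Message objects to chat messages, one per input."""
    return [ChatMessage(content=m.content, role=m.role) for m in messages]
```

As a free function it can be shared by every LLM class instead of living on one of them.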

Contributor Author

I understand and agree with most of this, but what do you mean by keeping LLMChain?

Calling the LLM directly makes more sense than using a chain IMO. It reduces some latency and removes complexity.

Collaborator

> I feel like it's a good move to leave LLMChain

XD I meant move away from it, not keep it. I guess it's French synonymy. :P

Comment on lines +13 to +15
if TYPE_CHECKING:
from genoss.entities.chat.message import Message

Collaborator

Do you have any reason to do that TYPE_CHECKING thing?
I think we should avoid it unless we have a circular import issue.

Contributor Author

fix-ruff did this; otherwise ruff errored. Happy to change it if you have another option?

Collaborator

I think you probably should just do the import without the TYPE_CHECKING.
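For context, `typing.TYPE_CHECKING` is `False` at runtime, so imports under the guard exist only for type checkers (which is why ruff suggests it for type-only imports). A small self-contained sketch of the behavior being discussed, using `decimal` as a stand-in for the guarded `genoss` import:

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Visible to mypy/ruff for annotations, but never executed at runtime.
    from decimal import Decimal

def runtime_sees_guarded_import() -> bool:
    """Report whether the guarded name actually exists at runtime (it doesn't)."""
    return "Decimal" in globals()
```

A plain top-level import is simpler and is the usual choice unless the guard is needed to break a circular import.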

@StanGirard StanGirard merged commit 023be34 into main Jul 28, 2023
2 checks passed
@StanGirard StanGirard deleted the feat/use-chat-llm branch July 28, 2023 16:38