Integrate AI Writing Feedback and Autocomplete with Transformers #47


Open · wants to merge 1 commit into master
Conversation

MostlyKIGuess (Member)

Changes:

  • Added AIFeedback.get_feedback() to generate detailed writing feedback using MBZUAI/LaMini-Flan-T5-248M.
  • Extended AIFeedback with get_autocomplete() (implementation pending) to generate writing suggestions.
  • Integration enables AI copilot-like behavior with inline suggestions.
  • Updated error handling for CUDA fallbacks.
  • Added transformers as a dependency in setup.py under install_requires.
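The first bullet's wiring could look roughly like the sketch below. The class and method names come from the PR description, but the pipeline task, prompt text, and lazy-loading scheme are assumptions about how such a helper might be structured, not the actual patch:

```python
class AIFeedback:
    """Generates writing feedback with a local seq2seq model.

    Hypothetical sketch; only the names AIFeedback / get_feedback and
    the model ID come from the PR description.
    """

    MODEL_ID = "MBZUAI/LaMini-Flan-T5-248M"

    def __init__(self):
        self._pipe = None  # loaded lazily so the activity starts fast

    @staticmethod
    def pick_device(cuda_available):
        # CUDA fallback: GPU index 0 when available, else -1 for CPU,
        # following the transformers.pipeline(device=...) convention.
        return 0 if cuda_available else -1

    def _load(self):
        # Lazy imports keep the activity importable even when
        # transformers is not installed; the model is fetched on
        # first use, which is also where a no-network failure would
        # surface instead of at startup.
        import torch
        from transformers import pipeline
        device = self.pick_device(torch.cuda.is_available())
        self._pipe = pipeline("text2text-generation",
                              model=self.MODEL_ID, device=device)

    def get_feedback(self, text):
        if self._pipe is None:
            self._load()
        prompt = ("Give detailed feedback on the following "
                  "writing:\n" + text)
        result = self._pipe(prompt, max_new_tokens=128)
        return result[0]["generated_text"]
```

A first call to `get_feedback()` would download the model if it is not cached, which is where the offline-behavior question raised below becomes relevant.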

Note:

  • The transformers library needs to be installed; I am not sure how to add it as a dependency.
  • AI feedback is obtained by clicking the Idea symbol; autocomplete is triggered from the Search symbol.
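Declaring the dependency the way the changes list describes would look roughly like this in setup.py; the `name` and `version` here are placeholders, not the activity's real metadata:

```python
from setuptools import setup

setup(
    name="example-activity",  # placeholder, not the real metadata
    version="0.0.0",          # placeholder
    install_requires=[
        "transformers",  # model runtime used by AIFeedback
        "torch",         # backend transformers needs for inference
    ],
)
```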

Video:

2025-02-04.15-59-18.mp4


This update improves the writing experience by leveraging state-of-the-art NLP models.
quozl (Collaborator) commented Feb 4, 2025

Missing newline at end of file.

How does this work when there is no internet connection? The application must not hang. What test have you used?

MostlyKIGuess (Member, Author)

> Missing newline at end of file.
>
> How does this work when there is no internet connection? The application must not hang. What test have you used?

I am using a 738-million-parameter model; it uses around 1 GB of GPU memory if a GPU is available, and takes roughly 10 seconds per response on CPU.

Not sure how it holds up on an actual OLPC laptop.

Also, this was done because I saw a summer project proposal about this, and I really do think it is hard. We are planning to build the Sugar AI API first, which is the latest repo in my repositories now, and then use that instead of running fully locally.

There's a Docker image for that if you would like to try it.

quozl (Collaborator) commented Feb 5, 2025

I mean, what test of no internet connection did you run? Also, regarding any changes to dependencies: does something need to be installed first? README.md lists some, but you did not change it.
