Fix split_text chunking bug #2088
Conversation
Asked the team to merge out of band
@vzla0094 we aren't merging into stable, can you change the base branch back to master?
I'm not ready to merge this as is due to code quality. It looks unpythonic. Code should self-document. And there is surely a Pythonic way to chunk a string using something out of itertools maybe, or using a generator.
Closing this as I think #2062 is doing this better
Hey @p-i-, I've checked both solutions by applying the changes to the stable branch, and neither fixed it.
Sure, go ahead. And as @p-i- already mentioned, in rewriting the PR, using existing functionality from the standard library is preferable over DIY implementations. :)
This pull request has conflicts with the base branch, please resolve those so we can evaluate the pull request.
Conflicts have been resolved! 🎉 A maintainer will review the pull request shortly.
Just pushed an update removing the comments and restructuring also. Still don't think it's really easy to understand tho, but what do you guys think? feel free to push modifications or I could also use some third party library for chunking like Funcy. Not a python dev, just trying out things, code works tho but please feel free to point me in the right direction |
This is looking better :)
autogpt/processing/text.py
Outdated
paragraphs = text.split("\n")
current_length = 0
current_chunk = []

def split_long_paragraph(paragraph: str, max_length: int) -> List[str]:
    return [
        paragraph[i : i + max_length] for i in range(0, len(paragraph), max_length)
    ]
I'm not sure how much of a difference it makes for the performance of the LLM, but could you try splitting it on a whitespace (or other non-word) character instead of exactly on the max_length?
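The reviewer's suggestion could be sketched roughly as follows. This is a hypothetical helper (the name split_on_whitespace and the exact logic are assumptions, not code from this PR) that greedily packs whole words into chunks of at most max_length characters:

```python
from typing import List

def split_on_whitespace(paragraph: str, max_length: int) -> List[str]:
    """Split a paragraph into chunks of at most max_length characters,
    breaking on whitespace instead of mid-word where possible.
    A single word longer than max_length is kept whole."""
    chunks: List[str] = []
    current: List[str] = []
    current_len = 0
    for word in paragraph.split():
        # +1 accounts for the joining space when the chunk is non-empty
        extra = len(word) + (1 if current else 0)
        if current and current_len + extra > max_length:
            chunks.append(" ".join(current))
            current, current_len = [], 0
            extra = len(word)
        current.append(word)
        current_len += extra
    if current:
        chunks.append(" ".join(current))
    return chunks
```

Unlike the fixed-size slicing above, this keeps words intact at the cost of slightly uneven chunk sizes.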
ping ;)
After a thorough review of the split_text function I found out we can simplify it a lot using textwrap. No need to have this split_long_paragraph function anymore. Please check the new revised split_text. Also you might want to check the tests I added.
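A textwrap-based split_text along the lines the author describes might look like this sketch (an assumed reconstruction, not the exact revised code from the PR; textwrap.wrap breaks on whitespace and splits over-long words by default):

```python
import textwrap
from typing import List

def split_text(text: str, max_length: int = 8192) -> List[str]:
    """Split text into chunks no longer than max_length, using
    textwrap to break long paragraphs on whitespace."""
    chunks: List[str] = []
    current: List[str] = []
    current_length = 0
    for paragraph in text.split("\n"):
        # textwrap.wrap("") returns [], so substitute an empty piece
        for piece in textwrap.wrap(paragraph, max_length) or [""]:
            # +1 for the newline that rejoins pieces within a chunk
            if current and current_length + len(piece) + 1 > max_length:
                chunks.append("\n".join(current))
                current, current_length = [], 0
            current.append(piece)
            current_length += len(piece) + 1
    if current:
        chunks.append("\n".join(current))
    return chunks
```

Delegating the word-boundary logic to the standard library is what makes the split_long_paragraph helper unnecessary.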
@s0meguy1 I'm not sure about any other failing function but this
@vzla0094 doesn't look too hard to refactor
@Pwuts it does look like an easy fix hahah but don't want to risk having to spend more time on this in case some edge case arises lol. If this one is merged I might find some time tomorrow to do the quick fix of the other one :)
This PR splits the text based on character count, not token count. It also splits in the middle of a sentence. Can I recommend that you take a look at #2542, which solves these issues?
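The token-count approach mentioned here would measure chunk size with a tokenizer rather than len(). A rough sketch, with a stand-in word-count function as a crude proxy for a real tokenizer (the names chunk_by_tokens and word_count are hypothetical; real code would plug in an actual tokenizer):

```python
from typing import Callable, List

def chunk_by_tokens(
    sentences: List[str],
    max_tokens: int,
    count_tokens: Callable[[str], int],
) -> List[str]:
    """Greedily pack whole sentences into chunks of at most max_tokens,
    so chunks never break mid-sentence."""
    chunks: List[str] = []
    current: List[str] = []
    current_tokens = 0
    for sentence in sentences:
        n = count_tokens(sentence)
        if current and current_tokens + n > max_tokens:
            chunks.append(" ".join(current))
            current, current_tokens = [], 0
        current.append(sentence)
        current_tokens += n
    if current:
        chunks.append(" ".join(current))
    return chunks

# Stand-in token counter: word count as a crude proxy for tokens.
word_count = lambda s: len(s.split())
```

Counting tokens matters because LLM context limits are expressed in tokens, not characters, so a character-based max_length only approximates the real budget.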
If you want, you can just merge this PR, after all, vzla0094 put some work into it, and then I'll just adjust my PR to make the additional changes on top of it.
Whatever's best for everyone 🤷♂️ but yeah I think you might want to use the tests at least. Nice job on your PR btw
This pull request has conflicts with the base branch, please resolve those so we can evaluate the pull request.
This is a mass message from the AutoGPT core team. For more details (and for info on joining our Discord), please refer to:
The split_text function on the master branch has chunking. This issue should no longer exist. Please sync to the latest and retry. |
@vzla0094 sorry, we didn't get to cherry-picking the tests yet, just didn't have the time. They also don't conform to the test structure used in the rest of the project, and the rest of the PR is obsolete by now. As such, we can't merge it. :/ I'm going to close this PR, with a big thanks for your efforts and the inspiration that your solution provided. You are welcome to submit a PR implementing tests for the text processing module that is currently in
Background
Handle long paragraphs in the split_text function by splitting them into smaller chunks, ensuring that no chunk exceeds max_length.

Fixes: #1820, #1211, #796, #38
Changes
- Updated the split_text function to handle paragraphs longer than max_length by splitting them into smaller chunks
- Split long paragraphs into sub_paragraphs of length max_length
- Appended sub-paragraphs to current_chunk, updating current_length
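The changes listed above amount to something like this sketch (a reconstruction from the description, not the merged code):

```python
from typing import List

def split_text(text: str, max_length: int = 8192) -> List[str]:
    chunks: List[str] = []
    current_chunk: List[str] = []
    current_length = 0
    for paragraph in text.split("\n"):
        # Slice any over-long paragraph into fixed-size sub_paragraphs;
        # an empty paragraph yields one empty piece
        sub_paragraphs = [
            paragraph[i : i + max_length]
            for i in range(0, len(paragraph), max_length)
        ] or [""]
        for sub in sub_paragraphs:
            # +1 for the newline that rejoins pieces within a chunk
            if current_chunk and current_length + len(sub) + 1 > max_length:
                chunks.append("\n".join(current_chunk))
                current_chunk, current_length = [], 0
            current_chunk.append(sub)
            current_length += len(sub) + 1
    if current_chunk:
        chunks.append("\n".join(current_chunk))
    return chunks
```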
Documentation
Test Plan
- Tested the split_text function with different input text scenarios, including long paragraphs and varying max_length values
- Verified that no chunk exceeds max_length
PR Quality Checklist