Add download_with_index method #41257
Closed: psoldier wants to merge 30 commits into rails:main from psoldier:active_storage_add_dowload_with_index
Conversation
Follow-up to rails#40960. This fixes a few different visual issues with links and table rows when using dark mode. Co-authored-by: Chris Seelus <chris@imeos.com>
Before this commit, Rails test Rake tasks only load the test files, and the tests only run in an at_exit hook via minitest/autorun. This prevents conditionally running tasks only when tests pass, or even more simply running them in the right order. As a simple example, if you have `task default: [:test, :rubocop]`, the rubocop task will run after the test task loads the test files but before the tests actually run. This commit changes the test Rake tasks to shell out to the test runner as a new process. This diverges from previous behavior because now, any changes made in the Rakefile or other code loaded by Rake won't be available to the child process. However, this brings the behavior of `rake test` closer to the behavior of `rails test`. Co-authored-by: Adrianna Chang <adrianna.chang@shopify.com>
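The ordering problem described above can be simulated in plain Ruby. This is an illustrative sketch, not Rake or minitest code: the `deferred` array stands in for the at_exit queue that minitest/autorun uses, and the lambda names are made up.

```ruby
# Simulate `task default: [:test, :rubocop]` where the test task only
# loads files and minitest/autorun defers the actual run to at_exit.
events = []
deferred = [] # stand-in for the process's at_exit queue

run_test_task = lambda do
  events << "test task loads test files"
  # What minitest/autorun defers until process exit:
  deferred << lambda { events << "tests actually run" }
end

run_rubocop_task = lambda { events << "rubocop runs" }

# Rake executes the task bodies in order...
run_test_task.call
run_rubocop_task.call
# ...and only afterwards do the deferred (at_exit) hooks fire,
# so rubocop runs before the tests do.
deferred.each(&:call)
```

Shelling out to the test runner as a child process makes the tests finish inside the test task itself, restoring the expected ordering.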
If you need to download a file in chunks, you can pass a block to the `download` method. However, if the execution of this block fails for some reason, there is no way to resume processing from the last successfully processed chunk. To take advantage of those previous chunks, it was necessary to re-implement chunked reading using the `download_chunk` method, keeping a record of the execution point and updating offsets by hand (something like a local pagination of chunks).

With the `download_with_index` method, each block invocation receives the chunk and (optionally) the current index. In case of failure, you only need to restart the execution with that index:

```ruby
a_tb_file.download_with_index(last_successfully_processed_chunk_index) do |chunk, index|
  # ...
  last_successfully_processed_chunk_index = index
end
```
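As a rough illustration of the idea, resumable chunked reading can be sketched on top of a `download_chunk`-style primitive. Everything below (`FakeBlob`, the tiny `CHUNK_SIZE`, the method body) is a simplified stand-in, not the actual Active Storage implementation:

```ruby
CHUNK_SIZE = 4 # tiny for demonstration; real chunk sizes are megabytes

# A stand-in for a blob: knows its size and can slice out a byte range.
class FakeBlob
  def initialize(data)
    @data = data
  end

  def byte_size
    @data.bytesize
  end

  def download_chunk(range)
    @data.byteslice(range)
  end

  # Yield each chunk together with its index, starting from +start_index+
  # so a caller can resume after a failure.
  def download_with_index(start_index = 0)
    chunk_count = (byte_size / CHUNK_SIZE.to_f).ceil
    (start_index...chunk_count).each do |index|
      offset = index * CHUNK_SIZE
      yield download_chunk(offset...(offset + CHUNK_SIZE)), index
    end
  end
end

# Resume from chunk 1, skipping the already-processed chunk 0:
blob = FakeBlob.new("abcdefgh")
resumed = []
blob.download_with_index(1) { |chunk, index| resumed << [index, chunk] }
# resumed == [[1, "efgh"]]
```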
Ensure test rake commands run immediately
[ci skip] Fix typo in configuration guide
The one provided wouldn't actually change the value in the logs. Based on https://github.com/rails/rails/blob/48ca9e2557d18a01fb3f6002645363f8dea28019/actionpack/lib/action_dispatch/http/filter_parameters.rb#L26
Webpacker guide code typos
Supply an explanatory example for the Regexp type of filter, which is mentioned in the documentation of the initializer but without any illustration.
Provide a working example for a filter_parameters lambda
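The reason the previous example "wouldn't actually change the value in the logs" is that the filter machinery ignores the lambda's return value: the lambda must mutate `value` in place, for example with `String#replace`. A minimal plain-Ruby sketch of that contract, with `apply_filter` as a made-up stand-in for Rails' internals:

```ruby
# A parameter-filter lambda receives key and value; only in-place
# mutation of `value` has any effect on what gets logged.
filter = lambda do |key, value|
  value.replace("[FILTERED]") if /password/i.match?(key) && value.is_a?(String)
end

# Simplified stand-in for how Rails walks the params with the lambda.
def apply_filter(params, filter)
  params.each { |key, value| filter.call(key, value) }
end

params = { "password" => "s3cret", "email" => "user@example.com" }
apply_filter(params, filter)
# params["password"] is now "[FILTERED]"; params["email"] is untouched
```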
Clarify compilation notes [ci skip]
In the docs [here](https://apidock.com/rails/v6.0.0/ActiveSupport/MessageVerifier), under the `Making messages expire` section, it was a little unclear what `doowad` and `parcel` are. Ideally each would be a +string+ or a variable, but since the rest of the documentation does not reference any variables except +token+, these two should be strings.
Use string in the example [ci skip]
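For context, the section in question generates tokens tagged with a purpose and an expiry, and `doowad`/`parcel` are simply string purpose tags. The stdlib-only `MiniVerifier` below is a simplified illustration of that idea, not the real `ActiveSupport::MessageVerifier`:

```ruby
require "openssl"
require "json"

# Toy signed-token generator: payload carries the message, a string
# purpose tag, and an expiry timestamp; an HMAC guards against tampering.
class MiniVerifier
  def initialize(secret)
    @secret = secret
  end

  def generate(message, purpose:, expires_at:)
    payload = JSON.generate(
      "msg" => message, "pur" => purpose.to_s, "exp" => expires_at.to_i
    ).unpack1("H*") # hex-encode so the payload never contains "--"
    "#{payload}--#{digest(payload)}"
  end

  # Returns the message, or nil if the signature, purpose, or expiry fails.
  def verify(token, purpose:)
    payload, signature = token.split("--")
    return nil unless payload && signature == digest(payload)
    data = JSON.parse([payload].pack("H*"))
    return nil if data["pur"] != purpose.to_s || Time.now.to_i > data["exp"]
    data["msg"]
  end

  private

  def digest(payload)
    OpenSSL::HMAC.hexdigest("SHA256", @secret, payload)
  end
end
```

A token generated for the `"parcel"` purpose verifies only when read back with that same string, which is the role the placeholders play in the docs.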
Add example of the regexp parameter filter type [ci skip]
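The Regexp filter type masks any parameter whose key matches the pattern. The sketch below is a simplified stand-in for Rails' parameter filtering (`filtered_key?` and `filter_params` are made-up names), assuming string and regexp filters both match on key substrings:

```ruby
# A mixed filter list, like `config.filter_parameters += [/passw/i, "token"]`.
FILTERS = [/passw/i, "token"].freeze

def filtered_key?(key)
  FILTERS.any? do |f|
    f.is_a?(Regexp) ? f.match?(key.to_s) : key.to_s.include?(f)
  end
end

def filter_params(params)
  params.map { |key, value| [key, filtered_key?(key) ? "[FILTERED]" : value] }.to_h
end

filter_params("user_password" => "x", "Passwort" => "y", "name" => "z")
# => masks both password keys (case-insensitive regexp) and keeps "name"
```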
Tweak dark mode CSS
…riants [ActiveStorage] Add ability to use pre-defined variants
Deserialize enum value to original hash key
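The round-trip being fixed can be sketched in plain Ruby: an enum stores the mapped value and should read back the original hash key. `STATUSES` and the two helpers are illustrative, not Active Record code:

```ruby
# Declared mapping, as in `enum status: { drafted: 0, published: 1 }`.
STATUSES = { "drafted" => 0, "published" => 1 }.freeze

def serialize_status(key)
  STATUSES.fetch(key.to_s) # write the mapped value to the database
end

def deserialize_status(value)
  STATUSES.key(value) # read back the original hash key (nil if unknown)
end
```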
Prevent the raw_params method from throwing an exception if the argument auth is blank. Add tests for the raw_params method. Fix typo. Fix rubocop offenses.
…-not-raise-an-exception Fix raw params method to not raise an exception Fixes rails#41279.
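The shape of the fix, guarding against a blank argument before splitting, can be sketched like this. This is a simplified stand-in for `ActionController::HttpAuthentication::Token.raw_params`, not the actual patch:

```ruby
# Pairs in an Authorization header are separated by commas, semicolons,
# or tabs, optionally followed by whitespace.
AUTHN_PAIR_DELIMITERS = /(?:,|;|\t)\s*/

# Return an empty array instead of raising when auth is nil or blank.
def raw_params(auth)
  return [] if auth.nil? || auth.strip.empty?
  auth.sub(/^(?:Token|Bearer)\s+/, "").split(AUTHN_PAIR_DELIMITERS)
end

raw_params(nil)                            # => [] instead of raising
raw_params("Token token=abc, nonce=def")   # => ["token=abc", "nonce=def"]
```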
Fix #download_chunk for blob instances was missing
…b.com/psoldier/rails into active_storage_add_dowload_with_index