
Stream large binary file over declarative HTTP client. #2906

Closed
soufianenassih opened this issue Mar 10, 2020 · 10 comments
Labels
closed: question The issue is a question

Comments

@soufianenassih

We need to upload large files (nightly GraalVM artifacts) using the GitHub release API and the declarative Micronaut HTTP client. We tried MediaType.MULTIPART_FORM_DATA with the @Body as MultipartBody, but without success, because the GitHub upload endpoint expects a stream of binary data.

After that, we tried MediaType.APPLICATION_OCTET_STREAM and it works as expected with byte[]. The only limitation of this approach is that it requires an array of bytes, and in our situation loading a large file into memory is not a good idea.

Question

Are there any examples of streaming large files using the declarative HTTP client, or any intention to support InputStream for MediaType.APPLICATION_OCTET_STREAM?

Actual usage

@Client("github")
public interface GithubUploadClient {

    @Post(value = "/repos/${github.owner}/${github.repo}/releases/{releaseId}/assets", produces = MediaType.APPLICATION_OCTET_STREAM)
    Asset uploadAsset(@QueryValue("name") String name, @Positive long releaseId, @Body byte[] content);

}

Maybe support this in the future:

@Client("github")
public interface GithubUploadClient {

    @Post(value = "/repos/${github.owner}/${github.repo}/releases/{releaseId}/assets", produces = MediaType.APPLICATION_OCTET_STREAM)
    Asset uploadAsset(@QueryValue("name") String name, @Positive long releaseId, @Body InputStream content);

}
@graemerocher
Contributor

If you must work with an input stream you can probably do something like:

import io.reactivex.BackpressureStrategy;
import io.reactivex.Flowable;
import io.reactivex.schedulers.Schedulers;

import java.io.InputStream;
import java.util.Arrays;

// Read the file in fixed-size chunks and emit each chunk as a byte[],
// so the whole file is never held in memory at once.
Flowable<byte[]> bodyReader = Flowable.<byte[]>create(emitter -> {
    try (InputStream in = getInputStream()) {
        byte[] buffer = new byte[1024];
        int len;
        while ((len = in.read(buffer)) != -1) {
            // Copy the chunk so the reused buffer is not overwritten
            // before the downstream subscriber has consumed it
            emitter.onNext(Arrays.copyOf(buffer, len));
        }
        emitter.onComplete();
    } catch (Throwable e) {
        emitter.onError(e);
    }
}, BackpressureStrategy.BUFFER).subscribeOn(Schedulers.io());

Then declare:

@Body Flowable<byte[]> content

And pass the above flowable.
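
For reference, a minimal sketch (not from the thread) of how the interface from the original post could look with this suggestion applied; the asset name and client variable in the usage line are only examples:

import io.micronaut.http.MediaType;
import io.micronaut.http.annotation.Body;
import io.micronaut.http.annotation.Post;
import io.micronaut.http.annotation.QueryValue;
import io.micronaut.http.client.annotation.Client;
import io.reactivex.Flowable;
import javax.validation.constraints.Positive;

@Client("github")
public interface GithubUploadClient {

    @Post(value = "/repos/${github.owner}/${github.repo}/releases/{releaseId}/assets",
          produces = MediaType.APPLICATION_OCTET_STREAM)
    Asset uploadAsset(@QueryValue("name") String name, @Positive long releaseId,
                      @Body Flowable<byte[]> content);
}

// Usage: pass the chunked Flowable built above
Asset asset = githubUploadClient.uploadAsset("graalvm-nightly.zip", releaseId, bodyReader);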

@soufianenassih
Author

soufianenassih commented Mar 11, 2020

Thanks for your quick feedback. After trying your suggestion, we figured out that the GitHub upload API does not accept streams; it expects the asset data in its raw binary form as the body.

Any suggestions on how we can send a binary file as the body, without loading it all into memory, using the Micronaut declarative client?

Thanks again.

@jameskleeh
Contributor

jameskleeh commented Mar 12, 2020

Looks like there is a library from the RxJava folks to handle converting an InputStream to a byte array observable: https://github.com/ReactiveX/RxJavaString.

I don't see anywhere in that document that says the API doesn't accept streams. Even if you buffered the whole file in memory, it would still be streamed, because the entire file cannot be sent in a single chunk. You likely need to choose a media type that matches the file's contents.

@jameskleeh added the closed: question label on Mar 12, 2020
@graemerocher
Contributor

@soufianenassih my suggestion should work the same; just change the content type to application/zip or whatever type you are uploading.
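
For example, a sketch of the same method with only the media type changed (application/zip is an assumption about the artifact type, not something confirmed in the thread):

    @Post(value = "/repos/${github.owner}/${github.repo}/releases/{releaseId}/assets",
          produces = "application/zip")
    Asset uploadAsset(@QueryValue("name") String name, @Positive long releaseId,
                      @Body Flowable<byte[]> content);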

@denebgarza

I'm having a similar issue, but rather than streaming to GitHub, I'm trying to stream from one Micronaut service to another.

I created a Flowable<byte[]> which I'm passing to my declarative client as a @Body param.

How is it supposed to be consumed on the receiving service? I have the matching controller implementation and am subscribing to the flowable, but I'm receiving nothing. I verified that the InputStream is reading data correctly.

@fstolar-vendavo

fstolar-vendavo commented Feb 24, 2023

I am doing the same, except I am using the Reactor library. The problem is that I am trying to re-stream the data from a @Controller endpoint to a @Client. It works as expected for small files, but I am limited to 10 MB.
I have already configured Micronaut as follows, but it does not work:
services:
  read-idle-timeout: 5m
  write-idle-timeout: 5m
  max-content-length: 7.5GB
  max-request-size: 7.5GB
client:
  max-content-length: 2147483647
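
For comparison, a sketch of where such limits are usually placed with fully-qualified keys in Micronaut configuration; treating these as the relevant settings for this setup is an assumption, and "data-api" as a service id is hypothetical:

micronaut:
  server:
    max-request-size: 7.5GB            # server-side body limit (default is 10485760 bytes)
  http:
    client:
      max-content-length: 2147483647   # default client limit is also 10 MB
    services:
      data-api:                        # hypothetical service id
        read-idle-timeout: 5m
        write-idle-timeout: 5m
        max-content-length: 2147483647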

@graemerocher
Contributor

provide examples of what doesn't work and maybe we can help

@fstolar-vendavo

fstolar-vendavo commented Feb 24, 2023

Thanks for the quick response :)
So I will just provide what I can:
I have a controller with an endpoint that consumes a Flux as the body, using the octet-stream MIME type.
I want to re-stream this data, using the Micronaut HTTP client, to another microservice whose endpoint is written the same way as mine. It is actually acting more like a proxy, with no business logic.
I set http.client.max-content-length to the maximum int size, but the request always fails with a 413 "request too large, max size is 10 MB". I tested that it is not a problem with the controller's endpoint; it happens at the time the HTTP connection is opened using the client. Here is the exception:

io.micronaut.http.exceptions.ContentLengthExceededException: The content length [1073741824] exceeds the maximum allowed content length [10485760]
at io.micronaut.http.server.netty.DefaultHttpContentProcessor.lambda$fireExceedsLength$0(DefaultHttpContentProcessor.java:101)
at java.base/java.util.Optional.ifPresent(Optional.java:178)
at io.micronaut.http.server.netty.DefaultHttpContentProcessor.fireExceedsLength(DefaultHttpContentProcessor.java:100)
at io.micronaut.http.server.netty.DefaultHttpContentProcessor.onUpstreamMessage(DefaultHttpContentProcessor.java:80)
at io.micronaut.http.server.netty.DefaultHttpContentProcessor.onUpstreamMessage(DefaultHttpContentProcessor.java:39)
at io.micronaut.core.async.processor.SingleThreadedBufferingProcessor.doOnNext(SingleThreadedBufferingProcessor.java:56)

import io.micronaut.http.MediaType
import io.micronaut.http.annotation.*
import io.micronaut.http.client.annotation.Client
import reactor.core.publisher.Flux
import reactor.core.publisher.Mono

@Controller("api/inbound/datasets")
class InboundDatasetsController(
    private val dataApiClient: DataApiClient
) {

    @Post("/upload/{datasetType}")
    @Consumes(MediaType.APPLICATION_OCTET_STREAM)
    fun uploadDataset(
        @PathVariable datasetType: String,
        @Body body: Flux<ByteArray>
    ): Mono<InboundDataset> {
        return dataApiClient.uploadFile(datasetType, body)
    }
}

@Client("\${service.data-api.url}")
interface DataApiClient {
    @Post(uri = "/v5/datasets/{datasetType}")
    @Produces("application/octet-stream")
    fun uploadFile(@PathVariable datasetType: String, @Body datasetFile: Flux<ByteArray>): Mono<InboundDataset>
}

@graemerocher
Contributor

So for these kinds of proxying use cases involving the client, today you have to write a filter: https://docs.micronaut.io/latest/guide/#proxyClient

Certainly it would be nice to be able to use a controller, but due to the way buffering works in Netty it is hard to implement.
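
For reference, a minimal sketch of such a filter built on the ProxyHttpClient described in the linked guide; the route prefix, target host, port, and path rewriting below are assumptions fitted to this thread's use case, not something confirmed in it:

import io.micronaut.http.HttpRequest;
import io.micronaut.http.MutableHttpResponse;
import io.micronaut.http.annotation.Filter;
import io.micronaut.http.client.ProxyHttpClient;
import io.micronaut.http.filter.HttpServerFilter;
import io.micronaut.http.filter.ServerFilterChain;
import org.reactivestreams.Publisher;

// Replaces the @Controller endpoint: matching requests are streamed straight
// through to the downstream service without buffering the body.
@Filter("/api/inbound/datasets/upload/**")
public class DatasetProxyFilter implements HttpServerFilter {

    private final ProxyHttpClient client;

    public DatasetProxyFilter(ProxyHttpClient client) {
        this.client = client;
    }

    @Override
    public Publisher<MutableHttpResponse<?>> doFilter(HttpRequest<?> request, ServerFilterChain chain) {
        // Rewrite /api/inbound/datasets/upload/{datasetType} to /v5/datasets/{datasetType}
        // on the downstream service (host and port are assumptions)
        return client.proxy(request.mutate()
                .uri(b -> b
                        .scheme("http")
                        .host("data-api")
                        .port(8080)
                        .replacePath("/v5/datasets" +
                                request.getPath().substring("/api/inbound/datasets/upload".length()))));
    }
}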

@fstolar-vendavo

Thank you very much for your help. Big father of Micronaut 😇 Will definitely rewrite it then.
