
Streaming a response to the client as it's being rendered? #1181

Open
grishka opened this issue Jul 12, 2020 · 2 comments
grishka commented Jul 12, 2020

The template engine I'm using is capable of outputting into a Writer. I'd like to use that to avoid having the client wait while the entire template is rendered into memory and the result then transferred over the network. Additionally, I'd like the code doing that not to be weird.

Weird solution that works: render a template into response.raw().getWriter() in a route handler. Whatever is returned from the route handler is then ignored.
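For context, here's a minimal sketch of that "weird" approach. A StringWriter stands in for response.raw().getWriter(), and renderTemplate is a hypothetical stand-in for a real template engine's render-to-Writer method:

```java
import java.io.IOException;
import java.io.StringWriter;
import java.io.Writer;

public class StreamingDemo {
    // Hypothetical stand-in for a template engine that can output to a Writer.
    // A real engine would write the template chunk by chunk as it renders,
    // so each write reaches the client immediately instead of being buffered.
    static void renderTemplate(String name, Writer out) throws IOException {
        out.write("<html>rendered " + name + "</html>");
        out.flush();
    }

    public static void main(String[] args) throws IOException {
        // In a Spark route handler the Writer would be response.raw().getWriter();
        // a StringWriter stands in for it here.
        StringWriter clientStream = new StringWriter();
        renderTemplate("index", clientStream);
        System.out.println(clientStream);
    }
}
```

The route handler's own return value plays no part here; everything the client sees went through the Writer.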

But then I'd like to return an object, like Spark's ModelAndView but a bit better, from a route, and have the rest handled behind the scenes to avoid unnecessary duplication. Nice solutions that don't work:

  • An after filter doesn't work because I can't seem to find a way to get the object returned by the route. response.body() returns null, but even if it did return something, that would still be a string according to its signature.
  • A ResponseTransformer isn't that nice because you have to specify one for each route, but in addition to that, you can't get to the Response from there to use the writer; you take an object returned by the route and return a string. I don't want a string.
  • TemplateEngine has the exact same problems as ResponseTransformer, it expects you to render everything in one go with the returned object as its only input.

Ideally, I'd like the ability to register a global "response transformer" that's like an after filter that only runs when an object of a specific type is returned by a route, and takes the request, the response, and the returned object. Something like this:

get("/test", (req, resp)->{
    return new RenderedTemplateResponse("index").with("hello", "world");
});

responseTypeTransformer(RenderedTemplateResponse.class, (req, resp, obj)->{
    obj.renderToResponse(req, resp);
    return "";
});

// Method:
public static <T> void responseTypeTransformer(Class<T> cls, ResponseTypeTransformer<T> transformer);
// Interface:
@FunctionalInterface
public interface ResponseTypeTransformer<T>{
    public Object transform(Request request, Response response, T object);
}

But that's the ideal, and it doesn't exist. Is there a way to achieve what I want with the existing implementation without getting too hacky?
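To make the proposal concrete, here's a plain-JDK sketch of how such a type-keyed transformer registry could dispatch on the route's return type. Everything here is hypothetical (not Spark API); Request and Response are kept as plain Objects so the sketch stays self-contained:

```java
import java.util.HashMap;
import java.util.Map;

public class TransformerRegistry {
    // Hypothetical three-argument transformer, as proposed above.
    @FunctionalInterface
    public interface ResponseTypeTransformer<T> {
        Object transform(Object request, Object response, T object);
    }

    private static final Map<Class<?>, ResponseTypeTransformer<?>> transformers = new HashMap<>();

    // Register a transformer that only runs for route return values of type cls.
    public static <T> void responseTypeTransformer(Class<T> cls, ResponseTypeTransformer<T> t) {
        transformers.put(cls, t);
    }

    // The framework would call this after the route handler returns.
    @SuppressWarnings("unchecked")
    public static Object dispatch(Object request, Object response, Object returned) {
        for (Map.Entry<Class<?>, ResponseTypeTransformer<?>> e : transformers.entrySet()) {
            if (e.getKey().isInstance(returned)) {
                return ((ResponseTypeTransformer<Object>) e.getValue()).transform(request, response, returned);
            }
        }
        return returned; // no transformer registered for this type
    }
}
```

With this in place, a RenderedTemplateResponse transformer would be registered once and apply to every route, instead of per-route like ResponseTransformer.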


grishka commented Jul 12, 2020

Actually, the solution that "works" does so with a quirk: if you write into the response from the route handler, then by the time execution reaches the after/after-after filters, the response has already been sent and can no longer be modified.


grishka commented Jun 15, 2021

Okay, it turns out the necessary infrastructure is already in place, but it's not exposed to the outside. With the magic (and curse) of Java reflection, I did this:

	private static void setupCustomSerializer(){
		try{
			// Spark.getInstance() is not public, so reflect to get the singleton Service
			Method m = Spark.class.getDeclaredMethod("getInstance");
			m.setAccessible(true);
			Service svc = (Service) m.invoke(null);
			// Dig through the private fields: Service -> EmbeddedJettyServer -> JettyHandler -> MatcherFilter
			Field serverFld = svc.getClass().getDeclaredField("server");
			serverFld.setAccessible(true);
			EmbeddedJettyServer server = (EmbeddedJettyServer) serverFld.get(svc);
			Field handlerFld = server.getClass().getDeclaredField("handler");
			handlerFld.setAccessible(true);
			JettyHandler handler = (JettyHandler) handlerFld.get(server);
			Field filterFld = handler.getClass().getDeclaredField("filter");
			filterFld.setAccessible(true);
			MatcherFilter matcher = (MatcherFilter) filterFld.get(handler);
			// The MatcherFilter holds the SerializerChain that turns route return values into bytes
			Field serializerChainFld = matcher.getClass().getDeclaredField("serializerChain");
			serializerChainFld.setAccessible(true);
			SerializerChain chain = (SerializerChain) serializerChainFld.get(matcher);
			Field rootFld = chain.getClass().getDeclaredField("root");
			rootFld.setAccessible(true);
			Serializer serializer = (Serializer) rootFld.get(chain);
			// Prepend my own serializer to the chain, delegating to the original root
			ExtendedStreamingSerializer mySerializer = new ExtendedStreamingSerializer();
			mySerializer.setNext(serializer);
			rootFld.set(chain, mySerializer);
		}catch(Exception x){
			x.printStackTrace();
		}
	}

ExtendedStreamingSerializer is my class that extends spark.serialization.Serializer; its two abstract methods are pretty self-explanatory.
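For readers who haven't looked inside Spark: the serializer chain is a chain of responsibility, and as far as I can tell spark.serialization.Serializer boils down to a canProcess(Object) check plus a process(OutputStream, Object) write. Here's a self-contained mimic of that shape (local classes, not the actual Spark source), so the subclass above is easier to picture:

```java
import java.io.IOException;
import java.io.OutputStream;

// Mimic of the chain-of-responsibility shape behind spark.serialization.Serializer.
// Method names match what the real class appears to expose; treat this as a sketch.
abstract class ChainSerializer {
    private ChainSerializer next;

    public void setNext(ChainSerializer next) { this.next = next; }

    // true if this link knows how to serialize the route's return value
    public abstract boolean canProcess(Object element);

    // write the element to the response stream
    public abstract void process(OutputStream out, Object element) throws IOException;

    // walk the chain until a link accepts the element
    public void processElement(OutputStream out, Object element) throws IOException {
        if (canProcess(element)) {
            process(out, element);
        } else if (next != null) {
            next.processElement(out, element);
        }
    }
}

// Stand-in for Spark's default root serializer: accepts anything.
class FallbackSerializer extends ChainSerializer {
    public boolean canProcess(Object element) { return true; }
    public void process(OutputStream out, Object element) throws IOException {
        out.write(String.valueOf(element).getBytes());
    }
}

// Analogue of ExtendedStreamingSerializer: claims one specific type, streams it,
// and lets every other return value fall through to the rest of the chain.
class StreamingSerializer extends ChainSerializer {
    public boolean canProcess(Object element) { return element instanceof StringBuilder; }
    public void process(OutputStream out, Object element) throws IOException {
        out.write(("streamed:" + element).getBytes());
    }
}
```

Prepending a custom link and pointing its next at the old root (as the reflection code above does) leaves every existing type's serialization untouched.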

So now this feature request becomes "let applications plug into this serializer infrastructure without hacking their way through private fields".
