RFC: Micro Response Type #369
Comments
Hello @rauchg, I'm using micro to serve a
The suggested approach feels much more natural to me. I feel better using it, and it looks safer. Probably because:
I'm not sure: is it bad that we can't continue doing some work after sending a response anymore? React is a delight, and one reason is that it's just JavaScript; it always resonates with me when we get closer to the language and avoid building new APIs on top of it when the same thing can be achieved with plain language features. 👏
+1 for "single way to return a response".
That would be bad. But I think we can still use a Promise to do work asynchronously; it just doesn't look as straightforward as:

module.exports = function (req, res) {
  send(res, 200, 'hi')
  // do something
}
Hey @rauchg, I'm still trying to fully understand your approach here. I think I am/was struggling because this is an incomplete picture; seeing the corresponding changes inside micro itself would complete it. My first (and current) hesitation is that you can't/shouldn't issue new responses imperatively anymore. So, the only way I can see this being done is by passing around a wrapper like:

class Response {
constructor(opts) {
this.body = null;
this.status = opts.status;
this.headers = new Map();
    if (opts.headers) this.setHeaders(opts.headers);
if (opts.body !== void 0) this.setBody(opts.body);
}
setHeaders(obj) {
for (let k in obj) {
this.headers.set(k.toLowerCase(), obj[k]);
}
}
setStatus(code) {
this.status = code;
}
setBody(any) {
if (any == null) {
this.setStatus(204);
// } else if ( stream | buffer | object ) {
// set content-type && content-length headers
}
this.body = any; // JSON.stringify for object
}
end(res) {
res.statusCode = this.status || 200;
this.headers.forEach((val, key) => {
res.setHeader(key, val);
});
res.end(this.body || '');
}
}
module.exports = function (body, status, headers) {
return new Response({ status, body, headers });
};
module.exports.from = function (res) {
return new Response(res);
}
This offloads the dynamic body-type handling from the core. I'm still not sure if this is what you have in mind, but it's the only way my peanut can make sense of it... so far 🙃
+1. To be clear, I like the proposal in the way it makes clear what is returned. Personally, I still like handling errors myself:

const { ValidationError } = require('objection')
const errorHandler = async (req, res, err) => {
if (err instanceof ValidationError) {
// if it failed against the JSONSchema
send(res, 400, `Validation Error. ${Object.keys(err.data)}`)
} else {
...
}
}

I can also make error patterns in this way to catch a few errors (e.g. specific error classes). Personally, I both like and dislike that we need to add
I am not sure if I should add this line too, but here we go: I'm new to the Node world, and when I first started with Express I did not like the way it worked. Micro, on the other hand, was simple. Thank you guys for improving micro!
So, to understand: if the 'throw' behaviour is deprecated, would the correct way now be return res('something').setStatus(403)? Oh, as a side note, I think the middleware example should be a function that returns a function that accepts the request.
@ramiel I don't think so. It just makes it clear that it is now a feature of "userland". Error handling like this:

const handleErrors = fn => async (req, res) => {
try {
return await fn(req, res)
} catch (err) {
const statusCode = err.statusCode || 500
const message = err.message || 'Internal Server Error'
console.error(err)
return send(res, statusCode, message)
}
}

would still be a thing, and we would change it to:

const res = require('micro/res')
const handleErrors = fn => async (req) => {
try {
return await fn(req)
} catch (err) {
const statusCode = err.statusCode || 500
const message = err.message || 'Internal Server Error'
console.error(err)
return res(message, statusCode)
}
}

But again, it is a specific implementation and not an enforced pattern of micro itself.
Absolutely @paulogdm. You just wrote the userland implementation of it.
@ramiel for the most part, I've learned this through my own usage of micro. Even if you are working with third-party libraries that throw, e.g.:

const res = require('micro/res');
const json = require('micro/json');
module.exports = async (req) => {
  try {
    await validateInput(await json(req));
  } catch (err) {
    return res(`Validation Failed: ${err.code}`, 400);
  }
  return res({ ok: true });
}
The only part that is not fully clear to me is whether in the example above we should allow:

return { ok: true }

Or if we should always enforce the Response type.
@rauchg Enforcing the Response type can feel foreign at first. Although, that can just be what our eyes are used to at the moment. So by allowing plain returns, just like we return an object from a normal helper function, this function is just JavaScript and we do the HTTP magic behind the scenes. Apart from that, it's good to keep in mind that many users will use plain returns.
We should warn the user with a nice error in development in my opinion :) |
Good point. I was thinking about throwing if you return
I use micro for a Slack bot and often call
I like the proposed method, but I'm not sure how it fits with that particular use case.
@Zertz you could use try/finally:

module.exports = async req => {
try {
await doesThisRequestLookValid();
return res('OK', 200);
} finally {
await doActualWork();
}
}

Or create a higher-order function which wraps your handler.
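A sketch of such a higher-order function, under the proposed "return a Response" model. Note that `res` here is a minimal stand-in for the proposed micro/res helper, and `withDeferredWork` is a hypothetical name, not part of micro:

```javascript
// Minimal stand-in for the proposed micro/res helper (assumed signature).
const res = (body, status = 200) => ({ body, status });

// Hypothetical wrapper: produce the response first, then run slow work.
const withDeferredWork = (handler, work) => async (req) => {
  const response = await handler(req);
  // Kick off the slow work without awaiting it, so the response value is
  // returned (and sent) immediately; failures are only logged.
  Promise.resolve()
    .then(() => work(req))
    .catch((err) => console.error(err));
  return response;
};

// Usage: acknowledge the Slack request right away, then do the real work.
const handler = withDeferredWork(
  async () => res('OK', 200),
  async () => { /* call the Slack API here */ }
);
```

The design choice is that the deferred work never influences the response, so a failure in it cannot corrupt an already-sent reply.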
That feels slightly awkward at the moment, though I do like the idea of a response basically being a return value, just like calling a function.
I'm a fan of the proposed changes. This would further simplify micro and push the syntax sugar (e.g. send) into userland. For the transition, most of the functionality could be offered as a userland shim between versions. This would ensure current userland code isn't immediately deprecated and can continue to be used via the inclusion of a version bridge. It would also be easy to test: simply extract the current tests for this functionality into the corresponding packages. Apps written in vanilla micro will become even simpler than they are today given the proposed single method of response. I think this is a good thing.
I'm using restify now. Made a GraphQL API server with it recently and it's beautiful.
This makes micro follow the "server as a function" model: every http-server/http-client is a function with the signature Request => Promise<Response>. I am really excited to see this feature merged into master and I would like to help.
@alisabzevari thanks for your interest and nice comment. If you want to start implementing this, we can provide guidance. We'd also love to port micro to TypeScript and have great @types support.
I'd love to help out on this.
Count me in for anything.
Thanks @rapzo! @alisabzevari please share your email addresses and I'll set up a Slack channel to work on this project. |
I believe my email is public here.
Slack? What about spectrum? :-P
Anyways, I'm there with the same handler.
Thanks for the opportunity.
I would love to help.
Any update on this from ZEIT folks? I'm curious because this sounds like a great evolution of the API.

Also curious... in the body of @rauchg's writeup there's an example for CORS handling, where you end up writing a "middleware" like this:

withCORS(req =>
  res(200, 'Hi there')
)

Doesn't this kind of eliminate the benefits of "no middleware"? To me it seems like it would be nice if Micro instead encouraged doing something like:

req => {
const r = res(200, 'Hi there')
cors(r)
return r
}

This way even these middleware stay as simple functions, instead of dealing with the confusion of having to compose wrappers. For example, by avoiding middleware it can be easy to add tracing to specific logic:

req => {
  let r = res(200, 'Hi there')
  trace('prettify', () => {
    r = prettify(r)
  })
r = cors(r)
return r
}

I'm thinking about this these days because trying to add tracing to Express-like middleware is near impossible. And even with Koa it's not solved, since you end up accidentally over-counting the durations because middleware awaits the downstream chain. If so, then it seems like you can't simply write

const res = somethingElse(...)

because the res helper name would already be taken. I bring all of this up not to say that you couldn't do the "middleware" approach. But I'd think that Micro's docs should actively discourage people from doing it, because it eliminates some of the gains it has made. And it would be unfortunate if the ecosystem of NPM packages were still being made in the Express/Koa-like fashion. What do you think?
Middlewares, or here Filters (based on the "server as a function" article), are mostly used for cross-cutting concerns which are not part of the business logic in a handler. IMO functionalities like authentication, response deserialization based on request media-type, logging, error handling, etc., which are all application agnostic, are good examples of Filters. You don't want to pollute your handler with these concerns. Also, you don't want to involve them when you write unit tests for your handlers, because you want to focus on the business logic under test.

Adding to Guillermo's comment, I think middlewares are a bad idea because they obscure the chain of responsibility. It is not easy to understand which middlewares will be involved when the server is handling a request. You have to dig into the code to find out which middlewares will be effective for a request handler: they can be defined globally in a different file, for a top-level router, or specifically for the route handler that you are focusing on. Although you can mitigate the problem by always specifying all the middlewares specific to the handler, using Filters you can still have more general and composable functionalities, not break the chain of responsibility, and have decoupled layers in your application.

But in the end, you are free to use the middlewares as you described.
@alisabzevari can you give a small code sample of what you consider a "filter" vs. a "middleware"? I'm not sure exactly what you're arguing for otherwise, because it's hard for me to tell what you think is the ideal way to do things like auth, CORS, error handling, etc.

What I'm arguing is that it's possible that "composition", like:

withCORS(req =>
  res(200, 'Hi there')
)

…being the standard way the community coalesces around adding functionality will still be harmful, since it makes it very hard to customize those behaviors (for me the use case is adding tracing, but it could be others). What you have is essentially middleware for the things you decide to compose. Whereas something more imperative like:

(req, res) => {
cors(req, res)
send(res, 200, 'Hi there')
}

…doesn't suffer from that issue. And it also benefits from being able to rely entirely on Node.js built-ins for requests and responses, instead of creating yet another abstraction that isn't core to the platform.

It seems similar to me to React's DX, which folks might be familiar with. The older pattern of using HOCs to compose up behaviors is very hard to reason about, whereas using the more imperative Hooks inline when you need them is much easier to follow.
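For concreteness, here is a minimal runnable sketch of the imperative style referenced in the snippet above. Both `cors` and `send` are hypothetical stand-ins operating on a plain res-like object, not micro's actual implementations:

```javascript
// Hypothetical imperative helpers: each is just a plain function call.
const cors = (res) => {
  res.setHeader('Access-Control-Allow-Origin', '*');
};

const send = (res, statusCode, body) => {
  res.statusCode = statusCode;
  res.end(typeof body === 'string' ? body : JSON.stringify(body));
};

// A handler in the imperative style.
const handler = (req, res) => {
  cors(res);
  send(res, 200, 'Hi there');
};

// A tiny mock response object, enough to exercise the handler.
const mockRes = {
  headers: {},
  setHeader(k, v) { this.headers[k] = v; },
  end(body) { this.body = body; },
};
handler({}, mockRes);
```

Because each step is an ordinary call, wrapping any one of them in tracing or toggling it off is a one-line change.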
A Filter is a higher-order function which accepts a handler, decorates it with some functionality, and returns a new handler. The following types define these concepts:

type RequestHandler = Request => Promise<Response>
type Filter = RequestHandler => RequestHandler
// This will be the CORS filter:
function withCors(next) {
return async (req) => {
const res = await next(req);
return res.setHeaders({
'Access-Control-Allow-Origin': '*'
});
}
}
const myHandler = req => res(200, 'Hi there');
const decoratedHandler = withCors(myHandler);

The benefit of this implementation is that the handler itself stays a plain function you can unit test on its own.
Okay, that's what I figured, thanks! But I'm not sure I see the benefit of declaring things that way...

const server = micro(withCors(req => {
return res(200, 'Hi there')
}))

What is gained from the composition? And from the immutability? I'm trying to figure out where the benefit lies. Whereas you achieve the exact same thing with:

const server = micro((req, res) => {
  cors(res)
  send(res, 200, 'Hi there')
})

…but it's much easier to reason about, and to toggle functionality on and off. Or, if immutability has real benefits, then:

const server = micro(req => {
let r = res(200, 'Hi there')
r = cors(r)
return r
})

@rauchg what are your thoughts on this? I was kind of excited to think that Micro offered what no other routing system did: a clear, imperative approach to layering logic together to handle a request and write a response. And with things like
Thinking about this more... One thing that is nice about Micro right now is that it just uses the native Node.js request and response objects. For example, right now Micro assumes that you want non-pretty-printed JSON in production, but lots of APIs (e.g. Stripe) prefer to make the default pretty-printed instead. To achieve this currently, you'd need to write your own wrapper.
@ianstormtaylor In the same way, you'd write some wrapper of your own, e.g.:

import { res, Body } from 'micro';
import { OutgoingHttpHeaders } from "http";
function apiRes(body: Body, statusCode?: number, headers?: OutgoingHttpHeaders) {
if (typeof body === "object" || typeof body === "number") {
body = JSON.stringify(body, null, 2);
headers = headers || {};
headers["Content-Type"] = "application/json; charset=utf-8";
}
return res(body, statusCode, headers);
}
I'm extremely late to the party on this, but I love this direction. At npm, we did this with spife and it worked out very nicely. (Implementation here.) In particular, we used well-known symbols on the returned object to carry the status and headers. Essentially, this looked like:

server.get('/hello/:target', handler)
async function handler (req, { target = 'world' }) {
return {
    message: `hello ${target}`,
[Symbol.for('status')]: 418, // teapot
[Symbol.for('headers')]: {
'x-clacks-overhead': 'gnu/terry pratchett'
}
}
However, the tradeoffs with symbols are:
So, YMMV with Symbols. OTOH, a wrapper type can be useful. With entropic (& boltzmann, which is written on top of micro) we used the Fetch API's Response type.

In general, with regards to doing work after the response is sent, either approach will look something like this:

async function handler (request) {
doDeferredWork().catch(err => {
// log the error
})
return {something: 1}
}
async function doDeferredWork() {
// do deferred work here.
And sending back event stream data can look something like this:

const { Readable } = require('stream')
async function handler (request) {
return new Readable({
objectMode: true,
read () {
this.push({'hello': 'world'})
}
})
}
// or
async function *handler (request) {
for (const xs of [1,2,3]) {
await sleep(1)
yield xs
}
}
Hey guys, love to see TypeScript support. Does it make sense to split the TypeScript rewrite (part of #383) from the discussion about the API? This would help users of micro and TS to migrate to the new version of micro.

EDIT: I'm aware of https://github.com/zeit/micro/blob/master/packages/micro/micro.d.ts, but I'm more comfortable with TypeScript definitions generated from TypeScript code.
Thanks for the idea. |
@rauchg one issue I see with this proposal is that it does not appear to provide a mechanism for accumulating/communicating state across composed handlers. Middleware, as you know, typically mutates the request or response objects, which is bad. Marius' server-as-function paper describes a richer request type that includes, for example, authentication details. Because micro is working with low-level req/res types, that approach would involve progressively extending the request type, which sounds messy. All this to say that, short of some sort of monadic solution, it seems as though the GraphQL solution of passing a user-defined context object would solve most use cases. That would mean that handlers would take a context argument in addition to the request.
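A sketch of that GraphQL-style context idea: handlers receive a user-defined context object alongside the request, and filters extend it immutably instead of mutating req. All names here (`res`, `withAuth`) are hypothetical stand-ins, not part of micro:

```javascript
// Minimal stand-in for the proposed micro/res helper.
const res = (body, status = 200) => ({ body, status });

// A filter that authenticates the request and extends the context
// immutably, instead of attaching state to req.
const withAuth = (next) => async (req, context = {}) => {
  const user = { id: 'u1' }; // real credential lookup elided
  return next(req, { ...context, user });
};

const handler = withAuth(async (req, context) =>
  context.user ? res(`hello ${context.user.id}`, 200) : res('Unauthorized', 401)
);
```

Because each filter returns a new context object, no handler in the chain can be surprised by hidden mutation.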
Hey guys, any movement on the API, TypeScript, or a release? Or is micro abandoned?
@Bessonov please check out
@Bessonov this API didn't get traction for various reasons.
The idea is to introduce an "envelope"-type construct into micro that would allow us to express a broader range of HTTP responses as a simple return statement.

It restores a concept absent in the core Node.js HTTP API: a request comes in (function argument), a response goes out (return).

Summary
- New micro/res return type
- Stop inferring statusCode from thrown errors
- Deprecate micro/send as an imperative wrapper ala res.end
- TypeScript / Flow types exports

Examples
Respond with 404
Respond with 204
Respond with a JSON body
Respond with a stream
Respond with JSON and headers
Fun Trivia: Symmetry
If we avoid { and }, it almost looks like we're using the imperative res (similar to our current send):

Composition
The return value from res(200) would be a Response object that works like an immutable-js Record.

I've written before about why middleware as a framework construct is a bad idea, because functional programming is the ultimate middleware framework. Our language already has all the tools we need.
So let's imagine we want to create "middleware" that always sends CORS headers to all your responses. The end goal would be to use it as follows:
So, how do we write withCORS?

Which can actually be simplified as
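The withCORS code from the original post was lost in extraction; here is a sketch of one plausible shape, consistent with the Filter example later in the thread. `makeResponse` is a minimal immutable stand-in for the proposed Response type, not micro's actual implementation:

```javascript
// Minimal immutable Response stand-in: every "setter" returns a new value.
const makeResponse = (body, status = 200, headers = {}) => ({
  body,
  status,
  headers,
  // setHeaders merges headers and returns a *new* response (immutability).
  setHeaders(extra) {
    return makeResponse(body, status, { ...headers, ...extra });
  },
});

// withCORS decorates a handler: run it, then merge the CORS header in.
const withCORS = (next) => async (req) => {
  const response = await next(req);
  return response.setHeaders({ 'Access-Control-Allow-Origin': '*' });
};

// Usage: the handler itself stays oblivious to CORS.
const handler = withCORS(async (req) => makeResponse('Hi there', 200));
```

Because the wrapper only sees and returns values, stacking further decorators is just function composition.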
We would have a few methods on Response that just return a new immutable response:

- setHeaders merges headers
- setStatus sets the status code
- setBody sets the body reference

Questions
What happens to our other return types?
Undecided and open for discussion.
Currently we support returning:
- {} for JSON serialization and application/json
- String / Number for text/plain
- Stream for application/octet-stream
- Promise
- null for 204 (empty body)
- undefined for deferring and letting the user respond later
There's a strong case for taking them away and always enforcing a
Response
type:withCORS
example above, since the response can return so many things, we would need a method for wrapping and converting into a cohesive type.withCORS
would not be so simple to write!statusCode
from thrown objects)However, if we do want to main them, we could use a wrapper like
res.from()
that gives us aResponse
type from a primitive like"Hello World"
:Another counter-argument of ditching support for returning the primitives is that it would make testing also harder, by requiring wrapping again in your test suites.
What happens to the
throw
behavior?Deprecated. Right now you can
throw
anError
with astatusCode
, and we respond with it. We will be warning (once per process in prod, every time in dev) if you use it, and then remove it.The functionality can also easily be retained in userland.
However, if a handler throws, we will still continue to respond with
500
and a generic"Internal Server Error"
string, with the full stack in developer mode, and we willconsole.error
as well.If the user wishes to override error behavior, they can write their own function that performs
try/catch
What happens to send(res, 'Hi')?
?Deprecated. We will be warning (once per process in prod, every time in dev) if you use it, and then remove it.
The functionality can also easily be retained in userland.
Transition plans
We should do our best to facilitate access to codemods and utilities (maintained in userland) that make the transition painless and as automatic as possible.
Future Directions
If Node.js brings back support for the
sendfile
kernel syscall, we can consider introducing a file
return type that makes a reference to a node in the filesystem and can be streamed back in a very optimized way.

It has been suggested in the past, however, that
sendfile
could become an implementation detail of piping a filesystem stream, in which case this wouldn't be necessary.

Input from the community
Please feel free to weigh your opinions on this RFC. The more comments we gather, the better!