Request's Architecture #1094

Closed
seanstrom opened this issue Sep 24, 2014 · 54 comments

@seanstrom
Contributor

Hey, can someone give me a high-level breakdown of request's architecture?
I basically want to get some points of view on how this software is perceived and how it actually works.
@mikeal

@mmalecki
Member

Closing this as it's not a bug; feel free to continue discussing here.

@nylen
Member

nylen commented Sep 26, 2014

In a couple of words, I would describe request's architecture as "cobbled together". It's gone through significant revisions over time by people with lots of different coding styles and goals, including me.

@mikeal
Member

mikeal commented Sep 26, 2014

Basically, request is a giant duplex stream. Most features are implemented on some part of that duplex and manipulate state. So it's sort of a duplex-state-machine-as-a-stream.

@seanstrom
Contributor Author

So as some of you know I have been doing some refactoring work for Request.
However I think my best efforts would be spent reorganizing the code to meet an envisioned architecture.
Who feels like they can spend some time talking with me over how we want to properly refactor Request?

@mikeal @nylen @FredKSchott

@mikeal
Member

mikeal commented Oct 8, 2014

@seanstrom we should do this in text here on GitHub. I know it takes longer than a Skype call but this stuff needs to be documented publicly and discussed a bit more openly.

@FredKSchott
Contributor

At the top of my wish list would be some way to modularize some of the crazier features out of request itself. For (a strawman) example: each feature could be a plugin that could listen to certain request events (on('init'..., on('request'..., on('response') etc.) and use these to apply its feature. This could easily be done incrementally, and as a result we'd be left with a better understanding of the actual "meat" of the request logic, by having the crazy features stripped out of the code itself.

Just my 2 cents :)

@FredKSchott FredKSchott reopened this Oct 10, 2014
@FredKSchott
Contributor

(reopening because I'd like this to get a little more exposure)

@nylen
Member

nylen commented Oct 10, 2014

@FredKSchott 👍 this is a great idea. Having more of the flow exposed through events would make it easier to write libraries like request-debug that wrap request to do things.

However, it would also provide users a more powerful gun to shoot themselves in the foot. Maybe some of these event names should be prefixed with _ to indicate that they are intended for internal use?

@seanstrom
Contributor Author

Yeah we can namespace the events however we see fit.
My main goal is to see if we can expose the code as a state machine, and have all the necessary hooks for the features that are active during those different states.

To do that I feel like I need to read through Request enough to make educated choices when refactoring the code to that desired state. That's what I am starting on today.

@FredKSchott
Contributor

@seanstrom Awesome! I'd suggest finding the feature that's already the most modular (auth? form/formData? aws?) and starting with that as a prototype, to get a feel for how the core request will need to be updated.

Whatever work you do, we should merge it to a separate branch from master until we're confident this is the direction we want to go. There's a good chance this work will require a breaking, 3.0-type change and we don't want to bother our users with that unnecessarily.

@nylen agreed, although in my mind hooking into these would be a pretty advanced feature. It would be a huge win to allow advanced users access to these "hooks", so the tradeoff might be worth it. Either way, we should aim to build some degree of safety/protection into these "feature plugins" regardless, for our own collaborators / sanity.

@FredKSchott
Contributor

PS: That suggestion was just one potential way to refactor this. It might be worth the time to think of others and compare :)

@seanstrom
Contributor Author

I'm curious to see if a good amount of this refactoring can avoid breaking the current API.
I'm hoping at a high level we can get the code into a structure where we can easily transfer to 3.0 without a full re-write.

Then when we do move to 3.0 we can use most of the existing structure and drop any unwanted things.
That may not be possible, but looking over the code there seems to be just a bunch of method extraction that needs to happen; once that happens, that can be refactored into Service Objects.

The plugin idea fits in with the way we're communicating already with events, so I think we should definitely experiment with that. :)

@FredKSchott
Contributor

Agreed, no need to break things unless we need to

@seanstrom
Contributor Author

Okay so a few things I noticed while reading through Request today.

  1. We should make sure we use self everywhere, or convert to using it sparingly.
    At the moment we have the two styles, mentioned above, mixed together.
    I am for just making every function use self in reference to this.
  2. I think at some point we should break out some of the methods from the request js file to their own file. Some of these methods are beasts and I feel they would be better if we start refactoring them into their own Service Objects. The easiest step towards that, for my own maintainability at least, is to move them into their own file.

What do you guys think?

@mikeal
Member

mikeal commented Oct 10, 2014

We should make sure we use self everywhere, or convert to using it sparingly.
At the moment we have the two styles, mentioned above, mixed together.
I am for just making every function use self in reference to this.

+1

I think at some point we should break out some of the methods from the request js file to their own file. Some of these methods are beasts and I feel they would be better if we start refactoring them into their own Service Objects. The easiest step towards that, for my own maintainability at least, is to move them into their own file.

I'm against moving towards a more object oriented plugin model. Not that what we have now is great (it's certainly a huge problem), but I also don't want to go down the "plugin system" path, which is what I think of when I hear "service objects."

Here's a really good example: signing. We have support for a bunch of auth signing, some of which is implemented as giant methods. The actual signing usually lives in a separate module but there is a bunch of setup we have to do in order to prepare information to be sent to the signer.

The main reason signing lives in request rather than being another module you use with request is that signing MUST happen after all logic and events are processed that might manipulate the headers, otherwise you might break the signing. Even implementing this as an event won't work because someone else could listen to the event and manipulate the headers after the signer, so the internal logic needs a "lock" on the headers before they are sent where signing gets called.

One way to modularize this would be to create an "auth object" with a unique request API and just accept any of these objects in order to do auth. Another, and IMO far more robust, approach is to have a "sign" property which signing code sets to a function and the pre-header-send logic calls that function if available, ensuring it's a single call rather than an event that anyone can listen to.

For the most part, request is a giant state machine. If we create additional objects with ever increasing state we'll start to get into nasty areas where state in different objects affects each other and we're passing around a ton of extra references. Instead, I think the best approach is to flatten state into a single namespace/object, which is a Request instance. This flies in the face of traditional object oriented design patterns, but those are also the patterns that gave us Java Interfaces, so I'm not one for tradition.

The other nice thing about this flat namespace/object is that you can easily manipulate a single prototype or instance to add functionality without worrying about inheritance. There's actually no reason that the prototype method for request.oauth() needs to live in the request repo, it could live in oauth-sign which could expose a method that adds it to an object's prototype: require('oauth-sign').setRequest(require('request').Request). It's not a traditional pattern, it's basically a functional pattern for manipulating objects (scary!) but it is actually quite useful for modularization without the explosion in shared state you'll see if you start adding more objects to the mix.

@seanstrom
Contributor Author

Okay I have feedback to what you just said, but I'll be breaking down my response in chunks.

@seanstrom
Contributor Author

Okay, let's first cover this state machine as a process.

The very high level view of this process:

  • instantiating a request
  • initializing the request
  • starting the request
  • receiving the response
  • then completion.

Note that at almost any point it can be aborted.
Update: Aborting may only happen during the start of the request and the receiving of the response.

I believe that's the life of a Request instance.
The shared state here is just the Request instance, correct?

To me the Request instance is just a structure of state.
The prototype methods init and start are the processing of that state.
They process the Request instance and perform changes and may even add behavior.
I believe that these processes fit the roles of functions that take in a request and return a request.

They wouldn't have to have internal state, just return a value.
We would then have another role that orchestrates these processes in a manner that fits the
state machine we have, and the behavior we desire.

Perhaps Service Objects was the wrong term to use.
All I meant to say is that we have separations of concerns.

Why go this route?
In my opinion it's way more maintainable than what we have.
Right now the Request instance does everything, which to me is too much.

Note
Just in case we want the Request instance to have an interface that can easily access these processes,
we can just point prototype methods to the references of these processes.

@seanstrom
Contributor Author

My first response (the one above this one) just fully defines what I meant by "Service Objects".
As I mention in the response, perhaps using that term was wrong, but it is separate from a plugin architecture.

The discussion about moving to a plugin-based architecture is separate from "Service Objects".

@FredKSchott
Contributor

@mikeal that's a really good point. There are definitely some things (signing) that could never be moved out to rely solely on events / as plugins because they're just too ingrained in the request logic. But there are some things that act more as features on top of request, such as form, aws, and oauth. I still think those would do well being built on top of request instead of inside it.

The reason I liked removing these as a first step was because it would remove those concerns from the request instance, and make the "core request logic" (like signing) smaller and less complex. Then, tackling projects like the auth object or signing functions to refactor The State Machine would be more manageable tasks.

@FredKSchott
Contributor

The other nice thing about this flat namespace/object is that you can easily manipulate a single prototype or instance to add functionality without worrying about inheritance. There's actually no reason that the prototype method for request.oauth() needs to live in the request repo, it could live in oauth-sign which could expose a method that adds it to an object's prototype: require('oauth-sign').setRequest(require('request').Request). It's not a traditional pattern, it's basically a functional pattern for manipulating objects (scary!) but it is actually quite useful for modularization without the explosion in shared state you'll see if you start adding more objects to the mix.

@mikeal re-reading this, I think we both want the same end goal :) Although I'd argue that while monkey-patching the request object like that isn't strictly inheritance, it still comes with a similar set of problems/concerns (you need an understanding of the request implementation to properly modify it, which can change across versions without notice within the rules of semver). Very scary indeed 👻

@mikeal
Member

mikeal commented Oct 11, 2014

, such as form, aws, and oauth. I still think those would do well being built on top of request instead of inside it.

I think that when someone does require('request') they should get all of that functionality. The code that implements that functionality should be in a dependency as much as possible, and if at all possible be entirely outside of request.

As it stands, the hard work of stuff like form and oauth is already in dependencies; the code we have in request is just preparing and marshalling data to them. That could also be moved into a dependency, and possibly generalized. This would also open up the possibility of publishing something like request-base, which is request without any of the externalized API pieces like form and oauth added.

I'm pretty sure that once we break off those functions and are forced to create events and other entry points for them to be implemented against we'll end up with a much clearer architecture and we'll be a lot more transparent about state which helps a lot with debugging.

@mikeal
Member

mikeal commented Oct 11, 2014

@seanstrom your high level state overview is a little lacking. I'll try my best to expose what you've missed.

    1. request instantiation
    2. request initialization -- we finalize sending the request if not a PUT or POST and skip to 4.
    3. If PUT or POST:
    • 3(a). nextTick() or first write() call (we'll have to ditch nextTick() and move to first write() once we do streams2) -- a bunch of prep and guards roll into place.
    • 3(b). .end() is called after all the stream writes.
    4. response headers -- this is where things get tricky.
    • If this is an HTTP forward, GOTO 2; in fact this is the entire reason we have a step 2 :)
    • If this is gzip then create a decompressor and pipe it through to the outbound streams.
    • If this is not gzip then pipe to the outbound streams.
    5. First read -- a bunch of new guards go into place; we can no longer pipe to new destinations.
    6. Finally, we get an "end" after all the reads -- if there's a callback we call it with the concatenated payload (with a bunch of potential decoding there as well, like JSON).

In each of these steps there are all kinds of if statements for different features. Many of those features can actually impact each other, so ordering is actually very important. Many features have different behavior for the buffered callback case than they do for streaming. Many features need to get their state blown out on redirect, while others need to explicitly hold on to their state. Finally, you add proxying in to the mix, which changes the way you transport some of the data and requires you to hide/mask/remove some headers and add others.

@mikeal
Member

mikeal commented Oct 11, 2014

One more thing. If we do break these features out into modules which are more or less the glue between request and other existing dependencies, there are a few side effects:

  1. We'll want to move work to the request GitHub org instead of here so that contributors added to request can access all the required modules.
  2. It gets much easier to swap out some of the dependencies we have. For instance, I just found out about some good work that is going into a form-data replacement.

@seanstrom
Contributor Author

@mikeal thank you for going more in depth with that break down.

You explained the different phases of the request state machine,
as well as the importance of the ordering of the transitions between those phases.
However, I am left wondering whether you're against my thoughts on how to refactor the code.

I want to know if we both understand each other's opinions, and if you're opposed to it.
If you are I would like to hear why. -- possibly again
If you are not, maybe give an explanation for everyone else who may read this.

In my mind, the information you've provided about request doesn't conflict with the architecture I mentioned above. So more feedback is appreciated.

@mikeal
Member

mikeal commented Oct 11, 2014

I think I am opposed to it, but it's hard to tell, just because it's hard to understand all of these ideas without having a few examples implemented. Could you throw out some pseudo code where one of these service objects is used?

As an example, here's what I was talking about previously with the signing:

// request.js
var oauth = require('oauth')

function Request () {}
oauth.setup(Request)

Request.prototype.init = function () {
  var self = this
  // lots of code
  if (self.sign) self.sign()
  // send headers
}

// oauth.js
exports.setup = function (Request) {
  Request.prototype.oauth = function () {
    // the actual signing api and prep
    this.sign = function () { /* do the actual signing work */ }
  }
}

Note that in Request.prototype.oauth a bunch of state is going to be set on the instance based on the arguments passed and the sign() method will look at those in addition to any other state and headers that were added since.

@seanstrom
Contributor Author

Yeah, that's a cool way to decorate the request in order to have a signing method.
There are several places I could see a use for that.

As for my pseudo code.

// File RequestInitialization.js
function RequestInitialization(request) {
  // handles the complexities of initializing a request instance.
  // it also delegates out to other functions for the work
  // then it returns the request after initialization, or just mutates the request implicitly
}

// File Request.js
var RequestInitialization = require('./RequestInitialization')

function Request(opts) {
  // the actual constructor
  // this handles forming all the initial state and containing of the state.
}

function RequestStateMachine(request) {
  // handles the transitions of phases depending on a request instance's state.
  // This will coordinate the phases depending on the state, so at the beginning it will initialize
  RequestInitialization(request)
}

// To wrap things up we just have a function that creates a new request
// and state machine each time.
function kickoff(options) {
  var request = new Request(options)
  var machine = new RequestStateMachine(request)
  return machine
}

// We can also, for API reasons, point to RequestInitialization
Request.prototype.init = RequestInitialization

The reason I'd like this is because we would then treat the StateMachine, the Request state, and the processes that take place on that state as separate things, each with their own responsibility.
They're all highly related but each focused on a piece of the job.

Note that this pseudo is a high level picture, there's more that needs to be thought out than just this.

@mikeal
Member

mikeal commented Oct 11, 2014

I see what you're saying. The problem with this approach is that it doesn't actually allow us to move/keep the functionality specific to a feature separate from the rest of request.

All you've really done is move where we put a ton of monolithic logic about all the features to a different function, but that function is still just as inaccessible from another module as it was before.

For instance, the current init() method is huge; none of what you are suggesting actually makes it smaller, it just moves it. But if we broke down what the init function does into logical parts regarding their state and exposed hooks, we could move all the feature-specific logic into other modules which accessed those hooks. At that point it doesn't matter if those hooks are implemented in the prototype method or in another function like you're suggesting; it should get much smaller and easier to manage.

The hardest part of all of this will be figuring out and preserving the ordering that some feature combinations rely on.

I also don't see the point in having one object which is the state and one which is the API; it seems much simpler to flatten those into one object. We actually want to provide better insight into the state, and so attaching it to the object which carries the API is a much better approach.

@mikeal
Member

mikeal commented Oct 11, 2014

Another quick note: it might seem cleaner in theory to put the logic in one place and the state in another, but it's actually guaranteed to produce a lot more confusing code, not less.

@seanstrom
Contributor Author

and then I did it again :(

@seanstrom
Contributor Author

@nylen your example from node core is good, it definitely gives a face to what I've been describing.
Thank you :)

@FredKSchott
Contributor

There's a lot to respond to here, but I definitely see the draw of decorating Request itself.

If we do go the "decoration" route, I'd really like to use a pattern closer to node core:

Stream.Readable = require('_stream_readable');
Stream.Writable = require('_stream_writable');

ie:

Request.prototype.oauth = require('./lib/oauth');

That forces some level of modularization, and would prevent different decorators from interfering with each other. The oauth.setup(Request) pattern allows for too much power/weirdness, which especially scares me in an OSS project like this (where contributions are made without a full understanding of the entire module and how all its pieces interact), and chasing down a bug in feature A could require stepping through B, C, D, etc.

@mikeal was your point that this might not be possible / that some Request methods require modifications to the rest of the Request object?

@FredKSchott
Contributor

-1 To the idea of splitting a RequestStateMachine out of Request (for more-or-less all the reasons already stated above)

@mikeal
Member

mikeal commented Oct 13, 2014

@FredKSchott +1 except the oauth method needs to be a prototype method :)

@FredKSchott
Contributor

Ah duh! ninja-edited :)

This was referenced Oct 13, 2014
@seanstrom
Contributor Author

Here's an example of a module that wants to support another kind of auth.
Link #808 (comment)
We should think of how we would ideally have lib authors extend request.

@seanstrom
Contributor Author

Suggestion

If we have a module like request-base,
and we have areas of request where we want to support many types of auth,
maybe we do something like @mikeal said, where we decorate Request, but we do it this way:

var request = require('request-base')
var auth = require('request-auth')
request.use({auth: auth})

In this example we would tell request-base to take this auth object as the auth handler,
and we would just make sure that auth exports a sign function.

Update:
Then internally request would just use the auth dependency that was given to us, or even a default if none was given. That would mean Request knows less about what happens with the auth explicitly, but knows when to do auth.

Thoughts?

@mikeal
Member

mikeal commented Oct 14, 2014

This is fine for the abstract case, as in someone doesn't want all of request so they explicitly pull in request-base.

Also, I don't think it should be integrated as "auth." A more extensible pattern might be that you're actually registering a function per option. For instance, what if you want oauth2 and hawk support? You wouldn't be able to do just auth.

@seanstrom
Contributor Author

I agree we should have a way to integrate many types of auth.
Does the order in which the types of auth get run matter?
If so, we would need to teach a function to respect that order, right?

@Janpot
Contributor

Janpot commented Jul 21, 2015

Just stumbled across this thread and wanted to share something I worked on a few months ago. I've since abandoned the project; maybe I'll pick it up again some day, maybe not. It's basically a different approach to achieving a request-like library in a modular way. I started from a base module that implements the most basic stripped-down http request I could think of. Then I built up all other functionality in the form of middleware, just like libraries like express build up the server-side functionality of http through middleware functions.
The idea here is that a middleware function receives a request object which it can alter at will (headers etc...). It then calls a next function that calls all downstream middleware up until the basic request, after which the response object bubbles back up where it can be manipulated by the same middleware.
This way I was able to isolate all the functionality for e.g. redirection, gzip, cookies into their own modules.
https://github.com/Janpot/kwest
https://github.com/Janpot/kwest-redirect
https://github.com/Janpot/kwest-cookies
https://github.com/Janpot/kwest-gzip
Now, I'm not saying this is a stable library or that you should make request like this. I just wanted to share a way of looking at it that I didn't yet see in this thread, which is mimicking the way we build http servers in the way we build http clients.

Glancing over my readmes I notice they are not fully ready or consistent, so let me add an example here as well; this is how it worked:

var kwest = require('kwest');
var kwestRedirect = require('kwest-redirect');
var kwestBodyparser = require('kwest-bodyparser');

var request = kwest()
  .use(kwestRedirect({ maxRedirects: 10 }))
  .use(kwestBodyparser());

request.use(function rejectErrors(req, next) {
  // You can also alter req here, like setting headers, etc
  return next(req).then(function (res) {
    // This lib was promise based, but that's of course an implementation detail
    if (res.statusCode !== 200) {
      throw new Error('bad status');
    }
    return res;
  });

});

request({
  url: 'http://www.google.com'
}).then(function (res) {
  // do something with the response here
});

Again, not claiming this is better or saying you should do it this way; I just want to spark some inspiration by showing a different approach. I found that this approach had its advantages.

I use request a lot in my day to day work (and love it) and the reason why I started exploring this kwest approach is that I often found myself needing things like "a http request that respects robots.txt", "a http request that does caching a certain way" or "a http request that detects encoding and decodes text", etc... which IMO would be great candidates for a middleware/plugin like system. Now it's easy to build modules that wrap request to have it behave that way, but it becomes harder to compose these modules together in new configurations.

@simov
Member

simov commented Jul 21, 2015

Thanks for sharing this @Janpot, I'll take a look at it. Actually I had a similar idea (too lazy to search for the exact thread and comment); still, code in that direction is more valuable than just the idea 👍

@Freyert

Freyert commented Sep 9, 2015

+1 for @Janpot's idea all the way! Breaking apart logic into modules would be tremendous. It's definitely a reason express is so popular.

I might point out though, that most of what's described by @Janpot can be achieved with a function like _.compose, R.compose, or R.composeP.

I submit my vote for the decomposition though, as I have definitely struggled with redirects and cookies. From my limited debugging experience I've gleaned that it's very difficult to trace the route from my request to how redirect requests are formed.

Just to add another potential perspective for how one could decompose request, I've been playing around with a small redirect function whose interface looks something like this:

let j = request.jar();
let r = request.defaults({
  jar: j
});
let inputs = {
  method: 'GET',
  url: 'https://www.google.com'
};
redirect(iteratee, r, inputs, cb);

Where iteratee is a function that will tell a redirect to stop or continue (it probably also needs to be able to modify the redirect request), and cb is the function that is called when redirects are finished or an error occurs. I stuck with the ol' node callback style to support more platforms. It'd be nice if I could make this a more pure function to support things like R.compose, but that might be asking too much.

@seanstrom
Contributor Author

In general I would like to see things be more composable, I feel like the internal structure may want to start implementing the state machine as a series of composed functions. Then it should be pretty easy to add any official third party hooks for composition. @Freyert @simov Thoughts?

@Freyert

Freyert commented Sep 9, 2015

I'm in the party of less is more. I think not implementing a middleware stack in request will give it an edge in the maintenance department. I also think, especially with koa being around these days, that people will have opinions on how the stack should work. Should I manage state with Promises, callbacks, or generators? State sucks and I would prefer to work with something that stays away from it and lets me make that decision, especially since JS's feature set is so volatile these days.

Ideally I would be able to do something like below:

//requestAsync is a customized promise I wrote myself.
R.composeP(parse, toJSON, requestAsync(R.__));

//or

R.composeP(parse, toJson, redirect, requestAsync(R.__));

But also be able to use the raw request object for piping responses. Conceivably someone will create a generator version of R.compose, and anyone who can take advantage of that will, and request won't have to do any maintenance or bump versions. Therefore, less is more.

Another point to make is that express is more about describing what is. A server can be a fairly static thing and you tend to build up loads of middleware on it; therefore middleware is a very useful abstraction. I see request differently: every request can be, and most of the time is, totally different from the previous one. Check out this use case:

var get = R.composeP(toJson, redirect, requestAsync);
var getDo = R.composeP(do, get);
var getDoSomethingElse = R.compose(doSomethingElse, get);

Three totally different ways of handling your request. Pretty nice huh?

@seanstrom
Contributor Author

I see what you're saying about not having middleware. Most of the suggestions around breaking Request up end up showing some kind of middleware system. That said, I would prefer a system where all the features that could be built on top of request are modeled as functions that can compose. Which is what I believe you're describing, right?

@simov
Member

simov commented Sep 10, 2015

Pseudo code is fine, but reality is different; that's why we have the code base as it is ATM. While I'm not saying that the current code base is structured well enough, there are certain design patterns that simply won't work for building an HTTP client. And you can't know that for sure before you actually try to build one with a good amount of features in it.

Also there is a huge difference between refactoring and a rewrite. IMO refactoring isn't possible for the current code base. There are too many constraints to look out for, the biggest one being the huge user base relying on almost every hack/fix/decision that you can find in the code base.

Mikeal started such a rewrite two years ago, and got nowhere. Also it's interesting to see the http-duplex-client which is Streams2 compatible.

I meditated a bit on the above commit, and in the end I decided that I have a different vision about how the HTTP client should be built. So about a month ago I started to put some code aside trying to assemble a working HTTP client. I'm using the HTTP duplex client mentioned above, slightly modified; it also turned out that some parts of that implementation don't work as expected, namely the _write method always returns true even though the underlying http.ClientRequest write returns false, for example.

Anyway, what I'm trying to say is that we should keep the current version 2.x working as it is, and that includes adding new features and fixes. Probably migrate to Streams2 at best, which will require a rewrite of a few external modules as well.

Anyone is free to start their own rewrite from scratch. Of course it will be a huge effort to implement all of request's features, but at least you can see them.

@Freyert

Freyert commented Sep 10, 2015

Yeah, doing a total rewrite would be a pain, and refactoring also a pain. As it stands now, the ideas I'm throwing out are probably better off in user land (which you also see with express). Rewriting basic redirect logic is a fairly simple task and it's pretty easy to do what I'm talking about on your own.

It makes me curious, though: what was the original intent of this project? What problem is it solving? If I like breaking off different pieces of logic like I am, should I just call it quits and start using the plain old http module, or am I getting something from request? (Not to be contentious; I really do love the work that's been put into request, but I'm just curious to know if I'm not using it to its full potential.)

@simov
Member

simov commented Sep 10, 2015

Yep, the API you are proposing should be implemented as a request wrapper. A good example is Purest, which only configures request through its API; it doesn't handle anything HTTP-specific directly.

As for request, its purpose is to implement some of the common features that you expect an HTTP client to have. The core HTTP module mainly parses the raw HTTP messages and handles the TCP sockets for you, through its Agent class. Well, that's probably overly simplified, but it's definitely not a full-featured HTTP client, like a browser for example.

Some of the features currently in core were first implemented in request, like the forever-agent; also, at that time there were no Streams2 and no common Duplex stream interface.

Here is what I'm currently outlining as requirements for my own experiment:

  • the client can be used seamlessly, like the bare core HTTP module
  • each feature should be enabled by option; there are no defaults except the ones you get from core
  • each option enables a default implementation; you can plug in your own via function
  • each default implementation is a separate module set as a dependency in the consumer project

I think that's it for now. Oh, and btw, the redirect logic with all of the request features is not the easiest thing to implement.

@simov
Member

simov commented Dec 25, 2015

#1982

@stale

stale bot commented Nov 23, 2018

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale label Nov 23, 2018
@stale stale bot closed this as completed Nov 30, 2018