Replace callback with filehandles object. #14

Open
duanemaxwell opened this Issue Aug 7, 2013 · 4 comments

@duanemaxwell

I've been giving some thought to how one could support piping, etc. It seems that you can lay the groundwork by replacing the current "callback" mechanism with something more similar to file handles.

So, for instance, the current "callback" function parameter to the command would instead be a "filehandles" object with two properties, "stdin" and "stdout". The "stdin" object would have two function properties, "onRead" and "onEnd", which would be set by the command to capture input, and the "stdout" property would have two functions, "write" and "end". This would allow the chaining of commands. You could probably add a "stderr" property similar to the "stdout" property for more UNIX-like behavior. At that point, the piping problem simply becomes one of parsing the command line and wiring up the handlers - you'd probably want to start with the rightmost one in order to make sure the input handlers are set up before data is sent to them, or alternatively have an explicit initialization call to each command.
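A minimal sketch of what that contract could look like, using the handler names suggested here (stdin.onRead/onEnd, stdout.write/end). None of this is current josh.js API; the commands and the pipe() wiring helper are hypothetical:

```javascript
// Commands receive a filehandles object instead of a callback.
// They register stdin handlers and drive stdout themselves.
function upcase(fh) {
  fh.stdin.onRead = (data) => fh.stdout.write(data.toUpperCase());
  fh.stdin.onEnd = () => fh.stdout.end();
}

function grep(pattern) {
  return (fh) => {
    fh.stdin.onRead = (line) => { if (line.includes(pattern)) fh.stdout.write(line); };
    fh.stdin.onEnd = () => fh.stdout.end();
  };
}

// Wire a chain right-to-left so every stdin handler exists before data flows.
// Returns the writable end that feeds the leftmost command.
function pipe(commands, sink) {
  let downstream = sink; // sink: {write, end} consuming the final output
  for (let i = commands.length - 1; i >= 0; i--) {
    const out = downstream; // freeze this command's downstream target
    const fh = {
      stdin: { onRead: () => {}, onEnd: () => {} },
      stdout: { write: (d) => out.write(d), end: () => out.end() },
    };
    commands[i](fh); // let the command install its stdin handlers
    downstream = { write: (d) => fh.stdin.onRead(d), end: () => fh.stdin.onEnd() };
  }
  return downstream;
}

// "grep b | upcase" fed with two lines of input
const results = [];
let done = false;
const input = pipe([grep("b"), upcase], {
  write: (d) => results.push(d),
  end: () => { done = true; },
});
input.write("abc");
input.write("xyz");
input.end();
// results is ["ABC"], done is true
```

The right-to-left construction is exactly the ordering concern above: by the time the leftmost command's writable end is returned, every downstream onRead is already in place.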

@sdether

sdether (Owner) commented Aug 7, 2013
That is a really good idea. Replicating the standard I/O pipes both simplifies the model and opens up a lot of functionality, which is likely why a real shell works that way as well.

We'd also need to treat commands more like executables, i.e. they need to get an exit callback, independent of them closing their I/O (or should closing the provided stdout be the signal that the command has completed?). Some way to register the completion of the last command in a piped chain is needed for the shell to know when to take I/O back for the next prompt.


@duanemaxwell

duanemaxwell commented Aug 7, 2013

I was originally thinking that "end" on stdout would be sufficient to end the command, but that won't work for a command that only consumes "stdin", that also or only uses "stderr", or that neither consumes input nor produces output on those file handles. It seems an explicit "exit" is required - it should probably also call "end" on any output filehandles for which "end" was not called. You would return to the prompt when all of the exit handlers have been called (or maybe just the last one in the chain?).
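One way to sketch that exit contract (hypothetical names throughout, not josh.js API): exit() implicitly ends any output handle the command left open, and the shell restores the prompt only once every command in the chain has exited:

```javascript
// An output handle that remembers whether end() was called.
function outHandle() {
  return {
    ended: false,
    buffer: [],
    write(d) { this.buffer.push(d); },
    end() { this.ended = true; },
  };
}

// exit() is idempotent, implicitly ends open output handles, then
// notifies the shell that this command is done.
function makeExit(outputs, onExited) {
  let exited = false;
  return function exit() {
    if (exited) return;
    exited = true;
    outputs.forEach((h) => { if (!h.ended) h.end(); });
    onExited();
  };
}

// The shell counts exits; the prompt comes back after the last one.
let promptRestored = false;
let remaining = 2;
const onExited = () => { if (--remaining === 0) promptRestored = true; };

const stdout = outHandle();
const stderr = outHandle();
const exitA = makeExit([stdout, stderr], onExited);
const exitB = makeExit([], onExited);

exitA(); // never called stdout.end() explicitly - exit ends it
exitA(); // idempotent: only counted once
exitB(); // last exit in the chain restores the prompt
// stdout.ended, stderr.ended, and promptRestored are all true
```

Counting every exit (rather than watching only the last command) also covers the case where an upstream command is still running after the tail of the pipe finishes.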

I think you could also implement I/O redirection to "files" by supporting the notion of read/write nodes in the fake filesystem tree.
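For instance (purely illustrative, not the existing pathhandler API), a writable node in the fake filesystem could expose the same write/end surface as stdout, so "cmd > file" becomes just another wiring target:

```javascript
// An in-memory file node that can stand in for a command's stdout.
function fileNode() {
  return {
    contents: "",
    write(d) { this.contents += d; },
    end() {}, // nothing to flush for an in-memory node
    read() { return this.contents; },
  };
}

// Redirecting output becomes: hand the node to the command as its stdout.
const node = fileNode();
node.write("hello ");
node.write("world");
node.end();
// node.read() returns "hello world"
```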

What's cool about this is that implementing a lot of standard UNIX-style commands as mixins becomes interesting - you end up with something much like "busybox".


@sdether

sdether (Owner) commented Aug 7, 2013

Another consideration for pipes is that we'll need a hook for pre-processing CLI input. Right now the shell finds the first command and immediately calls its handler, but we really need to be able to split the line into multiple commands to support pipes and redirection. And to avoid hardcoding that into the shell, a pre-processor hook would let pipes be done as a mixin.
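A sketch of such a hook (hypothetical names, not existing josh.js API): the shell runs the raw line through registered pre-processors before command lookup, and the pipes mixin contributes one that splits on "|":

```javascript
// Pre-processors turn one raw line into a list of command strings.
const preprocessors = [];
function preprocess(line) {
  return preprocessors.reduce((segments, fn) => fn(segments), [line]);
}

// The pipes mixin registers a naive splitter (ignores quoting for brevity).
preprocessors.push((segments) =>
  segments.flatMap((s) => s.split("|").map((c) => c.trim()))
);

const commands = preprocess("cat foo.txt | grep bar | wc -l");
// commands is ["cat foo.txt", "grep bar", "wc -l"]
```

Because the hook takes and returns a list of segments, a redirection mixin could register a second pre-processor without the shell knowing about either.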


@duanemaxwell

duanemaxwell commented Aug 7, 2013

A mixin makes sense - bash-style command line parsing isn't very complicated (at least for the basics), but it may be more than people want out of the box.
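For a sense of scale, "the basics" amount to roughly a quoted-token splitter like this sketch (no escapes, expansion, or redirection handling):

```javascript
// Split a command line on whitespace while honoring single and double quotes.
function tokenize(line) {
  const tokens = [];
  const re = /'([^']*)'|"([^"]*)"|(\S+)/g;
  let m;
  while ((m = re.exec(line)) !== null) {
    // Pick whichever alternative matched: a quoted body or a bare word.
    tokens.push(m[1] !== undefined ? m[1] : m[2] !== undefined ? m[2] : m[3]);
  }
  return tokens;
}

const tokens = tokenize('grep "foo bar" file.txt');
// tokens is ["grep", "foo bar", "file.txt"]
```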

