CLI support for execution via existing process #49

Closed
crisptrutski opened this issue Sep 7, 2015 · 4 comments

@crisptrutski
Member

Seems like the easiest solution to start-up latency is to support sending commands to a "server daemon" -- which could just be the most recently launched REPL. Perhaps flags like `--existing` or `--port <number>` could be added to any of the CLI commands.

It would also be nice if these arguments could be picked up automatically from a dotfile or environment variables.

Requires #43 I guess
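
For illustration, here is a minimal sketch of how that fallback order could work (explicit `--port` flag, then environment variable, then dotfile). The `ALDA_PORT` variable and `~/.alda-port` file names are hypothetical placeholders, not anything Alda actually reads:

```clojure
(ns example.resolve-port
  (:require [clojure.java.io :as io]
            [clojure.string :as str]))

(defn resolve-port
  "Returns the configured port as an Integer, or nil if none is set.
   `cli-port` is the --port value as a string, if one was given.
   Fallback order: --port, then the (hypothetical) ALDA_PORT environment
   variable, then a (hypothetical) ~/.alda-port dotfile."
  [cli-port]
  (or (some-> cli-port Integer/parseInt)
      (some-> (System/getenv "ALDA_PORT") Integer/parseInt)
      (let [dotfile (io/file (System/getProperty "user.home") ".alda-port")]
        (when (.exists dotfile)
          (-> dotfile slurp str/trim Integer/parseInt)))))

;; (resolve-port "1234") ;=> 1234
;; (resolve-port nil)    ;=> falls back to ALDA_PORT, then ~/.alda-port
```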

crisptrutski changed the title from "CLI support for existing process" to "CLI support for execution via existing process" on Sep 7, 2015
@daveyarwood
Member

I've had this thought -- an "alda daemon" could definitely help with the start-up time. If you know you're about to do some hacking on a score file and you're going to want to play the file over and over, making changes in between, you could fire up the daemon. I think it's a great idea -- I would use it, for sure.

Specifying a port is a killer idea! It would allow you to have multiple score environments with different configurations.

@daveyarwood
Member

The first step toward this feature will be the server process. I think this can be implemented as a separate Alda CLI command, like `alda server --port 1234`, which will start a web server that listens on that port and executes snippets of Alda code within the context of a working score, which it initializes when it starts.
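
For illustration, a rough sketch of that shape (not the actual implementation): a small Ring handler that keeps the working score in an atom and appends whatever Alda code the client POSTs to it. `evaluate-alda-code!` is a hypothetical stand-in for whatever parser/REPL hook does the real work, and `ring-jetty-adapter` is assumed as a dependency:

```clojure
(ns example.alda-server
  (:require [ring.adapter.jetty :refer [run-jetty]]))

;; The working score the server builds up over its lifetime.
(def working-score (atom ""))

(defn evaluate-alda-code!
  "Hypothetical stand-in: the real thing would parse the snippet and add it
   to the working score via Alda's parser/REPL machinery."
  [code]
  (swap! working-score str code "\n"))

(defn handler
  "Appends whatever Alda code the client POSTs to the working score."
  [request]
  (evaluate-alda-code! (slurp (:body request)))
  {:status  200
   :headers {"Content-Type" "text/plain"}
   :body    "OK"})

(defn start-server!
  "What `alda server --port 1234` might boil down to."
  [port]
  (run-jetty handler {:port port :join? false}))
```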

We should see if any parts of the existing `alda.repl` code can be reused for this -- in fact, I'm thinking of the server process as a sort of "non-interactive" Alda REPL. Perhaps we could add optional arguments to all of the `alda.repl.commands` that require confirmation, etc., and when they aren't provided, have our server process respond by asking the client to try again with additional arguments like `confirm=true`. (Whereas the Alda REPL would obtain that information by prompting the user interactively in a REPL session.)
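
A sketch of that confirmation idea, continuing the hypothetical server above -- the `/load` endpoint and `confirm` parameter are illustrative only, and `ring.middleware.params` is assumed as a dependency:

```clojure
(ns example.alda-server-confirm
  (:require [ring.middleware.params :refer [wrap-params]]))

(def working-score (atom ""))

(defn load-handler
  "Replaces the working score with the POSTed code, but only if the client
   sent confirm=true; otherwise asks the client to try again with it."
  [request]
  (if (= "true" (get-in request [:params "confirm"]))
    (do (reset! working-score (slurp (:body request)))
        {:status 200 :headers {"Content-Type" "text/plain"} :body "loaded"})
    {:status  409
     :headers {"Content-Type" "text/plain"}
     :body    "This will replace the score in memory; retry with confirm=true"}))

;; wrap-params parses the query string, so a POST to /load?confirm=true
;; shows up as {"confirm" "true"} in :params.
(def app (wrap-params load-handler))
```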

With the above in place, we could do what @crisptrutski proposed and add an optional `--port` argument to the `alda play` task, which could send a message to a running Alda server process to `:load` the file (dealing, in some way, with the need to confirm whether it's OK to scrap whatever score the server process may have had in memory), and then `:play` it.
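
On the client side, `alda play --port 1234 my-score.alda` could reduce to something like the following sketch -- the `/load` and `/play` endpoints are the same hypothetical ones as above, and `clj-http` is assumed as a dependency:

```clojure
(ns example.alda-client
  (:require [clj-http.client :as http]))

(defn play-via-server!
  "Loads `filename` into the Alda server listening on `port`, then plays it."
  [port filename]
  (let [base (str "http://localhost:" port)]
    ;; confirm=true acknowledges that the server's current score gets scrapped
    (http/post (str base "/load")
               {:query-params {"confirm" "true"}
                :body         (slurp filename)})
    (http/get (str base "/play"))))

;; (play-via-server! 1234 "my-score.alda")
```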

These are just ideas off the top of my head -- I'm totally open to alternative approaches/ideas.

@daveyarwood
Member

OK! I have a passable server/client implementation up on the `server-client-java` branch, which also includes a nice build pipeline that outputs Unix and Windows binaries with a single command. After an initial stab at writing the client in Go, it ended up making more sense to write it in Java.

The major benefit is that the entire Java/Clojure project can be packaged as an uberjar (and thus exported to a single binary). The client parts are fast because they don't require the project's Clojure namespaces, which means no Clojure startup time issues. The Clojure startup time is only incurred when starting a server, and even then, you at least get an immediate "Starting server..." message, so it still feels fast overall. Needless to say, I'm pretty excited about this. 😄

At this point, I'm going to move this issue to "ready" status, and plan to include the server/client in an upcoming 1.0.0-rc1 release.

@crisptrutski
Member Author

Sounds like you nailed it! 👍 🎄 😋
