
Working with the Configuration Object #40

Closed
MVSICA-FICTA opened this issue Apr 24, 2018 · 5 comments

@MVSICA-FICTA

MVSICA-FICTA commented Apr 24, 2018

As you know, the world of possible timbres is very large and many configurations are possible. To manage this efficiently, it would be great if these timbre configurations could be stored as JSON on a backend, then fetched and injected into a virtual-audio-graph at runtime. It looks like the virtualAudioGraph.update function is already designed for such a scenario.
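Something along these lines is what I have in mind (just a sketch; the endpoint and the stored node-spec format are made up, only virtualAudioGraph.update is the real API):

```js
// Sketch: fetch a stored timbre configuration and inject it at runtime.
// The /timbres/strings.json endpoint and the stored node-spec format are
// hypothetical; only virtualAudioGraph.update comes from virtual-audio-graph.
const loadTimbre = async (virtualAudioGraph, url) => {
  const response = await fetch(url)
  const timbreConfig = await response.json() // plain JSON graph description
  virtualAudioGraph.update(timbreConfig)     // apply it to the audio graph
}

loadTimbre(virtualAudioGraph, '/timbres/strings.json')
```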

For storing and querying timbre configurations it would be useful to add optional descriptive tags to a configuration. This way a configuration could be named, have a suitable pitch range specified, note the author, and include other info tags that help fetch, sort and apply timbre configuration objects. This would be extremely helpful both for standard orchestral timbres and for newly invented ones.
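A stored configuration might be wrapped in something like this (purely illustrative; the meta fields and the graph key are names I'm inventing here, not part of virtual-audio-graph):

```js
// Hypothetical wrapper for a stored timbre configuration with descriptive tags.
// None of these field names come from the virtual-audio-graph API; only the
// graph value would ever be passed to virtualAudioGraph.update.
const timbreDocument = {
  meta: {
    name: 'warm-strings',
    author: 'MVSICA-FICTA',
    pitchRange: { low: 'C2', high: 'C6' },
    tags: ['strings', 'orchestral', 'sustained'],
  },
  graph: {
    // serializable node configuration goes here
  },
}
```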

With this in place it becomes much easier to fetch and load any timbres that a newly activated score document references. For instance, an app could load a score and then resolve the timbre configurations it uses, so they can be queried, loaded and updated in the AudioContext.

This would make timbre management so much better. What do you think?

@benji6
Owner

benji6 commented Apr 24, 2018

Hey, this sounds very interesting, but I'm not sure I totally understand it right now. When you say "timbre configuration" do you mean some sort of object that represents a graph of audio nodes? And are you suggesting being able to specify that in a serializable way so it can be stored as JSON? I'm not sure how that would work if you wanted the graph to be customizable, like if you wanted to specify pitch or something like that. Maybe some code examples might help me understand better!

@MVSICA-FICTA
Author

Right, "timbre configuration" would be the same configuration used in virtual-audio-graph and yes, I'm hoping there is a way to serialize it as JSON and then deserialize back to JS. I'll take a closer look at the docs example and the API to see how this might work and get back to you soon...

@MVSICA-FICTA
Author

MVSICA-FICTA commented Apr 27, 2018

Looking closer at the API I now have a clearer idea of what virtual-audio-graph (VAG) is and how it might fit with what I am moving towards. I think it is important to have the right abstractions, so that is what I will try to express below.

The main thing I'm noticing is that VAG hardwires all the node values in a configuration. This is great for defining playable graphs, but not so useful for more elaborate playback scenarios like musical score structures. The kind of VAG configuration I would find useful would only define the nodes and their connections, with default values or no values for the nodes.
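Roughly the distinction I mean, written in the factory style from the current virtual-audio-graph docs (the exact node-spec format will of course depend on the library version):

```js
import { gain, oscillator } from 'virtual-audio-graph'

// A typical configuration hardwires concrete values:
const hardwired = {
  0: gain('output', { gain: 0.3 }),
  1: oscillator(0, { frequency: 440, type: 'square' }),
}

// The kind of configuration I'm after only fixes the topology, leaving
// neutral placeholders for anything the score supplies later:
const neutral = {
  0: gain('output', { gain: 0 }),        // volume comes from the score
  1: oscillator(0, { type: 'square' }),  // pitch comes from the score
}
```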

Here is the scenario and abstractions I require (a rough code sketch follows the list):

  1. Have configurations defined as JSON with default or neutral values for all nodes defined in them.

  2. Fetch and load a JSON score structure and create a VAG for each track (or staff voice) defined in the score. The score contains references to specific JSON VAG configurations, so these would be fetched and loaded into the respective track VAGs that were created.

  3. The initial configurations referenced at the start of each track would be loaded into their respective VAGs. Configuration references can be specified at any point in the score track timelines, so these would be fetched and applied to the respective VAG at runtime (or alternatively they would all be resolved when a score loads or when a new reference is made via the score editing tools).

  4. As the score is played the data values in the score would be scheduled (this process is something I'm doing in a unique way so it would not have to be part of the VAG API). The note pitch, duration, volume and the various params specified in the score tracks would then be injected/updated into the respective VAGs for the score's tracks as playback advances.

  5. The VAGs and their loaded node configurations would receive incremental scheduled values from the score, in contrast to having values hardwired into the configurations. The score would be the source of all editable and softwired values.
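Something like this is what I imagine (just a sketch under those assumptions; fetchTimbre, createGraphForTrack and mergeNoteParams are hypothetical helpers I'm naming for illustration, only update is the real virtual-audio-graph entry point):

```js
// Rough sketch of steps 1-5. fetchTimbre, createGraphForTrack and
// mergeNoteParams are hypothetical helpers; only update() comes from
// the virtual-audio-graph API.
const setUpTracks = async score => {
  const tracks = await Promise.all(score.tracks.map(async track => {
    const timbre = await fetchTimbre(track.timbreRef)    // 2. resolve reference
    const virtualAudioGraph = createGraphForTrack(track) // one VAG per track/voice
    virtualAudioGraph.update(timbre)                     // 3. load neutral defaults
    return { virtualAudioGraph, timbre }
  }))

  // 4./5. as playback advances, scheduled score values are merged into the
  // neutral configuration and pushed into the respective VAG
  return (trackIndex, noteEvent) => {
    const { virtualAudioGraph, timbre } = tracks[trackIndex]
    virtualAudioGraph.update(mergeNoteParams(timbre, noteEvent))
  }
}
```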

On another point of interest, the site README says that VAG was inspired by the React VDOM. So I'll just add that I recently discovered lit-html, which is a functional way to render ES6 template literals. It has a whole new way of updating the DOM that does not use a VDOM approach; it might also be inspiring to you!

https://github.com/polymer/lit-html

@benji6
Owner

benji6 commented Apr 29, 2018

Hey, thanks for your comment and I'm sorry it's taken a while to reply - life has been a bit hectic recently!

So I think maybe for your specific use-case virtual-audio-graph might not be the best tool, but you could definitely use it as a base for a library that allows you to use the Web Audio API in that sort of way. It should be possible to map JSON scores to inputs that virtual-audio-graph would understand using virtual-audio-graph's custom virtual audio nodes.
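Very roughly something like this (a minimal sketch using the createNode/oscillator/gain factories; the exact API may differ depending on the version you're on, and the score-event mapping here is just made up):

```js
import createVirtualAudioGraph, { createNode, gain, oscillator } from 'virtual-audio-graph'

// A custom virtual audio node encapsulating one "timbre": the score only
// supplies frequency and volume, the node decides how to realise them.
const simpleVoice = createNode(({ frequency, volume }) => ({
  0: gain('output', { gain: volume }),
  1: oscillator(0, { frequency, type: 'sawtooth' }),
}))

const audioContext = new AudioContext()
const virtualAudioGraph = createVirtualAudioGraph({
  audioContext,
  output: audioContext.destination,
})

// A JSON score event mapped to an input the graph understands:
virtualAudioGraph.update({
  voice1: simpleVoice('output', { frequency: 440, volume: 0.4 }),
})
```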

And cheers for sharing lit-html - it reminds me of https://github.com/choojs/nanohtml but more advanced!

@benji6
Owner

benji6 commented May 10, 2018

Going to close this for now, feel free to reopen if you want to discuss further!

@benji6 benji6 closed this as completed May 10, 2018