Feature Request: Allow PannerNodes to have Spatial Volume #2388
Comments
Don't know if this solves the issue, but here is one way I would do the road example. Have a source for the road noise. As you move parallel to the road, move the source the same way. Then the volume stays the same as you move. When you turn to face the road, the panner node should automatically take care of having the sound in both ears.

The pub example could be the same, but it seems that you want the pub sound to be fixed. For 2.1, as you move, the sound is in the left ear but gets quieter as you move away; you'll also start to get some sound in the other ear. Not sure why you want the sound to be constant, since you are moving away from the pub. For 2.2, if you put the pub source in the middle of the pub, I think you will get sound in front and also to the right. For 2.3, I don't know what to expect. [9,9] is close to one corner; if it were a real pub with equally distributed people, most of the sound would come from the left and behind me (ignoring reflections and such; you can model the room using a convolver node to get the room response).

But fundamentally, audio sources are point sources. If you want some kind of room effect to suggest a larger space, you need to use a convolver node for the room response. This probably doesn't quite work out from a physics point of view, because as you move around the room the response changes. Not sure how to solve that problem except to do perhaps some kind of ambisonics, as done in Omnitone.
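A minimal sketch of this trick, assuming the road runs along the z axis at x = 0; `roadBuffer` (a decoded, looping traffic recording) and the listener-movement callback are placeholders, not part of the original thread:

```js
const ctx = new AudioContext();

const panner = new PannerNode(ctx, {
  panningModel: 'HRTF',
  distanceModel: 'inverse',
});

// roadBuffer is a placeholder for a decoded AudioBuffer of traffic noise.
const source = new AudioBufferSourceNode(ctx, { buffer: roadBuffer, loop: true });
source.connect(panner).connect(ctx.destination);
source.start();

// Call whenever the listener moves: the source stays on the road (x = 0)
// but slides along it, so the listener-to-source distance never changes.
function onListenerMoved(x, y, z) {
  panner.positionX.value = 0;
  panner.positionY.value = 0;
  panner.positionZ.value = z;
}
```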
Also, some simple diagrams would certainly help.
Teleconf: Are you aware of any other system that has this kind of spatial source? That would be helpful.
One way to get some volume is to place a number of point sources in your bar. Then as you move around, you'll get the effect you want (well, ignoring the effects of the walls and such). If you want the wall attenuation, I guess you could place the sources on the walls with appropriate attenuation. Seems like this is now a big acoustical physics simulation.
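A rough sketch of this many-point-sources idea, assuming a 10 x 10 bar with one corner at the origin; `barBuffer` (a looping crowd recording) is a placeholder:

```js
const ctx = new AudioContext();

// barBuffer is a placeholder for a decoded AudioBuffer of crowd noise.
const crowd = new AudioBufferSourceNode(ctx, { buffer: barBuffer, loop: true });

// All panners carry the same signal, so scale the shared feed down by the
// source count to keep the summed level sane.
const fanout = new GainNode(ctx, { gain: 1 / 16 });
crowd.connect(fanout);

// A 4x4 grid of panners filling the 10x10 room.
for (let ix = 0; ix < 4; ix++) {
  for (let iz = 0; iz < 4; iz++) {
    const p = new PannerNode(ctx, {
      panningModel: 'equalpower', // cheap enough to use many of these
      positionX: 1.25 + ix * 2.5,
      positionY: 0,
      positionZ: 1.25 + iz * 2.5,
    });
    fanout.connect(p).connect(ctx.destination);
  }
}
crowd.start();
```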
Here is a super basic interactive example of the bug on a road:
Yes, we can place sounds and move them around the space, and I guess that would work for a bar, but for the road and the river, especially if the river bends, I'm not sure how to do that.
I posted this question on audiogames.net, and according to the sound designers there, creating volumetric sound is rather difficult in general. I would love it if there were something that supported creating volumetric sounds in Web Audio, even if it were a more fake approach.
No, the codepen has one single point that never moves and has no concept of bending or curves. It's like with drawing: points, polygons, and lines are different things. I would never use a point to draw a road; it doesn't work. A road is either a line or a polygon.
It sounds as if the most realistic option is to recreate volumetric sounds from scratch (having waves crashing at different times along a beach, having different cars travel along a highway)... This could become millions of sound sources in a decently sized VR environment. How many PannerNodes can the Web Audio API handle at once?
In principle, there's no limit, but, as always, the number of nodes is limited by how fast your computer is and how much memory you have. PannerNodes using HRTF are relatively expensive; equalpower is much cheaper. Don't know about millions being required, but maybe a hundred or a thousand would be good enough. Your ears don't have nearly as much resolving power as your eyes. (I'm speculating; I've never tried anything like this.)
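For reference, the panning model mentioned above is a per-node choice, so a large crowd of background sources can use the cheap model while a few important ones use HRTF. A tiny sketch (`ctx` is assumed to be an existing AudioContext):

```js
// Expensive per-node convolution with head-related transfer functions.
const rich = new PannerNode(ctx, { panningModel: 'HRTF' });

// Much cheaper stereo panning; use this for large arrays of sources.
const cheap = new PannerNode(ctx, { panningModel: 'equalpower' });
```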
Take a look at the paper "Interpolation of Head-Related Transfer Functions", which directly addresses this question. Also "Binaural Source Localization and Spatial Audio Reproduction for Telepresence Applications". Both of these papers deal with interpolation methods between widely spaced HRTFs to avoid synthesizing a crazy number of sources. I am not sure how practical it is to apply these methods directly using Web Audio, but the general idea seems promising.
Important clarification of the subject line: this issue is referring to spatial volume, not gain. Immediate impression: issue 122 describes a 3D spatial use case that is feasible in X3D4. Suggestions: pose the 3D audio problem in a way that can be demonstrated in multiple ways. Establishing this basic correlation can help everyone get aligned with a common understanding of the problem and different ways of checking results. Achieving an example correlation would then let us examine other effects with a shared comprehension of how they affect results.

Illustrating the already-rich capabilities of the Web Audio API seems to be a necessary prerequisite before determining whether any other functionality is needed.
@brutzman I changed the title, and added a prototype that can be used to follow along with example 1. I also expanded example 1 to fit with your points.
Thanks for the links to the docs, @joeberkovitz. I skimmed over them, and it seems to me that they're primarily about how to interpolate HRTFs so you can still get good localization of sound with only a "few" measured HRTF responses. Perhaps I missed something while skimming, but it seems the sources are still treated as basically point sources.
@rtoy you’re right about what the papers describe, but I thought maybe one could approximate a spatially dispersed “volumetric” source by interpolating an array of fine-grained point sources from a few coarse-grained ones.
Thanks for confirming that. I agree with you that, with the current API, an array of point sources to represent the volumetric source is the best we can do now.
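A small sketch of that interpolation idea: the coarse anchor positions below (say, the bends of a river) and the per-segment density are made-up values, and `ctx` is an existing AudioContext:

```js
// One shared feed for all panners, scaled down since they carry the same signal.
const shared = new GainNode(ctx, { gain: 0.05 });

// Coarse anchor positions, e.g. the bends of a river (placeholder values).
const anchors = [[0, 0, 0], [10, 0, 5], [20, 0, 5], [30, 0, -5]];
const perSegment = 8; // fine-grained sources synthesized per coarse segment

const lerp = (a, b, t) => a + (b - a) * t;

for (let s = 0; s < anchors.length - 1; s++) {
  const [ax, ay, az] = anchors[s];
  const [bx, by, bz] = anchors[s + 1];
  for (let i = 0; i < perSegment; i++) {
    // Linearly interpolate a dense row of point sources between the anchors.
    const t = i / perSegment;
    const p = new PannerNode(ctx, {
      panningModel: 'equalpower',
      positionX: lerp(ax, bx, t),
      positionY: lerp(ay, by, t),
      positionZ: lerp(az, bz, t),
    });
    shared.connect(p).connect(ctx.destination);
  }
}
// Connect any looping water source into `shared` to drive the whole array.
```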
TPAC 2022:
Hello,
Describe the feature
This feature stems from my unanswered Stack Overflow question.
I would like to be able to specify a mesh or 2D shape for the size of a 3D PannerNode.
This functionality would be useful for playing the looping sound of a road as you walk along it, representing buildings with a looping sound, playing the sound of a river or ocean, playing the looping sound of leaves rustling in the wind, playing the sound of water flowing through pipes, and any sound that emanates from a large region of space.
Is there a prototype?
You can follow along with example 1 below using this minimal prototype.
I have thought of three possible solutions, but each is ugly. The first is to move a single PannerNode to the point on the shape nearest the listener (see the sketch after this list). There are two major problems with this approach:
1.1. It is extremely slow to update the nearest point on multiple complex objects every time the listener moves.
1.2. This approach does not handle sounds coming from multiple sides of the user at once, as in example 2.2.
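For concreteness, a minimal sketch of this nearest-point approach for a single straight road segment; the segment endpoints are placeholders, and repeating this search per frame for many complex shapes is exactly the cost described in 1.1:

```js
// A straight road treated as the segment from a to b (placeholder coordinates).
const a = { x: 0, y: 0, z: 0 };
const b = { x: 0, y: 0, z: 100 };

// Standard projection of a point onto a segment, clamped to the endpoints.
function closestPointOnSegment(p, a, b) {
  const abx = b.x - a.x, aby = b.y - a.y, abz = b.z - a.z;
  const apx = p.x - a.x, apy = p.y - a.y, apz = p.z - a.z;
  const len2 = abx * abx + aby * aby + abz * abz || 1; // avoid divide-by-zero
  const t = Math.max(0, Math.min(1, (apx * abx + apy * aby + apz * abz) / len2));
  return { x: a.x + abx * t, y: a.y + aby * t, z: a.z + abz * t };
}

// Run on every listener move, snapping the panner to the nearest road point.
function updateRoadPanner(panner, listenerPos) {
  const q = closestPointOnSegment(listenerPos, a, b);
  panner.positionX.value = q.x;
  panner.positionY.value = q.y;
  panner.positionZ.value = q.z;
}
```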
Describe the feature in more detail
There are many examples where this would be useful. Here they are from most simple to most complex (a sketch of the corresponding listener setup follows the list):
1. A road:
1.1. The user is walking alongside the road at [8,5], facing with their left ear toward the road, orientation (0,0,-1). They go to [8,10], [8,15], [8,50], and the whole time they should hear the road at the same volume. Currently, the sound falls farther behind the user as they move.
1.2. When the user is coming back down the y axis at [3,115], they are facing the road with orientation (0,0,1), so it should sound as if the road is in front of them. Currently, the sound is very far away and ahead.
1.3. When the user is at [3, 50], the sound should play equally in both ears.
1.4. If the user is facing (1,0,-1) and is at [8, 5], the sound should come from the forward right to the back left.
2. A building:
2.1. When the user walks along the right side of the building at [15,3], [15,6], and [15,9], facing up the y axis, the sound should remain constant in the left ear.
2.2. When the user is at [3,3] facing up the y axis, the sound should be both to the right of the user and in front of the user.
2.3. When the user is at [9,9], the sound should play equally loudly in both ears.
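For reference, here is how the listener pose in example 1.1 might map onto the AudioListener API. Treating the issue's 2D [x, y] coordinates as the API's x/z ground plane is an assumption, and browsers without the AudioParam-based listener properties need the older setPosition()/setOrientation():

```js
// Example 1.1 as listener state: standing at [8, 5], left ear toward the
// road, facing orientation (0, 0, -1).
const { listener } = ctx;
listener.positionX.value = 8;
listener.positionY.value = 0;  // ear height, flattened to the ground plane
listener.positionZ.value = 5;
listener.forwardX.value = 0;   // facing direction from the example
listener.forwardY.value = 0;
listener.forwardZ.value = -1;
listener.upX.value = 0;        // head upright
listener.upY.value = 1;
listener.upZ.value = 0;
```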
I have been looking at the cone attributes (PannerNode.coneInnerAngle, PannerNode.coneOuterAngle, and PannerNode.coneOuterGain), but they appear to deal only with a single-point sound source, changing which direction the sound faces (as in the Boombox example).
I'm probably misunderstanding how the attributes work, but it doesn't seem as if the Web Audio API allows for sound objects that have a size.
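For reference, this is everything the cone attributes control: a directivity pattern around a single point, not a spatial extent. A sketch with arbitrary values (`ctx` assumed to be an existing AudioContext):

```js
const directional = new PannerNode(ctx, {
  orientationX: 1, orientationY: 0, orientationZ: 0, // which way the source points
  coneInnerAngle: 60,   // full volume inside this cone
  coneOuterAngle: 180,  // gain falls toward coneOuterGain between the cones
  coneOuterGain: 0.1,   // level outside the outer cone
});
```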