Replies: 92 comments 33 replies
-
The idea is fun but would probably be a mess. Bad latency might also be unstable, meaning that your local metronome would have a speed varying with the network latency. It also implies adding a lot of synchronization information for this protocol to work, instead of the current "best effort" mode of Jamulus. I personally like the approach of Jamulus described in Volker's paper as following the "keep it simple" principle. Handling latency smartly is at the heart of Ninjam. I think it is good to have distinct software with distinct functions. But it's just my opinion :)
-
I totally agree with you. I do not like the idea of having a metronome in Jamulus. If you need a metronome, something else is not configured correctly or you simply cannot jam in realtime because of some physical restriction (too far away from each other, your internet provider does not have low ping times, etc.).
-
The idea is fun but would probably be a mess. Bad latency might also
be unstable, meaning that your local metronome would have a speed varying
with the network latency. It also implies adding a lot of synchronization
information for this protocol to work, instead of the current "best
effort" mode of Jamulus.
The idea could be implemented in various levels of "intelligence":
1st level: Just implement an independent metronome ticker on the server
as the central coordination point, spreading a timing reference
unconditionally and as evenly as possible to every participant. This
should presumably split the average metronome latency by two, and help
by much more as far as the differences between the individual
participants of a session are concerned, especially if they experience
about the same individual latency relative to the server.
2nd level: Implement a short calibration phase, invoked on demand:
measure *average* response times for a short period (like 3 s) based on
each member returning pulsed test signals sent to them. Modify
subsequent individual metronome transmissions according to this (static)
measurement to equalize the expected latency differences (a rough sketch
of this follows after the 3rd level). This will not catch sudden
quasi-permanent changes in network conditions between calibration runs.
3rd level: Add local, non-calibrated timestamp information to the UDP
packets sent to the server. A single 32-bit integer (with overrun
detection) would be completely sufficient, IMHO. Derive changing
*relative* latencies by comparing their drift relative to each other
(assumption: each computer's clock is sufficiently precise for the
duration of a rehearsal). Average this over several seconds to flatten
out unmanageable jitter effects. Derive the slightly deviating
latencies and adjust the relative metronome beats accordingly.
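A rough sketch of how the 2nd-level calibration could look (all names and numbers here are invented for illustration only, this is not Jamulus code): average each client's measured round-trip times over the calibration window, take half of that as the one-way latency, and hold the tick back for the faster clients by the difference to the slowest one.

```python
from statistics import mean

def one_way_latency_ms(rtt_samples_ms):
    """Estimate one-way latency as half of the averaged round-trip time."""
    return mean(rtt_samples_ms) / 2.0

def tick_send_offsets_ms(rtt_by_client):
    """How long to hold back each client's tick before sending it.

    The slowest client gets the tick immediately (offset 0); faster clients
    are delayed by the difference, so the ticks arrive roughly together.
    """
    latency = {c: one_way_latency_ms(s) for c, s in rtt_by_client.items()}
    slowest = max(latency.values())
    return {c: slowest - l for c, l in latency.items()}

# Made-up RTT samples (ms) from a 3 s calibration phase:
rtts = {"alice": [40, 44, 42], "bob": [90, 96, 94], "carol": [60, 58, 62]}
print(tick_send_offsets_ms(rtts))  # bob: 0 ms, carol: ~16.7 ms, alice: ~25.7 ms
```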
I personally like the approach of Jamulus described in Volker's paper
<http://llcon.sourceforge.net/PerformingBandRehearsalsontheInternetWithJamulus.pdf>
as following the "keep it simple" principle.
I agree completely with the KISS strategy, but I don't think that my
suggestion violates it. I'd consider the increase in complexity for the
suggestions above quite small compared to the complexity of the
existing Jamulus as it is today.
Cheers,
Petra
-
Hi, a metronome helps a lot! We use this. The way we do it: we add a member “clicky” running on a laptop in the same network as the server. Only the drummer listens to “clicky” and we try to listen to the drummer (that was already hard in our rehearsal studio 😂). It works fine; only for the drummer it is sometimes really difficult. But it prevents us from slowing down during the song. What a really great program Jamulus is!!! We have used it during Corona for a month now; it is really fun and it works!!!
-
Hi, a metronome helps a lot! We use this. We add a member “clicky”
running on a laptop in the same network as the server.
This is about the method we use at present as well, since one of us (me)
is typically in the same network as the server. I use a smartphone
metronome lying on the keyboard. But if we are more spread out (beginning
next week) this method will double the latency, hence my suggestion.
I should perhaps mention that we try to organize classical music over
Jamulus, so there is usually no such leading person with us, and quite
some rests that tend to slow us down, since at the end of the rests
everyone is trying to listen to the others.
Cheers,
Petra
-
The metronome helps us for rock songs above 110 BPM, where we tend to decrease in tempo... but with version 3.5 and a Behringer I/O interface we manage to keep the delay under 29 ms. And we live in Hilversum, Loosdrecht and Ankeveen, around 10 kilometres apart. So only the drummer maintains the tempo...
-
With my band we can play rock songs with tempos like 170 BPM. We all only hear our own signal, which comes back from the server. That way we keep in sync.
-
This is not entirely correct. A metronome also helps participants to stay on tempo, because otherwise they listen to each other and the slight, almost unnoticeable delay adds up to something really noticeable. (I haven’t yet used Jamulus, working on a proper Debian package for it first, but the person who recommended it to me on the OSAMC mailing list noted this in her tests of it.)
-
As a musician, you never ever need a metronome to rehearse or to play within a band. The metronome is useful for practicing your instrument alone. As a music teacher, I always tell my students to use it when they practice, but never when they play in a band. What you need when playing in a band is to listen to the other musicians you're playing a song with, and not to listen to a little plastic and electronic box that knows nothing about music. In other words, using a metronome in a band just kills the musicality. Mostly because it's annoying for people who don't need it (audio pollution), and the people who think they need it need in fact to listen to the other musicians (and to work on their instrument at home, maybe even with a metronome). Not to mention that it won't cope with any of the songs you'll play that need tempo variation (even slight BPM variations to highlight and strengthen small mood changes in a song). Having a metronome in Jamulus is a good recipe for a musical experience disaster in my opinion.
That is pretty much how any rock band functions all over the world 😉
-
..., and the people who think they need it,
need in fact to listen to the other musicians
This is exactly what should be *avoided* while using Jamulus: as I
stated before, and as some others in here confirmed, listening to
the fellow musicians will inevitably lead to a slow-down due to the
ever-present latency. As it is small, you *think* you can orient
yourself by listening to your fellow musicians, but you can't.
This is even more so if there are some rests in your part - which may
be rather seldom in rock/pop but is quite frequent in classical music. In
addition, with most classical pieces there is no such thing as a
drummer. :-) (Okay, there are exceptions like Bolero, but I would like
to play some other pieces as well ...)
Cheers,
Petra
-
@Petra-Kate I don't understand what you mean by "split the average metronome latency by two". We are enslaved by latency, we do not have control over it. Could you clarify that point?
👍 very interesting. How would you estimate this theoretically? I guess my reasoning below is wrong. Would you be able to make something accurate? Assume the worst bandmate latency is L, expressed in milliseconds, the tempo is S in BPM, and the duration is D minutes. Is it right to estimate that the song will be stretched by L×S×D milliseconds? With this reasoning, a 3-minute song at 60 BPM with a 30 ms latency would last 5.4 seconds longer.
-
1st level: Just implement an independent metronome ticker on the server
as the central coordination point, spreading a timing reference
unconditionally and as evenly as possible to every participant. This
should presumably split the average metronome latency by two, and help
by much more as far as the differences between the individual
participants of a session are concerned.
@Petra-Kate <https://github.com/Petra-Kate> I don't understand what you
mean by "split the average metronome latency by two". We are enslaved by
latency, we do not have control over it. Could you clarify that point?
For the calculation I just assume some numbers: average latency 50 ms,
jitter 5 ms (playing across Germany, with someone situated in a poorly
connected area :-)).
If the person with a 50 ms latency is providing the metronome, it will
arrive at the other players 100 ms later (50 ms latency towards the
server and then 50 ms forward latency again). In addition the jitter
will increase, once again due to the double pass, to an average of
probably 7.5 ms.
If we have an autonomous metronome at the server, it will send its
strokes independently, so they will arrive at the players about 50 ms
later (depending on the individual latencies, of course), but with all
players delayed mostly alike. So the span of latency is roughly cut in
half. The jitter will be the one experienced by each player
individually, so it keeps its old value.
listening to the fellow musicians will inevitably lead to a
slow-down due to the
ever-present latency. As it is small, you *think* you can orient
yourself by listening to your fellow musicians, but you can't.
👍 very interesting. How would you estimate this theoretically? I guess
my reasoning below is wrong. Would you be able to make something accurate?
Assume the worst bandmate latency is L, expressed in milliseconds, the
tempo is S in BPM, and the duration is D minutes. Is it right to
estimate that the song will be stretched by L×S×D milliseconds? With
this reasoning, a 3-minute song at 60 BPM with a 30 ms latency would
last 5.4 seconds longer.
I'd like to reason on the basis of relative deceleration:
Let's assume we have 120 BPM, hence 2 beats per second (BPS). If each
beat is delayed by 30 ms we have a retardation of 60 ms every second, or
roughly 1/20 of the present speed. The effect will decrease over time
since the number of beats per unit of time decreases, but nevertheless
the speed is constantly slowing down. I'm too lazy to do an exact
mathematical calculation right now, but you will arrive at a (near)
standstill very soon: after about half a score sheet you will be at
approximately half the original tempo. And this is exactly what we were
observing when trying to listen to each other as we are used to from
live playing.
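To put rough numbers on both estimates, here is a toy calculation (a sketch only, under two deliberately naive assumptions, not a model of real ensemble behaviour): in model (a) every beat is heard late by the one-way latency but the players keep their own period, which reproduces the L × BPM × minutes estimate above; in model (b) the players fully wait for each other, so every beat period grows by the latency, which heads towards the standstill described here.

```python
def fixed_stretch_extra_seconds(latency_ms, bpm, minutes):
    """Model (a): every beat is heard latency_ms late, once; total extra time."""
    beats = bpm * minutes
    return beats * latency_ms / 1000.0

def mutual_follow_tempo_bpm(latency_ms, bpm, beats_played):
    """Model (b): each beat period grows by latency_ms; tempo after n beats."""
    period_ms = 60000.0 / bpm + beats_played * latency_ms
    return 60000.0 / period_ms

print(fixed_stretch_extra_seconds(30, 60, 3))   # 5.4 -> the 5.4 s estimated above
print(mutual_follow_tempo_bpm(30, 120, 17))     # ~59 BPM: half tempo after ~17 beats
```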
Of course this is not quite correct in practice, as every musician will
listen at least partially to his/her inner metronome, but the more you
tend to listen to your fellows, the stronger this effect is.
(Btw.: I guess this is one of the principal duties of the conductor
during a concert: to give the orchestra a visual lead with his beat.
Luckily the velocity of light is slightly higher than that of
sound. :-) Mind the fact that we have a delay of about 3 ms/m in a live
performance, so for large orchestras with distances of 20 - 30 m between
the left-most and right-most player this should already play some role,
at least for really fast passages.)
Cheers,
Petra
-
Then one is not using the right tools and/or it's misconfigured. If, in a band playing configuration, one musician has to stop listening to the other musicians, then it's a failure. Playing in a band is about listening to the others.
Yes, that is right, there is no drummer in classical music, but there is a "conductor"... and we were talking about a rock band here. Moreover, you're proving my point even more with this example: in classical music there are more musicians and the music is even harder to play. And guess what? Nobody is listening to a metronome 😉
-
Olivier Humbert dixit:
in classical music there are more musicians and the music is even
harder to play. And guess what? Nobody is listening to a metronome
Because one can see the conductor there, with virtually no latency.
A server-generated metronome (Petra’s #1) would be the minimum and
very useful. People will be using this far beyond the “jam session”
initial use case. (I’d also think Jamulus might do for voice conferences;
of course no need for a metronome there, but this proves my point.)
Also consider amateur musicians, who need this to stay somewhat in sync.
-
Yes, and a conductor is a human connected to the music, a metronome isn't.
It has nothing to do with "amateur" vs. "professional" (or whatever). It has to do with playing as a musician versus training by yourself. If you are training by yourself, yes, you need a metronome. If you're playing with other musicians and as a musician yourself, then the last thing you want is to have your attention focused on a click-click machine rather than on the other musicians you're playing with. If there is too much latency for that to happen, bad luck: you can't really play music with other musicians.
-
From the entire thread above I think we can conclude that we would be best served by a metronome that adds the ticks locally (as a channel whose volume one can change). I have the feeling that the NTP part of the thread was an example that got out of hand. Fundamentally, the client already knows its delay to the server. So from that it only needs to know the meter and the tempo to start ticking away at the synchronized timestamps, roughly as sketched below. I was hoping to find some time to look into this, but have not had the chance yet.
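A minimal sketch of that idea (the function and parameter names are assumptions, not the Jamulus API): given the tempo, a server-side timestamp for the first beat, and the client's already-measured clock offset and one-way delay, the client can compute its local tick times and click entirely locally.

```python
import time

def local_tick_times(server_first_beat_s, bpm, clock_offset_s, one_way_delay_s, count):
    """Local wall-clock times at which this client should click.

    clock_offset_s converts server time to local time; adding one_way_delay_s
    lines the click up with the audio coming back from the server rather than
    with absolute wall-clock time (set it to 0 for globally simultaneous ticks).
    """
    period_s = 60.0 / bpm
    first_local = server_first_beat_s + clock_offset_s + one_way_delay_s
    return [first_local + i * period_s for i in range(count)]

def run_metronome(tick_times, click=lambda i: print("tick", i)):
    """Crude scheduler; a real client would click inside the audio callback."""
    for i, t in enumerate(tick_times):
        wait = t - time.time()
        if wait > 0:
            time.sleep(wait)
        click(i)
```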
-
Yes, at least from my point of view (and suggestions).
From my point of view it was an intentional suggestion to do it completely locally (yet synced globally), with no consideration of the server <-> client <-> server delay that is observed currently. At times when that delay changes (jitter?) we don't want the metronome to hiccup or lag. Jamulus will do its best to handle the audio streams in such a situation, but for a metronome there's no better way to behave and no better source of truth than just the uniformity of time. BTW: the best that Jamulus can possibly achieve (by detecting the delay change and reacting by adjusting the metronome shift accordingly) will be uniformity anyway.
Whichever way you do it (average delay/2 or "simply" the local clock), it will be highly appreciated.
-
@msf-git it should not be hard to make a separate piece of code (even in Python) to play local ticks under the control of a little server one starts next to the Jamulus server, as a proof of concept.
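Something along these lines could serve as such a proof of concept (the tiny UDP "protocol" and all names here are invented and have nothing to do with the real Jamulus protocol; it also assumes the participants' clocks are reasonably NTP-synced): a controller started next to the Jamulus server collects listeners and sends them a tempo plus a start time, and each musician runs a listener that clicks locally.

```python
import json, socket, time

CONTROL_PORT = 50505  # arbitrary port chosen for this sketch

def controller(bpm, register_window_s=10.0, lead_in_s=2.0):
    """Run next to the Jamulus server: collect listeners, then send a start message."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", CONTROL_PORT))
    sock.settimeout(register_window_s)
    clients = set()
    try:
        while True:
            _, addr = sock.recvfrom(64)   # any datagram counts as "I want ticks"
            clients.add(addr)
    except socket.timeout:
        pass
    msg = json.dumps({"bpm": bpm, "start": time.time() + lead_in_s}).encode()
    for addr in clients:
        sock.sendto(msg, addr)

def listener(controller_host, click=lambda n: print("\a tick", n)):
    """Run on each musician's machine: register, wait for the start, then tick."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(b"register", (controller_host, CONTROL_PORT))
    cfg = json.loads(sock.recvfrom(1024)[0])
    period_s = 60.0 / cfg["bpm"]
    n = 0
    while True:
        wait = cfg["start"] + n * period_s - time.time()
        if wait > 0:
            time.sleep(wait)
        click(n)  # replace with a real audio click for lower jitter
        n += 1
```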
-
Just today I tried playing with a metronome by connecting Audacity playing a rhythm track into the Jamulus client using Voicemeeter. This was with 3 people (including myself) playing classical music (using acoustic instruments means "only hear our own signal which comes back from the server" isn't really an option). It worked pretty well; it definitely helped avoid the slowdown that would usually happen due to the lag. One person using ASIO4ALL with their standard built-in soundcard was still lagging just slightly behind the beat; I don't know if a hypothetical Jamulus integrated metronome could compensate for that, but it would be nice if that were possible.
-
Hi all - so that we can better keep the Issues list to things that we have clear specs for, I'm going to migrate this classic discussion to a real discussion!
-
WTF is this thing? It looks the same, just even less clear to use.
-
Is there a simpler solution (articulated by others above):
My band has been successful with #1 for months, even when some members were at 70 ms Overall Delay. Is there some reason that #2 can't work for acoustic instruments and vocal groups? Instructions for how to achieve the "injection" are readily available for Mac, Windows, and probably Linux.
-
DavidSavinkoff dixit:
I've pondered this one for a while and my observations are:
1. An intelligent metronome will benefit some musicians greatly.
2. An intelligent metronome will be of no use for some musicians.
3. An intelligent metronome is a reasonable feature.
4. There are musicians that want to hear their delayed instrument through Jamulus.
5. There are musicians that do not want to hear their delayed instrument through Jamulus.
6. Musicians that play instruments with a soft attack (non-percussive) may agree with 1 and 4.
7. Musicians that play an electronic keyboard instrument may agree with 1 and 4.
8. Musicians that play instruments with a hard attack (percussive) may agree with 2 and 5.
9. Vocalists… probably 1, 4 and 5
-
dingodoppelt dixit:
screamingly loud. Just as on stage when there are 8 brass players
blasting my head off from behind ;) For Jamulus I found I can't play an
acoustic instrument any other way if I want to play in time.
That reminds me of when I was seated near the trumpets, the other singers
of my part behind me, and I could hear neither them nor myself.
For vocalists, this is actually irritating. We kind of need the echo
to mix with the other voices *and* be delayed as little as possible,
because we cannot disable the direct feedback (bone conduction, or
however this is called in English). We can use a conductor to stay
*ahem* somewhat synchronous. But doing without the mixed feedback
is hard. I tried to sing with one finger in one ear once, after having
read about that, and it completely threw me off; I couldn’t even tell
whether I was even remotely close to the correct pitch.
Having a metronome integrated in Jamulus means that
• the server can take the varying latencies to the participants into account
• the metronome can be controlled from one of the running graphical
Jamulus instances, so the conductor can slow it down, speed it up,
temporarily pause it, etc. during the piece, in a musician-friendly
way
and this is kinda something that could help choral adoption.
I’ve not entirely thought this out. I’ve not yet practiced with
purely a metronome, and the entire online stuff is frustrating,
but, given the Corona shit, this is what we’ve got :/ After all I’m
an amateur musician, if that; more a programmer who’s researched
musical engraving (of vocal scores mostly) in depth and likes to
sing a bit.
I think the metronome also needs at least something of a playlist
ability, where you can program certain speeds to later switch back and
forth between instantly with a single keypress. For example, go to
http://www.mirbsd.org/music/free/ and get Haßler -- Cantate Domino
in a format useful for you (mscx/mscz = MuseScore 3, xml = MusicXML);
there’s a ♩=♩. speedup in there (which is rendered in the MIDI/MP3).
bye,
//mirabilos
-
I'm another vote for some sort of metronome feature. I'm a beginning guitarist using Jamulus to do one-on-one lessons with my teacher. My goal is to reach a level of skill where I am listening to the other player, but I'm not there yet. Jamulus has been a marvelous improvement over Zoom for our lessons; I am deeply grateful to Volker and the entire community. I hope you'll consider the use-case of "Music Lesson" for adding a metronome. I've attempted several solutions, such as a nearby phone playing a metronome app into my mic channel, mixing in a clicktrack via loopback, and adding a third player via additional hardware. As workarounds, they've been functional, at the cost of considerable complexity and setup. An easy, in-client solution would be a boon for music instructors working with less technically adept or less temporally precise students.
-
My personal summary:
In my opinion, the optimal solution would be something external to Jamulus which would still provide good usability. I'm thinking of a special Jamulus client (it could even be a fancy web application!) which could feed arbitrarily complex metronome data into a Jamulus server. "Metronome as a service", if you want. Go to a website, enter the Jamulus server, enter your metronome settings, press "Play". I think this is doable. If I had some more time, I'd probably hack up a proof of concept at some point. I can't promise anything, so if anyone wants to borrow this idea, feel free to. :)
-
My experience with Jamulus is that band members drift away from each other to the point where it becomes impossible to recover. Even with a regular click, each member hears it at a different time, and a dominant band member may drive the beat far away from the click. This is particularly challenging between the lowest- and highest-latency members.
A mechanism that lets each user receive some form of sync would be beneficial, and the benefit is maximised if the sync is generated in the Jamulus server itself. That immediately reduces the latency and jitter of the sync for everyone, because there is no codec or network latency between the sync source and the server. Users who have said in this discussion that the click would be intrusive for their workflows could simply turn it off. Obviously a sync mechanism must be started and stopped as required; you don't want a metronome running between tracks, during setup, etc.
I don't think triggering the click in advance necessarily helps: everyone would then hear a click before the audio from the other users arrives. If the click is synchronised to the server, then everyone plays against their own one-hop delay only. If you use local monitoring, then your own audio is one hop advanced and everyone else's audio is one hop retarded, so you end up with everyone at two-hop latency.
It would be advantageous to have a direct audio input to the server, so that such a "click / metronome" could be inserted, if desired by the server owner, without the overhead of codec processing. It would also facilitate dynamic tempo changes, as the source of audio could be a sequenced click, e.g. a drum machine.
One workflow described here is to use an extra client to feed a click into the server. This requires running the extra client, with its processing overhead and additional latency. I would prefer to see a feature built into the server. I would like to have both options:
The sync signal could be sent to the clients for them to generate the audible / visual sync (e.g. a metronome), or it could be an audio signal mixed with the monitoring feed. The former would allow each user to dial in their own preference for (pre-)delay, i.e. if a user wants the click to trigger earlier or later, or not at all, they can.
There were suggestions to use a third-party solution, but this disconnects the synchronisation from Jamulus and makes the problem worse. Jamulus knows what audio it is delivering where and at what time, so it has the data required to deliver the sync signal. An asynchronous system would almost certainly drift and fail. There was a suggestion that multiple NTP servers only provide resilience. This is not necessarily true: NTP has the ability to tune clocks so that the client has high-accuracy, low-jitter time. There are also other protocols that provide highly accurate timing.
I really want to see this feature. I am struggling to get a workable Jamulus environment due to latency being up to 60 ms, which is challenging to play against. If a sync could arrive at 30 ms it might reduce the impact and allow this to work better. If I can contribute to the codebase I will do so.
-
There is often distortion and noise on the Jamulus monitoring feed due to packet loss. This can make it difficult to play, so I add some direct local mix to avoid a total loss of concentration. The degradation of the sound can also be a significant distraction.
-
I appreciate that this discussion has been revived after a long break. A long time ago I thought fairly extensively about how a conductor role could be added to Jamulus and how this could be implemented, but I have not yet published these thoughts.
-
This issue has been migrated from SourceForge. Created: 2020-04-03
Creator: Petra-Kathi
An idea to minimize latency differences:
The server can detect the latencies of the individual clients by clock
comparisons on the transmitted packets, so it should be able to
advise the individual clients to create metronome ticks (not audible
to the other participants) with individual time shifts of some (tens
of) milliseconds, so that the ticks are heard with the best relative
timing.
The individual tick timing should be derived from observing the ping
timing over the last, say, 5 seconds, so it would average out
individual, exceptionally delayed packets (a rough sketch of this
averaging follows below).
The participant with the slowest line should then get the tick first,
and the others should be delayed appropriately by a few milliseconds.
The result should be a better-aligned combined sound, even though it
would be delayed by about the one-way latency of the slowest
participant, but rather consistently so.
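A rough illustration of the averaging part (an invented helper, not part of Jamulus): keep roughly the last five seconds of one-way latency estimates per client and derive the per-client tick pre-delay from the smoothed values, so that a single exceptionally late packet does not shift the metronome. The pre-delay itself follows the same "slowest first" rule sketched earlier in the thread.

```python
from collections import deque
import time

class SmoothedLatency:
    """Sliding-window average of one-way latency estimates for one client."""

    def __init__(self, window_s=5.0):
        self.window_s = window_s
        self.samples = deque()          # (arrival_time, latency_ms)

    def add(self, latency_ms, now=None):
        now = time.time() if now is None else now
        self.samples.append((now, latency_ms))
        while self.samples and now - self.samples[0][0] > self.window_s:
            self.samples.popleft()       # forget samples older than the window

    def average_ms(self):
        return sum(l for _, l in self.samples) / len(self.samples) if self.samples else 0.0

def tick_pre_delays_ms(smoothed_by_client):
    """Slowest line ticks first; everyone else is held back by the difference."""
    avg = {c: s.average_ms() for c, s in smoothed_by_client.items()}
    slowest = max(avg.values())
    return {c: slowest - a for c, a in avg.items()}
```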