The SDL forums have moved to discourse.libsdl.org.
This is just a read-only archive of the previous forums, to keep old links working.


SDL_mixer issue, no one talks about ?!
David Olofson
Guest

On Thu, Dec 5, 2013 at 8:10 PM, JurisL85 wrote:
[...]
Quote:
I have used SDL_GetTicks() and SDL_GetPerformanceCounter() to measure (for
example) 100ms and play the sound. But the problem is that playback is not
consistent, even few 'ms' of time difference can be heard when sounds are
played rapidly.
[...]

I suspect few actually understand the problem, and most just
(incorrectly) assume it's in the nature of things and work around it.
I don't actually know of any games, other than my own, that aren't
"hacking" around the problem by using looped samples for machine guns,
engines and the like.


Anyway, what happens in most sound engines is that the API calls
either queue up messages, or lock the engine and change its state directly.
The next time the audio callback runs, which is typically every 20 ms
or so, new sound effects are started, right at the start of the audio
buffer. So, all your commands are effectively quantized to whatever
buffer period audio output is configured to!

A low latency musical application typically runs audio processing an
order of magnitude more frequently (less than 1 ms is common), which
reduces this issue to acceptable levels for most applications - but
unfortunately, you can't rely on that "solution" (it's still a
hack...) unless you're on a machine that's equipped and configured for
serious music production.

What you can do is timestamp the messages from the API, and delay them
as needed in the mixer to maintain constant latency. That takes the
buffer setting out of the equation; more buffering only increases
latency - not quantization.

However, that still leaves your timing dependent on the game's video
frame rate. If you just check the current time in the mixer API
calls, timing still gets quantized - only now to the rendering frame
rate. If you're running a fixed logic frame rate with
interpolation/"tweening," that's still not good enough!

The next step is to add explicit timestamping support in the API.
(That's what I'm doing in both generations of Audiality, used in Kobo
Deluxe and Kobo II respectively.) Instead of using actual timestamps
for commands, you derive audio command timestamps from game logic
time.

If you tune this carefully enough, you should theoretically get away
with triggering sound effects with millisecond accurate timing, or
even better - but realistically, it's never going to be that accurate
on any normal operating system. So, I'm going to admit that I'm
cheating a little in Kobo II: That 100 RPS minigun is indeed all
realtime synthesis and scripted bullet by bullet - but that script
runs in the audio engine context, and just takes start/stop commands
from the game logic scripts. :-)


TL;DR: This issue can of course be fixed, but it's not trivial. You'd
have to hack SDL_mixer (not sure how deeply) and change its API
slightly.


--
//David Olofson - Consultant, Developer, Artist, Open Source Advocate

.--- Games, examples, libraries, scripting, sound, music, graphics ---.
| http://consulting.olofson.net http://olofsonarcade.com |
'---------------------------------------------------------------------'
_______________________________________________
SDL mailing list

http://lists.libsdl.org/listinfo.cgi/sdl-libsdl.org
SDL_mixer issue, no one talks about ?!
Eric Wing
Guest

On 12/6/13, David Olofson wrote:
Quote:
[...]
TL;DR: This issue can of course be fixed, but it's not trivial. You'd
have to hack SDL_mixer (not sure how deeply) and change its API
slightly.
[...]


David has a great answer.

But as for easy workarounds, I have two additional suggestions:

1) Use the looping feature of Mix_PlayChannel
This might minimize some of the looping overhead, reducing the
latency and perhaps even the randomness.

2) Try ALmixer_PlayChannel with infinite looping
ALmixer is a library I wrote that stays very close to the SDL_mixer
API, so it is pretty easy to port to. It uses OpenAL under the hood.
For infinite looping, ALmixer uses OpenAL's native looping feature,
so I expect its latency/randomness behavior to be among the best you
will be able to get.

In both cases, you'll want to employ an explicit start and stop of the
sound (e.g. on button down and up) instead of trying to explicitly
play each loop yourself.
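Both suggestions combined might look roughly like this with SDL_mixer under SDL2 (the chunk loading, channel handling, and key choice are placeholders): start an infinitely looping sample on key press and halt it on release, instead of re-triggering the sample per shot.

```c
#include "SDL.h"
#include "SDL_mixer.h"

/* Sketch: loop a machine-gun sample forever while the key is held.
   'gun' is assumed to have been loaded elsewhere with Mix_LoadWAV(). */

static Mix_Chunk *gun = NULL;
static int gun_channel = -1;

void handle_event(const SDL_Event *e)
{
    if (e->type == SDL_KEYDOWN && !e->key.repeat &&
        e->key.keysym.sym == SDLK_SPACE) {
        /* loops = -1: loop indefinitely until explicitly halted */
        gun_channel = Mix_PlayChannel(-1, gun, -1);
    } else if (e->type == SDL_KEYUP &&
               e->key.keysym.sym == SDLK_SPACE) {
        if (gun_channel != -1) {
            Mix_HaltChannel(gun_channel);
            gun_channel = -1;
        }
    }
}
```

The `!e->key.repeat` check matters: without it, OS key repeat would restart the loop mid-burst.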

Thanks,
Eric
--
Beginning iPhone Games Development
http://playcontrol.net/iphonegamebook/
JurisL85


Joined: 26 Sep 2013
Posts: 3
Location: Dublin (Riga)
This starts to make sense now, guys. I'll try some of the ideas I got from you
today. I don't want to use any libs other than SDL and OpenGL at the moment, as my code runs on desktops and mobiles. But I will try some things with SDL_mixer today and will post back on how it goes.