SDL_OpenAudio() and callback frequency
SparkyNZ


Joined: 02 Nov 2010
Posts: 72
Hi. Can somebody please help me understand how often the audio callback function is called?

Using the values below, as given in the online documentation, how often would the callback function be called? I don't understand the relationship between 'samples' and 'freq'. I need to understand how often the callback function is called for my music playing/composing app, so I can synchronise the playing of new notes and instruments in time.

Code:
    /* Set 16-bit stereo audio at 22 kHz */
    fmt.freq = 22050;
    fmt.format = AUDIO_S16;
    fmt.channels = 2;
    fmt.samples = 512;        /* A good value for games */
    fmt.callback = mixaudio;
    fmt.userdata = NULL;


Cheers
Sparky
SDL_OpenAudio() and callback frequency
David Olofson
Guest

On Wednesday 27 April 2011, at 20.52.36, "SparkyNZ" wrote:
Quote:
Hi. Can somebody please help me understand how often the audio callback
function is called?
[...]
Quote:
Code:
/* Set 16-bit stereo audio at 22 kHz */
fmt.freq = 22050;
fmt.format = AUDIO_S16;
fmt.channels = 2;
fmt.samples = 512; /* A good value for games */
fmt.callback = mixaudio;
fmt.userdata = NULL;
[...]

In this example, the audio device will consume 22050 16-bit stereo sample frames every second. Since one callback generates 512 frames, the callback needs to be called 22050 / 512 (approximately 43) times per second to keep up.
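
In C, that arithmetic looks like this (a minimal back-of-the-envelope sketch, using the spec values from the first post):

Code:
    #include <stdio.h>

    int main(void)
    {
        int freq = 22050;   /* sample frames per second */
        int samples = 512;  /* sample frames per callback */

        /* ~43.07 callbacks per second, ~23.2 ms apart */
        printf("%.2f callbacks/s, %.2f ms apart\n",
               (double)freq / samples, 1000.0 * samples / freq);
        return 0;
    }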


--
//David Olofson - Consultant, Developer, Artist, Open Source Advocate

.--- Games, examples, libraries, scripting, sound, music, graphics ---.
| http://consulting.olofson.net http://olofsonarcade.com |
'---------------------------------------------------------------------'
SparkyNZ


Joined: 02 Nov 2010
Posts: 72
Thanks David, so if the sample rate was 44100, 16-bit stereo, that would be 44100 sample frames per second - each frame being 4 bytes. With a buffer size of 512 frames (ie. 2048 bytes), it would be called 44100/512, approx 86 times per second.

So.. if I had a music player (sequencer) that needs to play notes at 120 BPM, a quarter note "resolution" would be 480 quarter notes per minute, or 480/60 = 8 quarter notes per second. (For some reason this doesn't sound right to me - so please correct me if I'm wrong :) )

If I wanted to create a sequencer that would service new notes at a rate of 8 quarter notes per second, I'd either need to load new sample data within the callback approx every 10 calls, or better still, have a separate thread that loads sample data into a buffer 8 times per second.

..and I suppose the best way to avoid changing the sample data when the callback is being called would be to double-buffer somehow and switch between buffers once the sample data is committed to the buffer that the callback would be using?
SDL_OpenAudio() and callback frequency
Will Langford
Guest

Do I smell someone writing their own mod player? :)

Sparky: you need to doubly-decouple your audio from the hardware.


That is, you should only be copying data within the SDL callback, not processing any kind of logic.


Your main loop (or other thread, whatever) should be generating a buffer that the callback will pull data from. 
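
A minimal sketch of that arrangement, with one producer thread and the callback as the only consumer (all names are illustrative, reusing the mixaudio name from the first post; the volatile counters stand in for real synchronisation such as SDL_LockAudio() or atomic operations):

Code:
    #include <SDL.h>
    #include <string.h>

    #define RING_FRAMES 8192                    /* power of two, in frames */
    static Sint16 ring[RING_FRAMES * 2];        /* 16-bit stereo, interleaved */
    static volatile Uint32 wpos = 0, rpos = 0;  /* monotonically increasing */

    /* Producer (main loop or worker thread): push rendered frames. */
    int ring_push(const Sint16 *src, Uint32 frames)
    {
        Uint32 i;
        if (wpos - rpos + frames > RING_FRAMES)
            return 0;                           /* full; try again later */
        for (i = 0; i < frames; ++i) {
            Uint32 f = (wpos + i) % RING_FRAMES;
            ring[f * 2] = src[i * 2];
            ring[f * 2 + 1] = src[i * 2 + 1];
        }
        wpos += frames;
        return 1;
    }

    /* Consumer (SDL audio callback): only copies, no sequencing logic. */
    void mixaudio(void *userdata, Uint8 *stream, int len)
    {
        Sint16 *out = (Sint16 *)stream;
        Uint32 want = (Uint32)len / 4;          /* bytes -> stereo frames */
        Uint32 have = wpos - rpos;
        Uint32 n = want < have ? want : have;
        Uint32 i;
        for (i = 0; i < n; ++i) {
            Uint32 f = (rpos + i) % RING_FRAMES;
            out[i * 2] = ring[f * 2];
            out[i * 2 + 1] = ring[f * 2 + 1];
        }
        rpos += n;
        memset(out + n * 2, 0, (want - n) * 4); /* underrun -> silence */
    }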


-Will




On Wed, Apr 27, 2011 at 3:35 PM, SparkyNZ wrote:
Quote:
Thanks David, so if the sample rate was 44100, 16-bit stereo, that would be 44100 sample frames per second - each frame being 4 bytes. With a buffer size of 512 frames (ie. 2048 bytes), it would be called 44100/512, approx 86 times per second.

So.. if I had a music player (sequencer) that needs to play notes at 120 BPM, a quarter note "resolution" would be 480 quarter notes per minute, or 480/60 = 8 quarter notes per second. (For some reason this doesn't sound right to me - so please correct me if I'm wrong)

If I wanted to create a sequencer that would service new notes at a rate of 8 quarter notes per second, I'd either need to load new sample data within the callback approx every 10 calls, or better still, have a separate thread that loads sample data into a buffer 8 times per second.

..and I suppose the best way to avoid changing the sample data when the callback is being called would be to double-buffer somehow and switch between buffers once the sample data is committed to the buffer that the callback would be using?



SDL_OpenAudio() and callback frequency
Mason Wheeler
Guest

That doesn't sound right to me either. You're equating "beat" with "whole note", and the two are completely different musical concepts.


From: SparkyNZ
Subject: Re: [SDL] SDL_OpenAudio() and callback frequency
Quote:
Thanks David, so if the sample rate was 44100, 16-bit stereo, that would be 44100 sample frames per second - each frame being 4 bytes. With a buffer size of 512 frames (ie. 2048 bytes), it would be called 44100/512, approx 86 times per second.

So.. if I had a music player (sequencer) that needs to play notes at 120 BPM, a quarter note "resolution" would be 480 quarter notes per minute, or 480/60 = 8 quarter notes per second. (For some reason this doesn't sound right to me - so please correct me if I'm wrong)
Re: SDL_OpenAudio() and callback frequency
SparkyNZ


Joined: 02 Nov 2010
Posts: 72
Will Langford wrote:
Do I smell someone writing their own mod player? :)


Indeed I am, Will! :) Actually I'm writing my own tracker composing program. I just hope I know what I'm doing. :)

Yeah, I hear what you're saying about the decoupling. Separate thread for sure.
Re: SDL_OpenAudio() and callback frequency
SparkyNZ


Joined: 02 Nov 2010
Posts: 72
Mason Wheeler wrote:
That doesn't sound right to me either. You're equating "beat" with "whole note", and the two are completely different musical concepts.


Yeah I know very little about music.. So in 4/4 time, would a beat actually be a quarter note?
SDL_OpenAudio() and callback frequency
Mason Wheeler
Guest

Quote:
From: SparkyNZ
Subject: Re: [SDL] SDL_OpenAudio() and callback frequency

Mason Wheeler wrote:
Quote:
That doesn't sound right to me either. You're equating "beat" with "whole note",

and the two are completely different musical concepts.

Yeah I know very little about music.. So in 4/4 time, would a beat actually be a quarter note?

Yes. See http://en.wikipedia.org/wiki/Tempo#Beats_per_minute for the
technical details.

SDL_OpenAudio() and callback frequency
David Olofson
Guest

On Wednesday 27 April 2011, at 22.35.02, "SparkyNZ" wrote:
Quote:
Thanks David, so if the sample rate was 44100, 16-bit stereo, that would be 44100 sample frames per second - each frame being 4 bytes. With a buffer size of 512 frames (ie. 2048 bytes), it would be called 44100/512, approx 86 times per second.

Yep, sounds right. :-)


Quote:
So.. if I had a music player (sequencer) that needs to play notes at 120 BPM, a quarter note "resolution" would be 480 quarter notes per minute, or 480/60 = 8 quarter notes per second. (For some reason this doesn't sound right to me - so please correct me if I'm wrong :) )

Actually, a "beat" is generally a quarter note, so that's 120 quarter notes
per minute, or two beats per second.

The "480 per minute" would be 16ths - and 8 *16ths* per second sounds
reasonable for 120 BPM. :-)


Quote:
If I wanted to create a sequencer that would service new notes at a rate of
8 quarter notes per second, I'd either need to load new sample data within
the callback approx every 10 calls,

I think it's a better idea to count audio samples rather than callbacks.
Buffer size is a configuration option that can potentially be set very high if
latency is not critical (such as in a plain music player, or during cut
scenes), so you don't want that to have any impact on timing accuracy.

Also, looking at MIDI sequencers and MIDI (as in, the old 31250 bps wires) in
general, millisecond accuracy or better is desirable. I'd say sample accuracy
whenever possible!
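
For example, a running frame counter kept in the callback gives sample-accurate scheduling no matter what the buffer size is (a sketch with illustrative names, assuming 16-bit stereo output):

Code:
    static Uint64 frames_done = 0;       /* total frames rendered so far */
    static Uint64 next_event_frame = 0;  /* absolute frame of next note  */

    void mixaudio(void *userdata, Uint8 *stream, int len)
    {
        int i, frames = len / 4;         /* 16-bit stereo: 4 bytes/frame */
        for (i = 0; i < frames; ++i) {
            if (frames_done == next_event_frame) {
                /* Trigger the next note(s) here, then schedule the
                   following event; e.g. += 22050 frames for one beat
                   at 120 BPM and 44.1 kHz. */
                next_event_frame += 22050;
            }
            /* ...render one frame into stream here... */
            ++frames_done;
        }
    }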


Quote:
or better still, have a separate thread that loads sample data into a buffer 8 times per second.

I would strongly recommend against that! It's inefficient, inaccurate, and hardly works at all on general purpose operating systems. Do it all by means of logic in the audio callback instead, and get reliable, sample-accurate timing for "free".

This is how I handle it in ChipSound, the sound engine I use in Kobo II (some
"noise" removed) :
---------------------------------------------------------------------
static void cs_AudioCallback(void *ud, Uint8 *stream, int len)
{
    CS_state *st = &cs_globalstate;
    Sint16 *devicebuf = (Sint16 *)stream;
    int remain = len / 4;   /* bytes -> 16-bit stereo sample frames */

    ...

    cs_ProcessMIDI(st);
    while(remain)
    {
        unsigned frag = remain > CS_MAXFRAG ? CS_MAXFRAG : remain;
        memset(masterbuf, 0, frag * sizeof(int));
        cs_ProcessVoices(&st->groups[0], masterbuf, frag);
        cs_ProcessMaster(st, masterbuf, devicebuf, frag);
        devicebuf += frag * 2;  /* advance by frag stereo frames */
        remain -= frag;
    }

    ...
}
---------------------------------------------------------------------

Basically, it processes CS_MAXFRAG sample frames at a time until there is room for less than CS_MAXFRAG frames, at which point it processes a "short" fragment to complete the buffer. This way, I can set CS_MAXFRAG to whatever I like.

(And, I've set it rather low - 64 samples - to reduce CPU cache footprint.
This can be a major performance win, as most DSP code will run a lot faster
than the RAM can keep up with. Just don't set it so low that entry/exit
overhead eats the profit. :-)


Internally, cs_ProcessVoices() further subdivides fragments as needed (same
idea; you just use "frames until next event" instead of CS_MAXFRAG), allowing
everything to be sample or sub-sample accurate.

(Tech trivia: ChipSound is driven by a "microthreading" realtime scripting
engine that runs one thread per voice, controlling all parameters with sample
or sub-sample accuracy, sending messages and stuff. The language was really
intended for low level "chip style" sound programming only, but all Kobo II
sfx and music so far is written entirely in it, using a plain text editor.
Couldn't resist the temptation to try it the 8-bit way! :D )


Quote:
..and I suppose the best way to avoid changing the sample data when the callback is being called would be to double-buffer somehow and switch between buffers once the sample data is committed to the buffer that the callback would be using?

There is no need to bother with that, unless you have very good reasons to do
the actual processing outside the audio callback. Buffering and doing the work
in another thread can be a good idea when dealing with compressed streams with
large, fixed size chunks, or if you have a very complex sequencer that needs
dynamic memory management and stuff - but for a "normal" synth/sound/music
engine, I think it's just complicating things and limiting your options. Real
time control options, more specifically; using "songs" as sound effects and
that sort of stuff.

BTW, ChipSound *does* use dynamic memory management, so I wouldn't say that's
a showstopper either. The scripting engine needs call stacks, voices are
allocated dynamically and so on... The objects that are actually allocated and
freed in the realtime context are preallocated and pooled using LIFO stacks.
It can refill the pools from the realtime context in case of emergency (which
*usually* works without causing drop-outs), but tuning the startup parameters
for the project at hand should keep that from ever happening in a finished
product.


SDL_OpenAudio() and callback frequency
Will Langford
Guest

On Wed, Apr 27, 2011 at 4:03 PM, SparkyNZ wrote:
Quote:
Will Langford wrote:
Do I smell someone writing their own mod player?

Indeed I am, Will! Actually I'm writing my own tracker composing program. I just hope I know what I'm doing.

Yeah, I hear what you're saying about the decoupling. Separate thread for sure.


Years ago I wrote a mod player, then an s3m player.  Nothing terribly fancy.  Didn't use SDL though, just Windows waveOut stuff.  I might see if I can dig it up again... although I think it's long lost :(.  Straight up C, commented, etc.  Hope I still have it somewhere.

-Will
 
Re: SDL_OpenAudio() and callback frequency
SparkyNZ


Joined: 02 Nov 2010
Posts: 72
Thanks again David for a very informative response. I'm actually at work at the moment so I don't have the time to look into this right now (wish I did! :) ). I'll try and digest what you're saying later on this evening, so don't be surprised if you hear back from me again.

It's good that you're working with chip sounds - what I'm doing is similar to GoatTracker. I'm writing my own player that will use ReSID to generate C64 sounds (that's phase 1 at least :) ). I'm just finding the whole audio/samples/mixing/sequencing a bit of a headache at the moment because basic sound playback and mixing is all new to me. However, I would like to use real samples in the next phase, so I'd like to do it the right way rather than just hack something together - ie. it's important that I get synchronisation of ReSID and sample playback, and of course correct musical timing.
Re: SDL_OpenAudio() and callback frequency
SparkyNZ


Joined: 02 Nov 2010
Posts: 72
OK.. I'll think out loud and see if I get this right. Let's assume I want 44100 samples per second and I want 16 bit samples.

Assume I have a tune that plays at 120 BPM, which would be 2 beats per second. With the rate above, 1 beat would require 22050 samples. Yes? For the sake of simplicity, let's assume that this tune is a children's tune and never contains notes quicker than quarter notes. So.. I could have an array of note values and each element would contain a quarter note value (pitch).

So my callback would be called every.. whenever.. and I would have to count the number of samples between successive callback calls until 22050 samples have been requested before looking up another set of notes - ie. I would consult my array of notes every 22050 samples and provide a new set of buffer fragments for each callback at that point. Yes?

If I'm right so far, then let me try and take things one step further. Counting 22050 samples for quarter note only resolution would be fine.. but let's increase the resolution to eighth note resolution. For tunes containing eighth notes at 120 BPM, I'd have to provide a new set of notes every 11025 samples.. Yes?

So what happens when I want to use 16th notes or 32nd notes? Would I further halve the 11025 sample count, or would I keep a floating point count and provide the new notes based on a rounded count? In the case of 16th notes, I'd have to provide new notes every 5512.5 samples.. so would it be OK to provide a note on a count of 5513, and then on a count of 11025? (ie. would the brain not notice the timing?)

One of the things I am beginning to notice now is that buffer size would become important for counting samples accurately when a tune contains lots of rapid/short notes. For instance, if a buffer was set to, say, 4096 samples, it would be impossible to provide 16th notes because I would have no way of counting 5512 samples. I'd have to set the buffer to 5512 or a factor of 5512, wouldn't I? (such as 5512/2 -> 2756 samples).

So would it be fair to say that the buffer size should be the same as the smallest note resolution required?

I'm kind of getting the idea that I would have to set a different buffer size depending upon the chosen BPM of a tune.

Lots of questions, folks, but am I on the right track, Dave? Oh boy.. I hope I am. :)
SDL_OpenAudio() and callback frequency
David Olofson
Guest

On Thursday 28 April 2011, at 11.44.32, "SparkyNZ" wrote:
Quote:
OK.. I'll think out loud and see if I get this right. Let's assume I want 44100 samples per second and I want 16 bit samples.

I think it's easier to just think in terms of complete sample frames (like the
SDL API does), leaving sample format and number of channels out of the higher
level equations.


Quote:
Assume I have a tune that plays at 120 BPM, which would be 2 beats per
second. With the rate above, 1 beat would require 22050 samples. Yes?

Yep.


Quote:
For
the sake of simplicity, let's assume that this tune is a children's tune
and never contains notes quicker than quarter notes. So.. I could have an
array of note values and each element would contain a quarter note value
(pitch).

So my callback would be called every.. whenever.. and I would have to count
the number of samples between successive callback calls until 22050
samples have been requested before looking up another set of notes - ie. I
would consult my array of notes every 22050 samples and provide a new set
of buffer fragments for each callback at that point. Yes?

Yeah, that's basically it.

For any non-trivial application, it might be a good idea to use a priority queue to keep track of what's up next. That way, you just check the "top" event to see how many sample frames remain until it's time to process that event, and then you generate min(buffer_size, time_until_next_event) sample frames.
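
A sketch of that loop (ev_top_frame() and ev_pop_and_handle() stand in for whatever priority queue you use; the queue is assumed to always hold at least a far-future sentinel event):

Code:
    extern Uint64 ev_top_frame(void);    /* absolute frame of top event */
    extern void ev_pop_and_handle(void); /* fire the top event */
    extern void render_voices(Sint16 *out, unsigned frames);

    static Uint64 frames_done = 0;

    void render(Sint16 *out, unsigned frames)
    {
        while (frames) {
            Uint64 until = ev_top_frame() - frames_done;
            if (until == 0) {
                ev_pop_and_handle();     /* event due right now */
                continue;
            }
            /* Render up to the next event or the end of the buffer,
               whichever comes first. */
            unsigned n = until < frames ? (unsigned)until : frames;
            render_voices(out, n);
            out += n * 2;                /* stereo interleaved */
            frames -= n;
            frames_done += n;
        }
    }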


Quote:
If I'm right so far, then let me try and take things one step further.
Counting 22050 samples for quarter note only resolution would be fine..
but lets increase the resolution to eighth note resolution. For tunes
containing eighth notes at 120BPM, I'd have to provide a new set of notes
every 11025 samples.. Yes?

So what happens when I want to use 16th notes or 32nd notes? Would I
further halve the 11025 sample count, or would I keep a floating point
count and provide the new notes based on a rounded count? In the case of
16th notes, I'd have to provide new notes every 5512.5 samples.. so would
it be OK to provide a note on a count of 5513, and then on a count of
11025? (ie. would the brain not notice the timing?)

What sequencers and the like normally do is use "pulses" and a running integer
"music time" counter, translating musical time into milliseconds, audio sample
frames or whatnot whenever an event is actually to take place. This way, exact
timing is maintained over longer periods of time, as rounding to the nearest
physical timing unit is done for each event. That is, no rounding error
buildup.

120 PPQN (Pulses Per Quarter Note) is a common timing resolution in MIDI
sequencers (and consequently, MIDI files), as it fits nicely with the usual
note timing/durations as well as triplets. This is usually configurable, and
many modern sequencers use much larger "magic" PPQN values for higher
resolution with preserved absolute integer accuracy.
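
For instance, translating an absolute song position in pulses to an absolute sample frame per event might look like this (a sketch; the function name is made up):

Code:
    #define PPQN 120  /* pulses per quarter note */

    /* Rounding happens per event, from the absolute pulse count, so the
       error never accumulates. */
    Uint64 pulse_to_frame(Uint64 pulse, double bpm, Uint32 rate)
    {
        double frames_per_pulse = rate * 60.0 / (bpm * PPQN);
        return (Uint64)(pulse * frames_per_pulse + 0.5);
    }

At 120 BPM and 44100 Hz that gives 183.75 frames per pulse, so a quarter note (120 pulses) lands exactly 22050 frames after the previous one.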

It's really a lot like the "lines" of trackers; a fixed timing resolution for
the song, or part of a song. The difference is that with timestamped events
(like in MIDI files and the like), you have a sparse representation of the
"patterns", so timing resolution has little effect on memory footprint and
file size.


Quote:
One of the things I am beginning to notice now is that buffer size would become important for counting samples accurately when a tune contains lots of rapid/short notes. For instance, if a buffer was set to, say, 4096 samples, it would be impossible to provide 16th notes because I would have no way of counting 5512 samples. I'd have to set the buffer to 5512 or a factor of 5512, wouldn't I? (such as 5512/2 -> 2756 samples).

A fixed buffer size corresponding to a "tick" (or "pulse" in MIDI sequencer terminology) might work for simple applications, but the accuracy quickly becomes very low as the "tick" resolution increases. More seriously, with a constant rounding error being added every tick, the tempo is significantly off, and you quickly drift out of sync with the intended timeline. (At 44.1 kHz and 120 BPM, a 16th note is 5512.5 frames; rounding every tick down to 5512 frames drifts by four frames per second.) Perhaps even more seriously, the size of the error depends on the selected output sample rate.


Quote:
So would it be fair to say that the buffer size should be the same as the
smallest note resolution required?

I'm kind of getting the idea that I would have to set a different buffer
size depending upon the chosen BPM of a tune.

I think you should decouple event timing from buffer size entirely. It's only marginally more complex, and almost infinitely superior. ;-)

Remember the trackers in the 8 and 16 bit days, where timing was generally
based on the PAL (50 Hz) or NTSC (60 Hz) display refresh rate, with poor tempo
resolution, only certain tempos converting properly between PAL and NTSC
etc...? That's the kind of issues you run into when relying on the audio
sample rate; just not quite that bad. :-)


Quote:
Lots of questions, folks, but am I on the right track, Dave? Oh boy.. I hope I am. :)

You'll sort it out. It's not quite rocket science, though there are some
tricky areas - and it's not all just plain right or wrong, but rather depends
a bit on your requirements.

Either way, I think looking at how audio/MIDI sequencers do things is a good
idea, as they've evolved over decades, and cover pretty much everything one
might want to do musically, at least on the technical, "internals" level.


Re: SDL_OpenAudio() and callback frequency
SparkyNZ


Joined: 02 Nov 2010
Posts: 72
David Olofson wrote:

120 PPQN (Pulses Per Quarter Note) is a common timing resolution in MIDI
sequencers (and consequently, MIDI files), as it fits nicely with the usual
note timing/durations as well as triplets.


Got it, 120 is cool because it's easily divisible by 2, 4, 3, 6, 8 and 12 - more than I'd ever need.

David Olofson wrote:

I think you should decouple event timing from buffer size entirely. It's only marginally more complex, and almost infinitely superior. ;-)


This is the way I'd originally intended to do it but I'm becoming confused about how much logic to put where.. I visualise two threads doing the work here (one thread being the already existing SDL thread that calls the callback) and a separate event timing logic thread of my own. Originally I thought these 2 threads would do the job - my thread generates the new note data when required and the SDL callback grabs fragments of whatever the generation (sequencing logic) thread has created.

Unfortunately I'm becoming confused because I'm hearing that I should be using the callback to count the samples and generate the new notes.. So which thread would do the counting - the callback or the generation thread? When you mentioned timing events according to the number of milliseconds or ticks within a tune, that's where I've become lost - I don't know whether I should be counting samples or ticks, or both.. or if I'd need some sort of messaging/flagging between the 2 threads.

I would like to keep with the idea of provider and consumer - I'm just not 100% sure on the synchronisation of the two threads. At least I'm getting a better understanding but I'm still not convinced enough in my own mind to go away and code. :)
SDL_OpenAudio() and callback frequency
David Olofson
Guest

(This is mostly general audio/music programming stuff, not specific to SDL.
Maybe we should move it off list?)


On Thursday 28 April 2011, at 21.03.25, "SparkyNZ" wrote:
[...]
Quote:
David Olofson wrote:
Quote:
I think you should decouple event timing from buffer size entirely. It's
only marginally more complex, and almost infinitely superior. ;-)

This is the way I'd originally intended to do it but I'm becoming confused about how much logic to put where.. I visualise two threads doing the work here (one thread being the already existing SDL thread that calls the callback) and a separate event timing logic thread of my own.

This is how audio/MIDI sequencers tend to work, but that's only because audio
I/O is buffered, whereas MIDI I/O generally is not - so you need to handle
MIDI events realtime in a separate, timer driven thread. However, even there,
softsynths and their MIDI handling is usually done all in the audio thread,
for maximum accuracy. Driving softsynths from the "live" MIDI sequencer thread
is just complicating things, and results in less accurate timing.

Multiple threads can be useful for audio if you need to do CPU intensive and/or timing-wise "nasty" processing (ie. large, fixed-size blocks), but if you're doing this in a realtime musical application, you're probably doing it wrong. ;-)


Quote:
Originally I
thought these 2 threads would do the job - my thread generates the new
note data when required and the SDL callback grabs fragments of whatever
the generation (sequencing logic) thread has created.

Unfortunately I'm becoming confused because I'm hearing that I should be
using the callback to count the samples and generate the new notes.. So
which thread would do the counting - the callback or the generation
thread?

IMHO, there are two correct ways of doing this:

1) Use only one "thread": The audio callback. Basically, your engine should
have an entry point that generates whatever integer number of audio samples
requested from it, and then you call that from the callback. Done!

2) Same engine interface as above, but do the rendering in an extra "background" thread. Use a circular buffer or similar to feed data from that thread to the audio callback, adding *substantial* buffering, to cut the processing thread some slack.


The second approach will reduce the risk of drop-outs due to the engine
"process" call taking too long, as those calls are made by another thread,
with enough buffering that it doesn't matter if it temporarily falls behind
every now and then. However, it will also add substantial latency (due to the
buffering), so it's not really suitable if you want responsive realtime
control. It's definitely not suitable if you intend to drive instruments from
live MIDI input and the like! You want the lowest (constant!) latency you can
possibly get for that.


Quote:
When you mentioned timing events according to the number of
milliseconds or ticks within a tune, that's where I've become lost - I
don't know whether I should be counting samples or ticks, or both.. or if
I'd need some sort of messaging/flagging between the 2 threads.

That's an implementation detail, basically - though you need to think about
accuracy, rounding errors and overflows.

One way to implement it would be to maintain a fixed or floating point version
of the current song position in ticks, and bump that for each audio buffer
processed. You need pretty good accuracy for that to work in the long term,
though. It may not seem like a problem at first, but what happens if you
manage tracks separately, for example...? Different event time deltas will
result in different rounding errors, so the tracks may drift out of sync over
time.

To eliminate the rounding error add-up, you can translate the song position
corresponding to the first sample frame of each buffer instead, but that can
become rather tricky if you have tempo changes, loops, branches and stuff in
the songs.
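
The buffer-start translation could look like this (a sketch assuming a constant tempo; tempo changes, loops and branches would need a table of (frame, pulse, tempo) segments instead):

Code:
    /* Inverse of the pulse->frame mapping: derive the musical position
       from the absolute frame count at the start of each buffer, so
       per-buffer rounding errors cannot build up. */
    Uint64 frame_to_pulse(Uint64 frame, double bpm, Uint32 rate)
    {
        double pulses_per_frame = bpm * PPQN / (rate * 60.0);
        return (Uint64)(frame * pulses_per_frame);
    }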


Quote:
I would like to keep with the idea of provider and consumer - I'm just not
100% sure on the synchronisation of the two threads. At least I'm getting
a better understanding but I'm still not convinced enough in my own mind
to go away and code. :)

Provider/consumer has nothing to do with threads - and using multiple threads
just adds a whole set of new problems. (Thread-safe buffering, threads
fighting for the CPU, added latency, ...) A separate processing thread is a
valid solution in some cases, but I believe an interactive music application
is generally not one of them. :-)


Re: SDL_OpenAudio() and callback frequency
SparkyNZ


Joined: 02 Nov 2010
Posts: 72
David Olofson wrote:
(This is mostly general audio/music programming stuff, not specific to SDL.
Maybe we should move it off list?)


I'm happy to take this discussion off the forum and make it private, but if the moderators don't mind, and are happy to let others benefit from it, I would just leave it here - although perhaps many of the responses have drifted from the original topic and would need to be moved.
SDL_OpenAudio() and callback frequency
Rainer Deyke
Guest

On 4/28/2011 13:03, SparkyNZ wrote:
Quote:

David Olofson wrote:
Quote:

120 PPQN (Pulses Per Quarter Note) is a common timing resolution in
MIDI sequencers (and consequently, MIDI files), as it fits nicely
with the usual note timing/durations as well as triplets.

Got it, 120 is cool because it's easily divisible by 2, 4, 3, 6, 8 and
12 - more than I'd ever need.

Shouldn't you also care about septuplet notes? They're uncommon but not
/that/ uncommon, in my experience.


--
Rainer Deyke
SDL_OpenAudio() and callback frequency
eclectocrat


Joined: 26 Mar 2011
Posts: 72
Buffer size is unrelated to notes. In general you'll want to provide a buffer size that minimizes latency without putting unnecessary strain on the thread scheduler. I go with between 512-4096 frames per buffer. If you want a note to start somewhere in the middle of that buffer, you'd just write your audio data wherever it is supposed to start:

const unsigned int start_frame = audioStream->currentFrame();
const unsigned int stop_frame = start_frame + audioStream->bufferFrames();
const unsigned int note_start_frame = notePlayer->nextNoteStartFrame();

if(note_start_frame >= start_frame && note_start_frame < stop_frame)
{
    /* where in this buffer the note begins */
    const unsigned int note_offset = note_start_frame - start_frame;
    /* play until the note ends or the buffer ends, whichever comes first */
    const unsigned int note_frame_count =
        min(stop_frame, notePlayer->nextNoteStopFrame()) - note_start_frame;
    const float * buffer = notePlayer->getNextNoteFrames(note_frame_count);
    audioStream->copyToStream(buffer, note_offset, note_frame_count);
}


That's the (very) rough gist of it. Find out if your note starts somewhere in this audio buffer. If it does, calculate what sample it starts at and then copy the data from that frame, until the end of the note or the end of the buffer, whichever comes first. That's my very sleepy explanation, I hope it made sense.


PS> As a programmer with a lot of audio experience, let me express my deepest condolences.
Re: SDL_OpenAudio() and callback frequency
SparkyNZ


Joined: 02 Nov 2010
Posts: 72
eclectocrat wrote:
Buffer size is unrelated to notes. In general you'll want to provide a buffer size that minimizes latency without putting unnecessary strain on the thread scheduler. I go with between 512-4096 frames per buffer. If you want a note to start somewhere in the middle of that buffer, you'd just write your audio data wherever it is supposed to start:

PS> As a programmer with a lot of audio experience, let me express my deepest condolences.


I just wish I had a (half?) day spare to sit down and draw this all out and code something. :( Any other programming I can fit into spare 20-30 minute gaps during my day, but this being new to me.. Grrr... :)

I hear what you're saying but I still need to sit down and put all of this discussion together. It still seems logical to me to align the start of each sample buffer with a particular note boundary - it just seems like it could be easier for sequencing and calculating where the notes will be played etc.
SDL_OpenAudio() and callback frequency
David Olofson
Guest

On Saturday 30 April 2011, at 21.32.21, "SparkyNZ" wrote:
Quote:
eclectocrat wrote:
[...]
Quote:
I hear what you're saying but I still need to sit down and put all of this discussion together. It still seems logical to me to align the start of each sample buffer with a particular note boundary - it just seems like it could be easier for sequencing and calculating where the notes will be played etc.

It is logical, if it's accurate enough for your application, or if you're recalculating the buffer size for each buffer with some sort of error feed-forward. However, that's not how audio I/O works these days, at least not on the hardware level, nor in any API useful for serious low latency audio.

So, you need to split and merge buffers one way or another, to fit the
odd/variable buffers from your engine into the I/O buffers. You can do this
physically, by means of an extra thread and a buffer queue, or virtually, by
"warping" your engine processing loop so it can generate any number of sample
frames requested, allowing it to be used directly from the audio callback.
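
A sketch of such a "warped" loop, with the frames-to-next-tick state carried across calls so any requested buffer size works (illustrative names):

Code:
    extern void sequencer_tick(void);       /* fire events due now */
    extern unsigned frames_per_tick(void);  /* e.g. from BPM and PPQN */
    extern void render_audio(Sint16 *out, unsigned frames);

    static unsigned frames_to_tick = 0;     /* carried across calls */

    void engine_render(Sint16 *out, unsigned frames)
    {
        while (frames) {
            unsigned n;
            if (!frames_to_tick) {
                sequencer_tick();
                frames_to_tick = frames_per_tick();
            }
            /* No events inside this span, so render it in one go. */
            n = frames < frames_to_tick ? frames : frames_to_tick;
            render_audio(out, n);
            out += n * 2;                   /* stereo interleaved */
            frames -= n;
            frames_to_tick -= n;
        }
    }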
