The SDL forums have moved to discourse.libsdl.org.
This is just a read-only archive of the previous forums, to keep old links working.


Simple DirectMedia Layer Forums
Outputting text to accessibility tools
Sik


Joined: 26 Nov 2011
Posts: 905
I was trying to implement screen reader support in my game and reached
the conclusion that it's a total mess, so I was wondering if it would
be possible to get this implemented as a feature in SDL itself
instead. Basically, just let the program set the string output to
accessibility tools by calling a function (think of how
SDL_SetWindowTitle works for the titlebar).

I was discussing this with somebody, and on Windows it seems you'd use
WM_GETOBJECT to tell Windows what the control is (you'd include this
text there). I'm not sure how you'd tell it when the text changes, but
anyway. No idea how to do it on other platforms, but I'd like to at
least start getting it implemented on some.

Can we get this feature inside SDL? (also what function name? maybe
SDL_SetAccessibilityText or something like that?)
_______________________________________________
SDL mailing list

http://lists.libsdl.org/listinfo.cgi/sdl-libsdl.org
Outputting text to accessibility tools
Marcus von Appen
Guest

On Wed, Dec 10, 2014, Sik the hedgehog wrote:

Quote:
I was trying to implement screen reader support in my game and reached
the conclusion that it's a total mess, so I was wondering if it would
be possible to get this implemented as a feature in SDL itself
instead. Basically, just let the program set the string output to
accessibility tools by calling a function (think of how
SDL_SetWindowTitle works for the titlebar).

I was discussing with somebody and on Windows it seems you'd use
WM_GETOBJECT to tell Windows what the control is (you'd include this
text here). Not sure how you tell it when the text changed, but
anyway. No idea how to do it in other platforms, but I'd like to at
least start getting it implemented in some.

Can we get this feature inside SDL? (also what function name? maybe
SDL_SetAccessibilityText or something like that?)

I did that years ago for arbitrary application objects in Python for a
GUI toolkit based on pygame (SDL as back-end) on Win32 and Unix
platforms. Not sure about Win7 and Win8 these days, but I think the
MSAA layer is still the same, which means that it should be pretty
straightforward for Windows and OS X. The accessibility status for
Unix/Linux may have changed, since there have been several attempts to
modernize the whole stack in the past (moving away from CORBA to D-Bus).

You can find the MSAA implementation at
http://sourceforge.net/p/ocemp/svn/HEAD/tree/trunk/papi/src/msaa/
and the ATK-based version at
http://sourceforge.net/p/ocemp/svn/HEAD/tree/trunk/papi/src/atk/

I'll gladly help anyone who takes up that challenge with extensive
explanations about both implementations.

Cheers
Marcus

Outputting text to accessibility tools
Sik


Joined: 26 Nov 2011
Posts: 905
I think that on Linux the GUI libraries have some way to communicate
with screen readers and such, but I have no information about this
(but yeah, I think it's something that makes use of D-Bus, and that's
about as much as I know).
Outputting text to accessibility tools
Marcus von Appen
Guest

On Thu, Dec 11, 2014, Sik the hedgehog wrote:

Quote:
I think that on Linux the GUI libraries have some way to communicate
with screen readers and such, but I have no information about this
(but yeah, I think it's something making use of dbus, and that's about
as much as I know).

As I wrote, it used ATK (AT-SPI with CORBA) in the past. The AT-SPI
bridges for accessibility tools were updated heavily (the implementation
originated from GNOME/GTK; KDE/Qt however did not want to pull in GLib)
and I did not follow the latest changes since 2009.

Cheers
Marcus
Outputting text to accessibility tools
Sik


Joined: 26 Nov 2011
Posts: 905
2014-12-11 3:53 GMT-03:00, Marcus von Appen:
Quote:
As I wrote, it used ATK (AT-SPI with CORBA) in the past. The AT-SPI
bridges for accessibility tools were updated heavily (the implementation
originated from GNOME/GTK; KDE/Qt however did not want to pull in GLib)
and I did not follow the latest changes since 2009.

Yeah, I was talking about the current status though. I recall reading
somewhere that they use dbus now (so confirming what you had guessed
before) but I don't remember exactly where I saw this.
Outputting text to accessibility tools
Sik


Joined: 26 Nov 2011
Posts: 905
So, any new info on this?
Outputting text to accessibility tools
Marcus von Appen
Guest

Quote:
On 15.12.2014, at 00:00, Sik the hedgehog wrote:

So, any new info on this?

I did not hear anything from anyone.

Cheers
Marcus
Outputting text to accessibility tools
i8degrees


Joined: 22 Nov 2014
Posts: 39
Sik the hedgehog,

Do you happen to have any prototype code for getting a screen reader functioning properly under Windows? If you are serious about the endeavor, it's arguably not so much a matter of *if* we could get this kind of feature implemented in SDL as it is a matter of somebody doing it.

It looks like it'll be no less than two to four weeks before I feel comfortable putting what I'm working on now aside to take a serious look at prototyping an OS X screen reader using the official APIs available on the platform. It might be nice for me to have some reference code (that's you) to go by to get a feel for things.

Cheers,
Jeffrey Carpenter


Outputting text to accessibility tools
Sik


Joined: 26 Nov 2011
Posts: 905
2014-12-15 7:54 GMT-03:00, Jeffrey Carpenter:
Quote:
Do you happen to have any prototype code for getting a screen reader
functioning properly under Windows? If you are serious about the endeavor,
it's imaginably not so much a matter of *if* we could get this kind of
feature implemented in SDL so much as it is a matter of somebody doing it.

I have some code in Sol for screen reader support, although it doesn't
use any of the best methods I've discussed (one involves talking
to the speech synthesis engine, one involves using the clipboard, one
involves using the titlebar, one involves outputting to the standard
output). I could share them here if people want, but take into
account that none of them talk to the screen reader directly, which is
what I'm having problems with.

I suppose the whole concept of a textbuffer (analogous to a
framebuffer) used by it could be useful.
Outputting text to accessibility tools
i8degrees


Joined: 22 Nov 2014
Posts: 39
Hi again,

Mmm, I doubt that the code by itself would do *me* much good at this point. Maybe once I take a look at the Windows API for this stuff, I'll understand things better and perhaps want to see the code of Sol.

The concept of a text buffer could indeed be a very useful application :-)

P.S. I casually browsed through Qt's Accessibility API at http://qt-project.org/doc/qt-4.8/accessible.html and found it interesting that it appears -- at a high level, at least -- to be similar perhaps to the OS X API model. (I like looking at the API documentation of other implementations to help get ideas for naming things, general structure, etc.)

Cheers,
Jeffrey Carpenter


Outputting text to accessibility tools
Sik


Joined: 26 Nov 2011
Posts: 905
Looking around, on iOS apparently we have this:
http://stackoverflow.com/a/18891418
That's for iOS 7 onwards, though it seems there are answers there for
older versions too. I'll let you decide which version is best to have
as the minimum for this feature.

On Android we'll have to use the speech API, it seems.

2014-12-15 10:07 GMT-03:00, Jeffrey Carpenter:
Quote:
The concept of a text buffer could indeed be a very useful application :-)

The idea is simple: the same way a framebuffer holds the image to show
on screen, a textbuffer holds a string to show on the screen reader.
Basically:

- When the textbuffer is altered, the text is sent to the screen reader
- When the textbuffer becomes empty, the screen reader goes mute
- When the user presses some key to repeat the text, the text in the
textbuffer gets sent to the screen reader again

Does this make any sense? What I'm looking for here is essentially to
make a function that writes to the textbuffer (there would be one for
each window).

Quote:
P.S. I casually browsed through Qt's Accessibility API at
http://qt-project.org/doc/qt-4.8/accessible.html and found it interesting
that it appears -- at a high level, at least -- to be similar perhaps to the
OS X API model. (I like looking at API documentation of other
implementations to help get an idea at naming things, general structuring,
etc.)

Trying to get my head around it, though I have the feeling most of
that won't be relevant (・~・) (we're talking about enhancing just a
normal window after all, no more complex controls to deal with). What
are the name and the description of a control?
Outputting text to accessibility tools
Sik


Joined: 26 Nov 2011
Posts: 905
There's also this if somebody wants to look into it:
http://hg.q-continuum.net/accessible_output2/

It's a multiplatform library that handles screen readers. The code is
MIT licensed, so we probably shouldn't worry much. The biggest problem
is that it's written in Python, so we can't use it directly :P But to
get an idea of how things work, this seems like it could help.
Outputting text to accessibility tools
i8degrees


Joined: 22 Nov 2014
Posts: 39
Hi,

Ah, the implementation would be beautifully elegant if we could manage to get it to work sensibly like so. It feels like an event queue sort of thing to me -- that may be because I've been dealing with animation queues, but anyhow -- it definitely makes sense. Let me start thinking about that...

Screw the Qt Accessibility API, what you describe vaporizes all of that (or so we would hope :P).

Cheers,
Jeffrey Carpenter


Outputting text to accessibility tools
Sik


Joined: 26 Nov 2011
Posts: 905
2014-12-16 7:42 GMT-03:00, Jeffrey Carpenter:
Quote:
Screw the Qt Accessibility API, what you describe vaporizes all of that (so
we would hope Razz).

Well, it's still useful to know what to look for to implement this :P
But yeah, it's like SAPI: it includes lots of functions and took me a
while to get my head around it... and in the end I only really needed
the Speak function. So much for complexity.

We could provide a SAPI backend on Windows later (especially since
it's actually pretty easy even from pure C and I already have working
code), but first I'd like to see if we could implement it on Windows
using the proper UI features so it works with screen readers directly.
Outputting text to accessibility tools
i8degrees


Joined: 22 Nov 2014
Posts: 39
Don't know anything about SAPI :D Speech Application Programming Interface? Sounds neat. Reminds me of the say command in OS X -- a shell command for using the speech voices built into the environment (part of the accessibility stuff).

I do agree that it would be nice to interface with screen readers directly ... I haven't played with it yet, but I've found some sample code for implementing the OS X API for a "simple, accessible tic tac toe game" at https://developer.apple.com/wwdc/resources/sample-code/

It's too early to say for sure, but I'm getting the impression that, for the textbuffer concept to work, one might essentially have to fake a UI element and treat that as the buffer to make requests on behalf of, if that makes sense ... which *shrug* I'd be totally OK with if it is feasible without much trouble ... at least for an initial stab at it.

Cheers,
Jeffrey Carpenter


Outputting text to accessibility tools
Sik


Joined: 26 Nov 2011
Posts: 905
2014-12-16 9:51 GMT-03:00, Jeffrey Carpenter:
Quote:
It's too early to say for sure, but I'm getting the impression that, for the
textbuffer concept to work, one might essentially have to fake an UI element
and treat that as the buffer to make requests on behalf of, if that makes
sense ... which *shrug* I'd be totally OK with if it is feasible without
much trouble ... at least for an initial stab at it.

We shouldn't need to fake a UI element; the window itself should be
enough (and if not, we could just make a custom control). The catch is
that SDL is the one that controls the window, so it's SDL that needs to
handle this as well :P

(I'm aware that SDL can be made to use a window created elsewhere,
which it can't control... I think we should just resort to a different
backend in that case.)
Outputting text to accessibility tools
Sik


Joined: 26 Nov 2011
Posts: 905
Oh, and in case somebody gets confused: SAPI and Speech Dispatcher talk
to the speech synthesis engines directly, not to the screen readers, so
ideally they should be seen only as fallback backends and not as the
proper solution. Having them around is a good idea anyway.

We should probably also have a hint to select the backend much like
the case for the renderer API.
Outputting text to accessibility tools
Sik


Joined: 26 Nov 2011
Posts: 905
Sorry for multiple posting (for those in the forum). Here's some quick
code for SAPI to give you an idea of how it would work (if we
implement that backend on Windows). Of course this would need to be
properly adapted for use in SDL and such (especially where sapi_voice
is stored).

Stuff we need:
------------------------------------------------------------
#include <windows.h>
#include <sapi.h>

ISpVoice *sapi_voice = NULL;
------------------------------------------------------------
(those headers are what's needed on the most recent MinGW-w64, but I
think Visual Studio requires different headers; you may want to check)

To initialize:
------------------------------------------------------------
if (FAILED(CoInitialize(NULL))) {
    // Error...
}
if (FAILED(CoCreateInstance(&CLSID_SpVoice, NULL, CLSCTX_ALL,
                            &IID_ISpVoice, (void **) &sapi_voice))) {
    // Error...
}
------------------------------------------------------------

To say something (replace utf8_to_utf16 with whatever is relevant):
------------------------------------------------------------
wchar_t *wstr = utf8_to_utf16((const char *) str);
sapi_voice->lpVtbl->Speak(sapi_voice, wstr,
                          SPF_PURGEBEFORESPEAK | SPF_IS_NOT_XML, NULL);
free(wstr);
------------------------------------------------------------

To deinitialize:
------------------------------------------------------------
if (sapi_voice != NULL) {
    sapi_voice->lpVtbl->Speak(sapi_voice, L"",
                              SPF_IS_NOT_XML | SPF_PURGEBEFORESPEAK, NULL);
    sapi_voice->lpVtbl->WaitUntilDone(sapi_voice, INFINITE);
    sapi_voice->lpVtbl->Release(sapi_voice);
    sapi_voice = NULL;
}
CoUninitialize();
------------------------------------------------------------
Outputting text to accessibility tools
Sik


Joined: 26 Nov 2011
Posts: 905
And here is Speech Dispatcher (used on Linux), in its own message for
the sake of organization. Again, same deal as with the SAPI example
code.

What we need:
------------------------------------------------------------
#include <libspeechd.h>

static SPDConnection *speechd_connection = NULL;
------------------------------------------------------------

To initialize:
------------------------------------------------------------
speechd_connection = spd_open("<programnameorwhatever>",
                              NULL, NULL, SPD_MODE_THREADED);
if (speechd_connection == NULL) {
    // Error...
}
spd_set_data_mode(speechd_connection, SPD_DATA_TEXT);
------------------------------------------------------------

To say something:
------------------------------------------------------------
spd_stop_all(speechd_connection);
spd_cancel_all(speechd_connection);
if (*str != '\0') {
    spd_say(speechd_connection, SPD_TEXT, str);
}
------------------------------------------------------------

To deinitialize:
------------------------------------------------------------
if (speechd_connection != NULL) {
    spd_close(speechd_connection);
    speechd_connection = NULL;
}
------------------------------------------------------------
Outputting text to accessibility tools
Marcus von Appen
Guest

Quote:
On 16.12.2014, at 13:33, Sik the hedgehog wrote:

2014-12-16 7:42 GMT-03:00, Jeffrey Carpenter:
Quote:
Screw the Qt Accessibility API, what you describe vaporizes all of that (so
we would hope Razz).

Well, it's still useful to know what to look for to implement this Razz
But yeah it's like SAPI, it includes lots of functions and took me a
while to get my head around it... and in the end I only really needed
the Speak function. So much for complexity.

We could provide a SAPI backend on Windows later (especially since
it's actually pretty easy even from pure C and I already have working
code), but on Windows I'd like to see if we could implement it using
the proper UI features so it works with screen readers directly.

I sent some links around on that. For Win32 it is pretty simple to write a generic wrapper; for Unix it is somewhat more complex.

Cheers
Marcus
Outputting text to accessibility tools
Jared Maddox
Guest

Quote:
On Tue, Dec 16, 2014, at 12:10 -0300, Sik the hedgehog wrote to the SDL Development List:

Oh, and if somebody gets confused: SAPI and Speech Dispatcher talk to
the speech synthesis engines directly, not to the screen readers, so
ideally they should be seen only as back-up backends and not as the
proper solution. Having them around is a good idea anyway.

We should probably also have a hint to select the backend much like
the case for the renderer API.


Out of curiosity, what sort of use-cases is the eventual API expected
to support? If a user's selection has moved from one control to
another then outputting a completely new text string to completely
replace the old one seems obvious, but what would the proper behavior
be if the user is editing some text in the middle of a paragraph,
within a text editor?

Call out individual letters as typed?
Call out resulting word as it's typed, starting from scratch every
time that whitespace is inserted?
Call out all modifications when a sufficient pause occurs, and then
start from scratch?
One of the above, but with an audio note about the type of change?
Not call out the entire window, or *shudder* the entire document, right?

How would any of this change with a text prompt, instead of a text
editor? Any other corners that YOU can think of?
Outputting text to accessibility tools
Sik


Joined: 26 Nov 2011
Posts: 905
2014-12-16 22:40 GMT-03:00, Jared Maddox:
Quote:
Out of curiosity, what sort of use-cases is the eventual API expected
to support? If a user's selection has moved from one control to
another then outputting a completely new text string to completely
replace the old one seems obvious, but what would the proper behavior
be if the user is editing some text in the middle of a paragraph,
within a text editor?

Call out individual letters as typed?

According to Orca, this. At least when I tried it and typed
into the terminal, it'd spell out every letter as I input it.

I imagine this would only matter for backends that talk to speech
synthesizers, since if the screen reader is in use instead, the screen
reader should handle this automatically (text is being entered
through the OS facilities, after all). Also, I can't say what happens
when using an IME to enter text, since the speech support here doesn't
understand Japanese at all :( (it just skips over Japanese text)

Quote:
How would any of this change with a text prompt, instead of a text
editor? Any other corners that YOU can think of?

Navigation and such, but I'm thinking that for the kind of programs
that would use this function it's probably a no-brainer (and most of
the stuff that could matter could be handled by the program itself).
Any program that needs more detailed support will most likely be using
native UI controls in the first place.

If somebody else knows of some issue that could be potentially
troublesome, go ahead and say so.
Outputting text to accessibility tools
Joseph Carter


Joined: 20 Sep 2013
Posts: 279
Most of this is configured by the user in their screen reader. It is
common to output either typed letters or typed words as the user
types them. That's only going to work right if you use a native text
input (if modified), though.

Joseph
Resident blind guy. ;)

On Tue, Dec 16, 2014 at 07:40:04PM -0600, Jared Maddox wrote:
Quote:
Quote:
On Tue, Dec 16, 2014, at 12:10 -0300, Sik the hedgehog wrote to the SDL Development List:

Oh, and if somebody gets confused: SAPI and Speech Dispatcher talk to
the speech synthesis engines directly, not to the screen readers, so
ideally they should be seen only as back-up backends and not as the
proper solution. Having them around is a good idea anyway.

We should probably also have a hint to select the backend much like
the case for the renderer API.


Out of curiosity, what sort of use-cases is the eventual API expected
to support? If a user's selection has moved from one control to
another then outputting a completely new text string to completely
replace the old one seems obvious, but what would the proper behavior
be if the user is editing some text in the middle of a paragraph,
within a text editor?

Call out individual letters as typed?
Call out resulting word as it's typed, starting from scratch every
time that whitespace is inserted?
Call out all modifications when a sufficient pause occurs, and then
start from scratch?
One of the above, but with an audio note about the type of change?
Not call out the entire window, or *shudder* the entire document, right?

How would any of this change with a text prompt, instead of a text
editor? Any other corners that YOU can think of?
Outputting text to accessibility tools
Sik


Joined: 26 Nov 2011
Posts: 905
2014-12-17 3:00 GMT-03:00, T. Joseph Carter:
Quote:
Most of this is configured by the user in their screen reader. It is
common to output either typed letters or typed words as the user
types them. That's only going to work right if you use a native text
input (if modified), though.

Yeah, and if the screen reader itself is being used, that won't matter,
since the screen reader will notice that the program is getting text
input (SDL explicitly has a text input mode) and speak whatever it
needs on its own. This really matters mostly when faking it (e.g. when
talking to a speech synthesis engine), and there you don't have much
of an option.
Outputting text to accessibility tools
Sik


Joined: 26 Nov 2011
Posts: 905
OK, I think it's time to start getting this arranged.

So, here's what we have:

- We probably need several backends (some platforms only have one
option, but some have both separate screen reader and speech synthesis
support, and we also need to account for the situation where nothing
works). This also means adding a new hint to select the backend, as
usual.

- We don't want the screen reader output to be turned on by default
(think of what could happen if the backend were a speech synthesis
engine and the user didn't need a screen reader... the program would
speak even though it'd just get in the way).

- Some methods are tied to UI controls, so this output needs to be
window-specific. This can be easily faked for the other backends,
though (by looking at which window has focus).

Anyway, the first two points suggest that it'd probably be better
to implement this as its own subsystem. The third point would require it
to be able to interact with the video subsystem, though (although I
think the game controller subsystem relies on the joystick subsystem,
so this wouldn't be the first time something like this happens). In
any case it seems that we'll need to deal with the subsystems at some
point. What do people think about this?

Also, if this ends up getting its own subsystem, we'll need to come up
with a name for it :P
Outputting text to accessibility tools
Joseph Carter


Joined: 20 Sep 2013
Posts: 279
Question: support for speech separately… just taking advantage of the idea that if you're doing accessibility support, you might as well add the ability to access the OS speech engine for non-accessibility reasons?

Joseph
Sent via mobile

Quote:
On Dec 29, 2014, at 07:57, Sik the hedgehog wrote:

OK, I think it's time to start getting this arranged.

So, here's what we have:

- We probably need several backends (some platforms only have one
option, but some have both separate screen reader and speech synthesis
support, and we also need to account for the situation where nothing
works) This also means adding a new hint to select the backend, as
usual.

- We don't want screen reader to be turned on by default (think of
what could happen if the backend was a speech synthesis engine and the
user didn't need a screen reader... program will speak even though
it'll get in the way)

- Some methods are tied to UI controls so this output needs to be
window-specific. This can be easily faked for the other backends
though (by looking which window has focus).

Anyway, the first two points would hint that it'd be probably better
to implement this as its own subsystem. Third point would require it
to be able to interact with the video subsystem though (although I
think the game controller subsystem relies on the joystick subsystem,
so this wouldn't be the first time something like this happens). In
any case it seems that we'll need to deal with the subsystems at some
point. What do people think about this?

Also if this ends up getting its own subsystem we'll need to come up
with a name for it Razz
Outputting text to accessibility tools
Jared Maddox
Guest

Quote:
On Mon, Dec 29, 2014, at 12:57 -0300, Sik the hedgehog wrote to the SDL Development List:

OK, I think it's time to start getting this arranged.

So, here's what we have:

- We probably need several backends (some platforms only have one
option, but some have both separate screen reader and speech synthesis
support, and we also need to account for the situation where nothing
works) This also means adding a new hint to select the backend, as
usual.


The "nothing works" case may call for an app-supplied callback. Maybe
another hint, and routing via the event subsystem?


Quote:
Anyway, the first two points would hint that it'd be probably better
to implement this as its own subsystem. Third point would require it
to be able to interact with the video subsystem though (although I
think the game controller subsystem relies on the joystick subsystem,
so this wouldn't be the first time something like this happens). In
any case it seems that we'll need to deal with the subsystems at some
point. What do people think about this?

My main concern would be its interaction with the ongoing text-input
work. If I were implementing it, I would want to keep them from involving
each other (the user should always be able to do it themselves without
interference from the library itself, in essence), though I don't know
if that's possible.


Quote:
Also if this ends up getting its own subsystem we'll need to come up
with a name for it Razz


I'm gonna take the lazy route and say that it should be
"Accessibility" / SDL_INIT_ACCESSIBILITY :P
mr_tawan


Joined: 13 Jan 2014
Posts: 161
Although I kind of agree that having a multi-platform accessibility *wrapper* is a good thing, does it really fit what SDL is aimed at? I mean, would it be better to make it a separate library (and maybe have a bridge to SDL if it's really needed)? Also, is it really necessary to integrate the functionality into SDL? Can we keep them separate?

I'm just afraid that one day SDL might become one big monolithic platform that handles everything, even if only parts of it are really used in most cases.

Just my 2 cents.
Outputting text to accessibility tools
Sik


Joined: 26 Nov 2011
Posts: 905
2014-12-30 10:41 GMT-03:00, mr_tawan:
Quote:
Although I kind of agree that having a multi-platform accessibility *wrapper*
is a good thing, does it really fit what SDL is aimed at? I mean, would it
be better to make it a separate library (and maybe have a bridge to SDL if
it's really needed)? Also, is it really necessary to integrate the
functionality into SDL? Can we keep them separate?

Well, part of the issue came from the fact that I had absolutely no
way to implement this properly without modifying SDL (since on Windows
I need to not just intercept some window messages, but also respond to
them during the window procedure, and SDL doesn't seem to be very
helpful about this; and who knows what other requirements there could
be on other platforms!).

I'd rather get this into SDL than go insane trying to figure out some
really ugly hack (that may even have undefined behavior) and that may
still not really work (and thereby get deservedly insulted by the
users to whom I promised this feature).

Quote:
I'm just afraid that one day SDL might become one big monolithic platform
that handles everything, even if only parts of it are really used in most
cases.

Isn't this already the case anyway? (and if it wasn't then why is it
split in several subsystems that can be initialized independently?)

Now seriously, the pattern as far as I know is that SDL mostly handles
talking to the operating-system-specific stuff, while the satellite
libraries take care of higher-level stuff. (I think SDL_net is the only
exception; one could also mention SDL_gpu, but technically that only
understands OpenGL from what I recall, so it's nowhere near as extensive
as what SDL does.) Since this is something that directly involves
operating system APIs, it would indeed be something that belongs in SDL.

Also honestly I'm kind of tired that every time somebody requests
something the answer is "make it into a separate library" regardless
of what is being requested.
Re: Outputting text to accessibility tools
mr_tawan


Joined: 13 Jan 2014
Posts: 161
Sik wrote:

Also honestly I'm kind of tired that every time somebody requests
something the answer is "make it into a separate library" regardless
of what is being requested.


Well, it's kind of counter-intuitive to have one library manage to do everything. Actually, I think 'does it need to be included?' and 'can we separate it?' are the most important questions to ask when someone proposes a new feature.

We can have it as an extension, just like SDL_ttf, SDL_image, or SDL_mixer, right?
Outputting text to accessibility tools
Sik


Joined: 26 Nov 2011
Posts: 905
2014-12-30 12:33 GMT-03:00, mr_tawan:
Quote:
Well it's kind of counter-intuitive to have one library manage to do
everything. Actually I think 'does it need to be included?' and 'can we
separate it?' are the most important questions to ask when someone proposes
a new feature.

We can have it as an extension, just like SDL_ttf, SDL_image, or SDL_mixer,
right?

SDL_ttf uses its own font renderer, then outputs the result to a
surface. SDL_image handles image formats and then loads them into a
surface. SDL_mixer mostly takes care of rendering audio and then
outputting it using an SDL callback. The common thing among all of
these is that they can easily be done without ever knowing what SDL is
doing inside; they just use the SDL API the same way any program would,
without ever really having to deal with the operating system (except
maybe allocating memory and accessing files, but that can be done with
the standard library).

This thing is extremely operating-system specific, which is the polar
opposite of what the satellite libraries do, and it may even need to
mess with resources that SDL reserves for itself.
Outputting text to accessibility tools
Jared Maddox
Guest

Quote:
Date: Mon, 29 Dec 2014 10:39:55 -0800
From: "T. Joseph Carter"
To: SDL Development List
Subject: Re: [SDL] Outputting text to accessibility tools

Q: support for speech separately? just taking advantage of the idea that if
you're doing accessibility support, you might as well add the ability to
access the OS speech engine for non-accessibility reasons?


It makes sense, but at the same time this is slightly more toward
SDL_mixer or SDL_sound's territory. Exposing speech engines via paired
SDL_RWops, AND letting a satellite library use that plus a synthesiser
(in case the OS stuff won't work for some reason) to provide the
generic manifestation is probably the right way to go.

I was going to include a link to a page talking about a cheap (though
imperfect) English speech synthesis algorithm, but I can't seem to
find it (maybe it was something about detecting syllables instead of
actual text-to-speech stuff?).



Quote:
Date: Tue, 30 Dec 2014 13:41:25 +0000
From: "mr_tawan"
Subject: Re: [SDL] Outputting text to accessibility tools

Although I kind of agree that having a multi-platform accessibility *wrapper*
is a good thing, does it really fit what SDL is aimed at?

SDL is aimed at being a platform-abstraction layer: it's a DOS-weight
partial OS specialized for multimedia applications. Some of the
people who will want to use these applications will need
screen-reader or similar support. SDL should therefore provide both the
portions of the system that SHOULD be in SDL if they are to function
correctly, AND enough to use that same support in a platform-agnostic
manner.

Thus: this is necessary.


Quote:
I mean, would it be better to make it a separate library (maybe with a
bridge to SDL if one is really needed)?

Courts in the US have occasionally (it doesn't actually come up often,
from what I understand) ruled that a law that was intended to require
business owners to be handicapped-accessible ALSO applies to web pages
(I understand that one of the major retailers got hit with this... and
lost). It's very easy to infer that it's binding on software in
general, thus it's something that should be supported.

Moving it into a separate library would add unjustified complexity
for programmers.


Quote:
Also, does the functionality really need to be integrated into SDL?
Can we keep them separate?


Is it really needed to keep it out of SDL? No, it doesn't have to be kept out.
Can we make them integrated? Yes, we can make them integrated.


Quote:
I'm just afraid that one day SDL might become one big monolithic platform
that handles everything, even if only parts of it are really used in most cases.


This is not an appropriate cause for fear, but instead for some
self-analysis. Many people will automatically have different ideas
about what should and what should not be in SDL. I think that textured
triangles (and maybe a batching system) should be in it. This is not
because you CAN'T do without them, but instead because those two
features allow both SDL and satellite libraries to do their jobs much
better.


Quote:
Just my 2 cents.


Extending SDL isn't heresy; it simply needs to be restrained.

Adding a full GUI system? THAT would be taking things a touch too far
(we already have graphics, so the support that a gui satellite library
needs is already fully implemented).



Quote:
Date: Tue, 30 Dec 2014 11:20:43 -0300
From: Sik the hedgehog
Subject: Re: [SDL] Outputting text to accessibility tools

2014-12-30 10:41 GMT-03:00, mr_tawan:

Quote:
Quote:
I'm just afraid that one day SDL might become one big monolithic platform
that handles everything, even if only parts of it are really used in most
cases.

Isn't this already the case anyway? (and if it wasn't then why is it
split in several subsystems that can be initialized independently?)


Yeah, it is.



Quote:
Date: Tue, 30 Dec 2014 15:33:09 +0000
From: "mr_tawan"
Subject: Re: [SDL] Outputting text to accessibility tools

Sik wrote:
Quote:

Also honestly I'm kind of tired that every time somebody requests
something the answer is "make it into a separate library" regardless
of what is being requested.



Well it's kind of counter-intuitive to have one library manage to do
everything.

SDL doesn't do "everything", and won't with this extension either.
Now, if SDL directly integrated the satellite libraries? THAT would be
"doing everything".

What SDL is *SUPPOSED* to do is act as an abstraction layer, a
"quasi-OS" that provides you with a generic API for things that would
otherwise require entirely platform-specific code. This is what SDL 1
was created for, and this is what SDL 2 is designed for. This is why
SDL 2 allows you to specify your own OpenGL library, but doesn't
actually implement one itself: that bit's already abstract, the
problem is in the initialization.


Quote:
Actually I think 'does it need to be included?' and 'can we separate it?' are
the most important questions to ask when someone proposes a new feature.


"Does it make more sense combined or separate?" is the question that
should actually be asked, because the ones you listed express the
implication that the correct answer is "separate", regardless of
reality.


Quote:
We can have it as an extension, just like SDL_ttf, SDL_image, or SDL_mixer, right?


Sik has said that any extension version will be a hack, and I believe
that the mention was in the very message you were replying to. If you
want to implement a full-featured extensions API for SDL 2 then it can
be done as a library without problems, but you'll have to actually
implement said extensions API.
Outputting text to accessibility tools
Sik


Joined: 26 Nov 2011
Posts: 905
2014-12-30 23:13 GMT-03:00, Jared Maddox:
Quote:
It makes sense, but at the same time this is slightly more towards
SDL_mixer or SDL_sound's territory. Exposing speech engines via paired
SDL_RWops, AND letting a satellite library use that + a synthesiser
(in case the OS stuff won't work for some reason) to provide the
generic manifestation is probably the right way to go.

I was going to include a link to a page talking about a cheap (though
imperfect) English speech synthesis algorithm, but I can't seem to
find it (maybe it was something about detecting syllables instead of
actual text-to-speech stuff?).

I had at first considered just using the speech synthesis engine for
my game, but the end result is that everybody I asked complained to me
because they want to be able to use their screen readers (because they
already configured them to their needs, which vary wildly among users,
especially depending on their training).

(Also a nitpick: speech engines aren't useful when you're both deaf
and blind, in which case you need a braille display, and screen
readers don't hook into those engines.)

Quote:
Courts in the US have occasionally (it doesn't actually come up often,
from what I understand) ruled that a law that was intended to require
business owners to be handicapped-accessible ALSO applies to web pages
(I understand that one of the major retailers got hit with this... and
lost). It's very easy to infer that it's binding on software in
general, thus it's something that should be supported.

Moving it into a separate library would add unjustified complexity
for programmers.

One thing I want to make clear before it becomes more confusing:
this legal requirement does *not* apply to software per se. The thing
is that many countries require that there be no discrimination against
people with disabilities when offering a service, and several countries
have decided that websites count as a service (and this number keeps
increasing over time). This doesn't mean that the browser has to be
accessible, but rather that a site should be accessible when the
browser supports it.

But yes, anything that makes it easier for developers to make their
software accessible is definitely always welcome (as it encourages
them to do it). This was another of the reasons prompting me to just
include it in SDL itself.
Outputting text to accessibility tools
Joseph Carter


Joined: 20 Sep 2013
Posts: 279
What I think makes sense is:

1. Having access to send text to be spoken by a screen reader.
2. Having access to system text to speech (not always the same).
3. Knowing when the user wants something read (either at the
keyboard cursor or at the mouse cursor) and perhaps a hint of what
they want to know if we have such a thing.

Anything beyond that is well outside SDL's scope. But that much
gives your SDL-using apps access to speech if the system has it, and
it gives them a way to write accessibility if they want to.

Visual cues for the deaf, monaural mixing, non-traditional input
devices, colorblind-friendly settings, high contrast text (exactly
not like FROZEN BUBBLE!), whatever else … Those are things for the
application to do if they're wanted.

Deep accessibility features don't belong in SDL, but access to the
API might not be over the top.

Joseph

On Tue, Dec 30, 2014 at 01:41:25PM +0000, mr_tawan wrote:
Quote:
Although I'm kinda agreed that having multi-platform accessibility *wrapper* is a good thing, does it really fit in what SDL is aimed for ? I mean, is it better to make it a separated library (and may be having a bridge to SDL if it's really need to) ? Also is it really needed to integrate the functionality into SDL ? Can we keep them separated ?

I'm just afraid that one day SDL might become one big monolithic platform that handle everything even if only parts of it are really used in most case.

Just my 2 cents.





Outputting text to accessibility tools
Joseph Carter


Joined: 20 Sep 2013
Posts: 279
On Tue, Dec 30, 2014 at 11:20:43AM -0300, Sik the hedgehog wrote:
Quote:
Now seriously, the pattern as far as I know is that SDL mostly handles
talking to the operating system specific stuff, while the satellite
libraries take care of higher level stuff (I think SDL_net is the only
exception, one could also say SDL_gpu but technically that only
understands OpenGL from what I recall, it's not anywhere as extensive
as what SDL does). Since this is something that involves directly the
operating system APIs, that would mean it would be indeed something
that belongs to SDL.

Ah, but in SDL_net's case, the OS-specific stuff is actually pretty
generic and platform-independent as-is, with just a few exceptions,
mostly for Windows, and even those are in practice mostly quick
redefinitions of constants. Mostly: the remaining exceptions are
significant and important. But that's all the more reason for there
to be a library to abstract them out for you if you're not
comfortable with those differences.

And the helper libs hosted on libsdl.org kind of rank a bit higher
than the others, especially now that there's no longer a place on the
website to help you find SDL-using projects (games, apps, and helper
libs…)

Joseph

Outputting text to accessibility tools
Joseph Carter


Joined: 20 Sep 2013
Posts: 279
On Tue, Dec 30, 2014 at 03:33:09PM +0000, mr_tawan wrote:
Quote:
Quote:
Also honestly I'm kind of tired that every time somebody requests
something the answer is "make it into a separate library" regardless
of what is being requested.

Well it's kind of counter-intuitive to have one library manage to do everything. Actually I think 'does it need to be included?' and 'can we separate it?' are the most important questions to ask when someone proposes a new feature.

We can have it as an extension, just like SDL_ttf, SDL_image, or SDL_mixer, right?

It is a reasonable thing to suggest the idea of an extra library, but
if it ain't possible with SDL as it is, or at least if it ain't
practical with SDL as it is, the question isn't whether or not it
should be done outside of SDL, but rather if it should be done at
all. Because if it's going to be done at all, SDL reasonably has to
be changed in some way, either to allow it to be a helper lib, or to
include the functionality directly.

And again, which it should be is not always evident from the outset.
As I said last night, the GameController API is fully an extension
library baked in to SDL proper, basically because Valve wanted it.
Turns out that it's a very good and useful thing, if a little limited
in scope simply because it's exactly what Valve wanted and neither
more nor less. But that's what an ABI-breaking 2.1 is for.

Joseph

Outputting text to accessibility tools
Joseph Carter


Joined: 20 Sep 2013
Posts: 279
On Tue, Dec 30, 2014 at 08:13:47PM -0600, Jared Maddox wrote:
Quote:
Quote:
Q: support for speech separately? just taking advantage of the idea that if
you're doing accessibility support, you might as well add the ability to
access the OS speech engine for non-accessibility reasons?

It makes sense, but at the same time this is slightly more towards
SDL_mixer or SDL_sound's territory. Exposing speech engines via paired
SDL_RWops, AND letting a satellite library use that + a synthesiser
(in case the OS stuff won't work for some reason) to provide the
generic manifestation is probably the right way to go.

I was going to include a link to a page talking about a cheap (though
imperfect) English speech synthesis algorithm, but I can't seem to
find it (maybe it was something about detecting syllables instead of
actual text-to-speech stuff?).

SDL_mixer doesn't actually have access to window events, SDL
internals, or anything of the sort. Speech synthesis is only "part
of sound" on Linux—anywhere else it's an OS call you feed a string
to. And again, since when did SDL_mixer handle your mouse?


Quote:
Quote:
Although I kind of agree that having a multi-platform accessibility *wrapper*
is a good thing, does it really fit what SDL is aimed at?

SDL is aimed at being a platform-abstraction layer: it's a DOS-weight
partial-OS specialized for multi-media applications. Some of the
people that will want to use these applications will need
screen-reader or similar support. SDL should therefore provide both the
portions of the system that SHOULD be in SDL if they are to function
correctly, AND enough to use that same support in a platform-agnostic
manner.

Thus: this is necessary.

Okay, for just one moment I need to take off the busy and disgruntled
developer from the days of yore hat.

I remember back in the day someone took old-school Quake 1 and redid
its sound system completely to use positional audio and otherwise
did away with the sound mixer's (many) quirks. Then they blacked the
screen. The blind players wiped the floor with the sighted ones.
Menus were not accessible though because it was a research project
rather than an intent to create an accessible FPS. And the results
of the "research" were that the game wasn't "fair" for the sighties.
Give people back their screens and give everyone some optical camo
(Snake? Snake?! Snaaaaaaaaake!!) so that people cannot be seen any
further away than they could be heard and use that kind of positional
audio setup, and it'll be a fair deathmatch. :)

I don't see many people going out of their way to make games that are
accessible, but if we can help make it easier for them to do it, that
should exist somewhere. Sik says it can't really go somewhere other
than SDL, and knowing a little about accessibility toolkits (though
not a lot admittedly), he's right.

Some people on this list already know that I am legally blind. I
don't actually need speech output in anything really, but I often use
it anyway to save on eyestrain to read long posts and whatnot. In
any game I can configure the font size, I never even worry about it.
And the fact is that there are a whole lot of games I would LOVE to
play if I could read the damned in-game text. Fallout, *craft, you
name it.

But I can't. Because ultimately I'm legally blind. I'm never going
to be able to read small print in a game, certainly not real-time.
It's one of the things on a list that's growing shorter all the time.
In the past 20 years, I've been able to drive cars, shoot guns, and
use gadgets so visually-oriented that they don't actually have
physical buttons. But I still can't play Starcraft. Not because
Starcraft is unplayable, but because the text is too small and I
can't make it big enough to read without stopping the game and
pulling out a magnifier to slowly read my screen.

And then there's adding OS-abstracted events for "the user wants
<thing> read", "next/previous item", and "adjust control up/down"
(if you think those things belong in SDL_mixer, I want a crate of
whatever you're smoking). Speech output on every platform that isn't
Linux is completely different from PCM audio as well, so there's no
reason why SDL's implementation shouldn't include the remaining
OS-abstracted calls, such as maybe SDL_AccessSetAccessLabel,
SDL_AccessSpeakText, and SDL_AccessShutUp (I'm now lobbying for that
last one as the function name, even though I'm sure the idiom doesn't
fit other languages…) These usually work without even thinking about
opening a sound device, and that's probably true even under Linux if
it works at all. Most OSes now have the ability to speak a bit of
text without the user setting up any accessibility features, so the
SDL_AccessSpeakText function might be available by default on many
platforms. Linux ain't one of them.

Actually, it's not unheard of for the "spoken" text output by a
screen reader to go to a Braille terminal rather than a speech
synthesizer. Some special-purpose programs over the years output
different things on speech and Braille devices, but that's not
something SDL could ever even hope to do in an OS-independent way.
The software that can is pretty much written for embedded devices.

I hope you're not saying that accessibility shouldn't be done,
because I would find that deeply offensive in 2015. Most of the
civilized world has concluded by now that the disabled should not be
excluded as a general rule. That wasn't possible under DOS, but it
sure is possible today. If Apple can design a buttonless interface a
blind person can operate, and the NFB can design a car that we can
drive blindfolded (not one that drives itself, mind you, but one that
a blind person can drive), then Xerox can make copiers that we can
use, Panasonic can design microwaves we can operate out of the box,
OSes can feature accessibility from the installation onward, and SDL
can provide a handful of functions to shove speech out to the OS and
pass the special keyboard or gesture commands back.

If you've got a problem with that, I suggest you might want to
migrate to the 21st century. Because the attitude that the disabled
aren't worth consideration or qualify as "bloat" or "cruft" is the
kind of thing that can and does result in very expensive lawsuits
with damage awards these days. And rightfully so, if filed under
those circumstances. You would not tolerate discrimination against
someone because of their skin color or sexual orientation or religion
anywhere else, so why the hell should we accept status as second
class citizens of the modern world?

Now, if the thought never crossed your mind, that's one thing. Or if
you can't figure out how to make something accessible, that's also
fine. There are things I don't know how to make accessible, and I'm
the blind dude on the mailing list. If I don't know how to do it
even conceptually, how can anyone else be expected to have the
answer? That's the biggest reason why I would argue that SDL's
accessibility support would have to be thin, BTW—I can't imagine how
else to implement it in an OS-agnostic way.

But when accessibility support becomes less about "didn't" or
"couldn't", and more about "won't" or even "shouldn't", you better
believe I start getting surly.


Quote:
Quote:
I mean, would it be better to make it a separate library (maybe with a
bridge to SDL if one is really needed)?

Courts in the US have occasionally (it doesn't actually come up often,
from what I understand) ruled that a law that was intended to require
business owners to be handicapped-accessible ALSO applies to web pages
(I understand that one of the major retailers got hit with this... and
lost). It's very easy to infer that it's binding on software in
general, thus it's something that should be supported.

Moving it into a separate library would add unjustified complexity
for programmers.

MANY websites are only "mostly" accessible. If you can basically
make it work more or less, even if it's not easy, you don't have a
leg to stand on to sue.

However, if you are a public commercial enterprise, and it is
actually impossible to access the checkout button of your website
without being able to see and click on it, there's a problem. If we
then approach you with the problem and offer you the code to fix it,
and you REFUSE… Ask Target how that worked out when we sued them.
(Hint: They lost.)

Didn't or couldn't vs won't or shouldn't. Target argued that online
shopping was only for able-bodied people. The disabled could just
walk into their stores if they had a problem with the website. And
they shouldn't have to go and make their buttons clickable just
because some disabled people couldn't use their broken javascript.

That could have cost Target dearly, if we wanted to make it an
expensive lesson. But all we asked was that they fix it. Cost them
their web developers' time to implement the fix and some court costs.

Likewise, we asked Apple to make the iPod accessible and they said
there weren't enough blind people out there who listened to music for
them to worry about it. We educated them as to the depth of their
error in thinking. Again, it could've been a very expensive lesson
for them, but we went after fixing it more than a payday. And they
implemented the fix we asked for (spoken names of songs as m4a files,
along with the menus) because it actually was an easy fix.

But they also took the lesson to heart. The next iPhone had a screen
reader that was revolutionary. Webkit went from zero to accessible
in one major OS revision. Apple improved their magnifier and created
VoiceOver on the Mac. And Apple's accessibility push was so complete
and profound that it literally forced Microsoft to do the same to
Windows 8—it no longer costs blind people $1000 on top of the price
of a computer for the privilege of being able to use it. (I wasn't
involved in the Apple lawsuit at all actually, but I approve of the
outcome most assuredly!)

Including a few hooks for the OS's own accessibility features won't
make video games accessible to anyone. But putting the ability for
game developers to do it into games will hopefully encourage people
to at least consider it. After all, DOS couldn't do unicode either
and SDL now does that exceptionally well. And some day, I'll figure
out how it works and start using it. Because it's worth doing for
the people who need it.

Quote:
Quote:
Also, does the functionality really need to be integrated into SDL?
Can we keep them separate?

Is it really needed to keep it out of SDL? No, it doesn't have to be kept out.
Can we make them integrated? Yes, we can make them integrated.

It's already been discussed that SDL fundamentally needs to be
changed to make accessibility possible, library or not. But as the
support required entails translating some system events to SDL events
and providing a wrapper around a system that fundamentally gets
passed a string when appropriate, I'd say it's just as important as
supporting unicode, and for just the same reasons. You COULD put
that in a library. You shouldn't though.


Quote:
Quote:
I'm just afraid that one day SDL might become one big monolithic platform
that handles everything, even if only parts of it are really used in most cases.

This is not an appropriate cause for fear, but instead for some
self-analysis. Many people will automatically have different ideas
about what should and what should not be in SDL. I think that textured
triangles (and maybe a batching system) should be in it. This is not
because you CAN'T do without them, but instead because those two
features allow both SDL and satellite libraries to do their jobs much
better.

Actually, I have interest in being able to backend SDL into libretro
for a few things, which largely involves being able to gut large
components from SDL as it is normally compiled/installed and build a
small one-target library. Probably a static one at that.

That would seem to run cross-purposes to things like adding lots of
features like accessibility, but so much of SDL is modular and the
modules don't often have a huge degree of interdependency. That
doesn't mean we shouldn't exercise some stewardship over what does
and doesn't go into the library, but it does mean that some things
should go into the library because they belong there, even if someone
else might find them to be unnecessary at this time. The renderer
for 2D games and the GameController API are examples I've cited of
this recently. The one is totally irrelevant to any modern 3D title,
and the other already is a helper lib that was just stuck in the
trunk because Valve wanted it for Steam. And today nobody would
really argue that either thing didn't belong in SDL.


Quote:
Extending SDL isn't heresy; it simply needs to be restrained.

Adding a full GUI system? THAT would be taking things a touch too far
(we already have graphics, so the support that a gui satellite library
needs is already fully implemented).

Not only that, but GUI is such a nebulous concept that people's needs
are going to be wildly different. The GUI I would need for a game's
menus is going to be a lot more simplistic than you might want for a
3D modeling program. It doesn't even make sense to implement one in
terms of the other most of the time.


Quote:
SDL doesn't do "everything", and won't with this extension either.
Now, if SDL directly integrated the satellite libraries? THAT would be
"doing everything".

What SDL is *SUPPOSED* to do is act as an abstraction layer, a
"quasi-OS" that provides you with a generic API for things that would
otherwise require entirely platform-specific code. This is what SDL 1
was created for, and this is what SDL 2 is designed for. This is why
SDL 2 allows you to specify your own OpenGL library, but doesn't
actually implement one itself: that bit's already abstract, the
problem is in the initialization.

I don't see it as any way related to an OS. I see it as a way to not
care about an OS at all, FWIW. General rule in my mind is that
nothing outside of SDL should need to know about things like that.
In practice there will likely be some, but it should be limited.


Quote:
"Does it make more sense combined or separate?" is the question that
should actually be asked, because the ones you listed express the
implication that the correct answer is "separate", regardless of
reality.

I see the following possibilities for any thing you might do:

1. It should not be done anywhere.
2. It should be done outside of SDL.
3. It should be #2, but SDL needs to be enhanced so it can be.
4. It should be part of SDL itself.

Normally, any public function in SDL makes its way into your program
via SDL.h. I can see that being otherwise for certain more internal
bits (say of the renderer) which are frozen for the current ABI and
exported so that you can extend SDL from the outside, but that aren't
really intended for use by most programs.

I dunno if that's a good idea, but it's one that is rattling around
in my head.

Joseph


Outputting text to accessibility tools
Sik


Joined: 26 Nov 2011
Posts: 905
Damn, this took a while to read x_x; (that I'm doing other stuff at
the same time isn't helping matters)

2015-01-02 19:07 GMT-03:00, T. Joseph Carter:
Quote:
SDL_AccessShutUp

Hah! The only problem is that the only time you'd want to explicitly
shut up the screen reader is if the text is gone in the first place,
so speaking an empty string may do the job as well. (and when it's the
user who wants to make the screen reader shut up, that's the screen
reader's job, not the program's)

I suppose that even then it wouldn't hurt, even if it ends up as
just a wrapper function (could help with code clarity, maybe?).

Quote:
These usually work without even thinking about
opening a sound device, and probably that's true even under Linux if
it works at all. Most OSes now have the ability to speak a bit of
text without the user setting up any accessibility features, so the
SDL_AccessSpeakText function might be available by default on many
platforms. Linux ain't one of them.

Linux has Speech Dispatcher, and it's installed by default at least in
the case of Ubuntu (since Orca needs to make use of it), though I
gotta admit, the default speech engine leaves a lot to be desired...
but yeah it's there. And yeah, it works without having SDL initialize
the sound (the daemon is a separate process, after all...).
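For reference, talking to the Speech Dispatcher daemon from C is only a few calls through libspeechd (link with -lspeechd). This fragment uses the real client API, though it's only an illustration, not how SDL would necessarily wrap it:

```c
#include <libspeechd.h>
#include <stdio.h>

int main(void)
{
    /* Connect to the speech-dispatcher daemon. This process opens no
       audio device of its own; the daemon does the talking. */
    SPDConnection *conn = spd_open("mygame", NULL, NULL, SPD_MODE_SINGLE);
    if (conn == NULL) {
        fprintf(stderr, "could not connect to speech-dispatcher\n");
        return 1;
    }

    /* Queue a message; SPD_TEXT is the normal priority level. */
    spd_say(conn, SPD_TEXT, "You have 3 lives left.");

    /* spd_cancel() is the "shut up" operation. */
    spd_cancel(conn);

    spd_close(conn);
    return 0;
}
```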

Quote:
Actually, it's not unheard of for the "spoken" text output by a
screen reader to go to a Braille terminal rather than a speech
synthesizer.

This is the main reason why I wasn't happy with SAPI and Speech
Dispatcher and instead wanted a way to ensure text went to screen
readers (the other issue being that they don't follow screen reader
settings which is guaranteed to infuriate users).

Quote:
Some special-purpose programs over the years output
different things on speech and Braille devices, but that's not
something SDL could ever even hope to do in an OS-independent way.
The software that can is pretty much written for embedded devices.

Yeah, I think the only way to tell for sure is to talk to the screen
reader directly which isn't feasible without using their proprietary
APIs (and not all of them provide one, either). I'd say that this is
most likely low priority for now anyway, let's focus on the most
important aspect i.e. outputting text in the first place.
_______________________________________________
SDL mailing list

http://lists.libsdl.org/listinfo.cgi/sdl-libsdl.org
Outputting text to accessibility tools
Joseph Carter


Joined: 20 Sep 2013
Posts: 279
On Sat, Jan 03, 2015 at 07:26:21AM -0300, Sik the hedgehog wrote:
Quote:
Damn, this took a while to read x_x; (that I'm doing other stuff at
the same time isn't helping matters)

The problem with big replies is that they tend to generate big
replies themselves.


Quote:
2015-01-02 19:07 GMT-03:00, T. Joseph Carter:
Quote:
SDL_AccessShutUp

Hah! The only problem is that the only time you'd want to explicitly
shut up the screen reader is if the text is gone in the first place,
so speaking an empty string may do the job as well. (and when it's the
user wants to make the screen reader shut up that's the screen
reader's job, not the program's)

I suppose that even then it wouldn't hurt, even if it ends up as
just a wrapper function (could help with code clarity, maybe?).

Trust me, the user of the screen reader will want to shut it up all
the time. Probably they have a system-wide keybinding for this, but
if your stuff somehow overrides that or is self-voicing, probably the
function will be wanted.

Quote:

Quote:
These usually work without even thinking about
opening a sound device, and probably that's true even under Linux if
it works at all. Most OSes now have the ability to speak a bit of
text without the user setting up any accessibility features, so the
SDL_AccessSpeakText function might be available by default on many
platforms. Linux ain't one of them.

Linux has Speech Dispatcher, and it's installed by default at least in
the case of Ubuntu (since Orca needs to make use of it), though I
gotta admit, the default speech engine leaves a lot to be desired...
but yeah it's there. And yeah, it works without having SDL initialize
the sound (the daemon is a separate process, after all...).

Speech on Linux is simply godawful. If you don't have a license for
something like a Nuance engine that sounds human, you have academic
research stuff, optimized versions of the same, and old-school
hardware speech chips if you can find one. And you can, usually the
DoubleTalk, which sounds like DoubleSh*t. Actually, going along with
the ability to shut the thing up is the ability to generate very fast
speech without swallowing syllables or even just phonemes and
morphemes. Which is a fancy way of saying that you need your speech
engine to be able to blather like an auctioneer and still understand
what the thing is saying. That's usually more important than natural
prosody or human characteristics like the (IMO kind of weird)
taking-a-breath sound made by Apple's Alex synth.

The best speech I've ever heard out of a speech chip was actually one
local guy's JAWS for DOS setup back in the day for the Accent synth.
That synth is based on the same physical speech chip as was used in
the original Speak-n-Spell. It frankly didn't sound much better with
default settings. But he'd managed to build a voice profile that
sounded great to me, a guy who otherwise preferred the Keynote Gold
for precisely the reason just stated: When sped up, you hear every
phoneme and morpheme distinctly, even if the entire speech engine
sounds like a guy trying to speak very clearly while holding his
nose. Ahh, those were the daze.


Quote:
Quote:
Actually, it's not unheard of for the "spoken" text output by a
screen reader to go to a Braille terminal rather than a speech
synthesizer.

This is the main reason why I wasn't happy with SAPI and Speech
Dispatcher and instead wanted a way to ensure text went to screen
readers (the other issue being that they don't follow screen reader
settings which is guaranteed to infuriate users).

Also it's quite likely that a visually impaired user will have
configured their screen reader for optimal reading performance for
their needs b u t l e a v e t h e d e f a u l t s l o w ,
a n n o y i n g s e t t i n g for the default system voice. I
haven't done that because I tend to use the default system voice with
a "read this" key command far more often than an actual screen
reader.


Quote:
Quote:
Some special-purpose programs over the years output
different things on speech and Braille devices, but that's not
something SDL could ever even hope to do in an OS-independent way.
The software that can is pretty much written for embedded devices.

Yeah, I think the only way to tell for sure is to talk to the screen
reader directly which isn't feasible without using their proprietary
APIs (and not all of them provide one, either). I'd say that this is
most likely low priority for now anyway, let's focus on the most
important aspect i.e. outputting text in the first place.

If you talk to T.V. Raman about emacspeak and start talking about
screen readers, he'll start making fun of you. Emacspeak isn't a
screen reader. Rather, it is speech access to the internal state of
emacs, which of course is a fully functional environment you never
need to leave anyway.

Take for example my tmux status bar, reproduced below in a squished
format:

"[0] 0:Python- 1:mutt* 2:bash Sun Jan 04 15:28 "

How does a screen reader interpret that? Usually by trying to guess.
It has to know what those things mean. Is "Sun" the word sun, or is
it intended to be an abbreviation for Sunday? A screen reader must
expend effort trying to figure that out. If this were emacspeak or a
similar embedded environment that was self-voicing, it would know
what those numbers and punctuation marks on the left mean, and that
the thing on the right was a date. It could thus read each
appropriately.

What's my current window? "Window one." Or more verbosely, "Window
one, currently running mutt." The datetime would be read as "Sunday,
January fourth, fifteen twenty-eight.", or, more tersely, "Fifteen
twenty-eight.". Something to keep in mind for your SDL apps is that
if you're sending stuff to the screen reader yourself, rather than
having it try to scrape what you're sticking in the window, it
doesn't have to say what it does on the screen.

The thing to remember is that visual access to a screen is inherently
random-access, but speech or even Braille displays tied in to a
screen reader are inherently serial access. Hence the value of the
shut up keystroke. If you need to know it's Sunday, the rest of the
datetime is irrelevant and you've got stuff to do.

Just some advice for how SDL's hopefully soon-to-be-developed
accessibility features should be used once they're available, from a
legally blind user who was designing accessible UX back before
Windows 95 was actually a thing. For experienced users, it's all
about what do I really need to know, and how do I most quickly get
that information. The elderly (or those using speech to supplement
decaying vision) and the inexperienced people still trying to think
visually want more parity between a spoken and a displayed interface.

Used to be we had pretty good access (custom-designed solutions), and
acceptable screen scraping of known DOS apps. Then as things got
graphical our access became less perfect. Nowadays with actual
access labels on controls and views, we're beginning to regain the
access we had when the interfaces were all custom-designed for our
benefit. It's kind of cool, actually, and I'm excited to see SDL
benefiting from the modern push in the hopefully near future.

Joseph

_______________________________________________
SDL mailing list

http://lists.libsdl.org/listinfo.cgi/sdl-libsdl.org
Outputting text to accessibility tools
Sik


Joined: 26 Nov 2011
Posts: 905
2015-01-04 20:42 GMT-03:00, T. Joseph Carter:
Quote:
Trust me, the user of the screen reader will want to shut it up all
the time. Probably they have a system-wide keybinding for this, but
if your stuff somehow overrides that or is self-voicing, probably the
function will be wanted.

What I meant is that you could probably achieve the same effect by
just calling SDL_AccessSpeakText(""), rendering SDL_AccessShutUp()
kind of pointless.

Quote:
How does a screen reader interpret that? Usually by trying to guess.
It has to know what those things mean. Is "Sun" the word sun, or is
it intended to be an abbreviation for Sunday? A screen reader must
expend effort trying to figure that out. If this were emacspeak or a
similar embedded environment that was self-voicing, it would know
what those numbers and punctuation marks on the left mean, and that
the thing on the right was a date. It could thus read each
appropriately.

Oh, I thought you were talking about distinguishing between speech and
braille output, to account for the inherent differences in the output
medium.

But yeah, isn't that the whole point of having separate accessibility
text? The program displays one thing on the screen, but the tools see
something else which is more appropriate. Kind of like how the alt
text works with the img element in HTML (at least when used properly).
This would already come as-is with the proposed API; the bigger
problem would be educating developers to understand how to use it
properly - which I imagine we shouldn't have a problem with, right?
_______________________________________________
SDL mailing list

http://lists.libsdl.org/listinfo.cgi/sdl-libsdl.org
Outputting text to accessibility tools
Joseph Carter


Joined: 20 Sep 2013
Posts: 279
On Mon, Jan 05, 2015 at 04:16:45AM -0300, Sik the hedgehog wrote:
Quote:
2015-01-04 20:42 GMT-03:00, T. Joseph Carter:
Quote:
Trust me, the user of the screen reader will want to shut it up all
the time. Probably they have a system-wide keybinding for this, but
if your stuff somehow overrides that or is self-voicing, probably the
function will be wanted.

What I meant is that you could probably achieve the same effect by
just calling SDL_AccessSpeakText(""), rendering SDL_AccessShutUp()
kind of pointless.

Perhaps. Some speech setups on Linux operate on a stream of text and
don't necessarily stop reading until they're done unless explicitly
told to. Of course, the screen reader should silence the synth before
feeding it something new that isn't explicitly part of an ongoing
stream.

Anything that doesn't behave as you describe should be considered
useless and broken, and that's beyond SDL's control.


Quote:
Quote:
How does a screen reader interpret that? Usually by trying to guess.
It has to know what those things mean. Is "Sun" the word sun, or is
it intended to be an abbreviation for Sunday? A screen reader must
expend effort trying to figure that out. If this were emacspeak or a
similar embedded environment that was self-voicing, it would know
what those numbers and punctuation marks on the left mean, and that
the thing on the right was a date. It could thus read each
appropriately.

Oh, I thought you were talking about distinguishing between speech and
braille output, to account for the inherent differences in the output
medium.

But yeah, isn't that the whole point of having separate accessibility
text? The program displays one thing on the screen, but the tools see
something else which is more appropriate. Kind of like how the alt
text works with the img element in HTML (at least when used properly).
This would already come as-is with the proposed API; the bigger
problem would be educating developers to understand how to use it
properly - which I imagine we shouldn't have a problem with, right?

Indeed that is the point of the accessibility text. It just didn't
used to be how screen readers worked. That it now is more
intelligent is a recent trend which can probably be blamed on Apple
doing it right. The Microsoft way was to simply make it so a screen
reader had access to text content of everything that went on the
screen and let the screen readers employ deep hacks the equivalent of
LD_PRELOADing stuff to be able to get something more intelligent out
of insignificant programs nobody uses like Microsoft Word.

Consequently, Word crashed a lot for some reason when run under a
screen reader like JAWS until basically the latest version under
Windows 8. That and the $1000 price tag for the screen reader from
Freedom Science Fiction was just the cost of being blind, unless you
thought you could slum it with a freebie like NVDA (which didn't even
exist back then) or the "affordable" option of WindowEyes (which most
argued wasn't nearly as good…)

Or System Access! Only a few hundred dollars, and it kinda worked
pretty well for the things it actually supported at all… yeah.

But now I've mostly left any semblance of a topic behind.

Joseph

_______________________________________________
SDL mailing list

http://lists.libsdl.org/listinfo.cgi/sdl-libsdl.org
Outputting text to accessibility tools
Sik


Joined: 26 Nov 2011
Posts: 905
OK so now that things seem to have calmed down... Can we go forward
with this? And can we summarize what we have figured out about API
support? I got lost on that already.

In any case the biggest problem right now is how to implement this
from the SDL API's viewpoint. Should it be its own subsystem, part of
the video subsystem or something else? How should it be enabled?
_______________________________________________
SDL mailing list

http://lists.libsdl.org/listinfo.cgi/sdl-libsdl.org
Outputting text to accessibility tools
Joseph Carter


Joined: 20 Sep 2013
Posts: 279
Your idea, propose an API. Wink Helps if you include a sample implementation for at least one platform and have at least looked to confirm that it works in the others. Mac and Linux are the most obvious because the screen readers are free. Windows is possible with NVDA being free.

Again I cite the Game Controller API: Not the prettiest implementation possible, but working code trumps theoretical perfection.

Joseph
Sent via mobile

Quote:
On Jan 16, 2015, at 14:30, Sik the hedgehog wrote:

OK so now that things seem to have calmed down... Can we go forward
with this? And can we summarize what we have figured out about API
support? I got lost on that already.

In any case the biggest problem right now is how to implement this
from the SDL API's viewpoint. Should it be its own subsystem, part of
the video subsystem or something else? How should it be enabled?
_______________________________________________
SDL mailing list

http://lists.libsdl.org/listinfo.cgi/sdl-libsdl.org

_______________________________________________
SDL mailing list

http://lists.libsdl.org/listinfo.cgi/sdl-libsdl.org
Outputting text to accessibility tools
Sik


Joined: 26 Nov 2011
Posts: 905
2015-01-18 1:00 GMT-03:00, T. Joseph Carter:
Quote:
Your idea, propose an API. Wink Helps if you include a sample implementation
for at least one platform and have at least looked to confirm that it works
in the others. Mac and Linux are the most obvious because the screen readers
are free. Windows is possible with NVDA being free.

Again I cite the Game Controller API: Not the prettiest implementation
possible, but working code trumps theoretical perfection.

Easier said than done since I think this is the first time an entire
API would bleed into other parts of SDL (the Game Controller API
didn't, since it's built on top of the Joystick API rather than
integrating into it). I'd need to implement an entire API first to
have a sample implementation, and if I get a detail wrong that would
make it unusable on some system then I'd have to ditch *all* of it.

Really the biggest problem right now is how it should interact with
the Video API (more specifically, to be able to pass information to
screen readers through the window). It'd be tempting to just include
it into it, but then people would be right to ask why it should know
about speech engines too.

Find out a decent way to get out of that and we can proceed to make
the API. Unless people think it's OK to just make it part of the Video
API (which would solve most of the dilemma), that is.
_______________________________________________
SDL mailing list

http://lists.libsdl.org/listinfo.cgi/sdl-libsdl.org
Outputting text to accessibility tools
Jared Maddox
Guest

Quote:
Date: Fri, 16 Jan 2015 19:30:19 -0300
From: Sik the hedgehog
To: SDL Development List
Subject: Re: [SDL] Outputting text to accessibility tools
Message-ID:

Content-Type: text/plain; charset=UTF-8

OK so now that things seem to have calmed down... Can we go forward
with this? And can we summarize what we have figured out about API
support? I got lost on that already.

In any case the biggest problem right now is how to implement this
from the SDL API's viewpoint. Should it be its own subsystem, part of
the video subsystem or something else? How should it be enabled?


Is there any plausible case where the accessibility stuff might not
work, AND we could definitely detect it? If so then I suggest a
subsystem, so that we can do this:

if( SDL_InitSubSystem( SDL_INIT_ACCESSIBILITY ) < 0 )
{
    /* Error message here. */
}

Past that, as T. Joseph Carter said: you're the one making the
proposal, the rest of us are just useful for feedback.
_______________________________________________
SDL mailing list

http://lists.libsdl.org/listinfo.cgi/sdl-libsdl.org
Outputting text to accessibility tools
Sik


Joined: 26 Nov 2011
Posts: 905
2015-01-19 20:03 GMT-03:00, Jared Maddox:
Quote:
Is there any plausible case where the accessibility stuff might not
work, AND we could definitely detect it? If so then I suggest a
subsystem, so that we can do this:

if( SDL_InitSubSystem( SDL_INIT_ACCESSIBILITY ) < 0 )
{
    /* Error message here. */
}

Yeah, that seems the logical solution (and yes, it can fail to
initialize), and honestly the approach I'd take. My gripe is how to
make it interact with the video subsystem for the cases where it needs
to output using the window (i.e. for screen readers that ask the GUI
what to display for the focused object).

If anybody has any ideas on how to handle that I'll see if I can get
to work on it.
_______________________________________________
SDL mailing list

http://lists.libsdl.org/listinfo.cgi/sdl-libsdl.org
Outputting text to accessibility tools
Jared Maddox
Guest

Quote:
Date: Mon, 19 Jan 2015 22:13:08 -0300
From: Sik the hedgehog
To: SDL Development List
Subject: Re: [SDL] Outputting text to accessibility tools
Message-ID:

Content-Type: text/plain; charset=UTF-8

2015-01-19 20:03 GMT-03:00, Jared Maddox:
Quote:
Is there any plausible case where the accessibility stuff might not
work, AND we could definitely detect it? If so then I suggest a
subsystem, so that we can do this:

if( SDL_InitSubSystem( SDL_INIT_ACCESSIBILITY ) < 0 )
{
    /* Error message here. */
}

Yeah, that seems the logical solution (and yes, it can fail to
initialize), and honestly the approach I'd take. My gripe is how to
make it interact with the video subsystem for the cases where it needs
to output using the window (i.e. for screen readers that ask the GUI
what to display for the focused object).

If anybody has any ideas on how to handle that I'll see if I can get
to work on it.


I'm not sure precisely where your struggle is, so I'm gonna engage in
a smidge of blind-fire.

How about defining a new structure for accessibility callbacks, and
adding a pointer to it into either the window structure or the
renderer structure? If you start it with a size_t and then just add
one or two function pointers (to return a string identifying the
"accessibility driver") then that would be a start, and you never go
wrong with providing some simple informational functions.

Remember that either the video or the events subsystem requires the
other, so this sort of dependency isn't an impediment. I assume that
you'll need to reserve some stuff in "video subsystem space", but as
long as the relevant code is stored in the video subsystem
implementation files I expect that there wouldn't be a problem.
_______________________________________________
SDL mailing list

http://lists.libsdl.org/listinfo.cgi/sdl-libsdl.org
Outputting text to accessibility tools
Sik


Joined: 26 Nov 2011
Posts: 905
Yeah, that's pretty much the problem here.

2015-01-20 20:15 GMT-03:00, Jared Maddox:
Quote:
Remember that either the video or the events subsystem requires the
other, so this sort of dependency isn't an impediment. I assume that
you'll need to reserve some stuff in "video subsystem space", but as
long as the relevant code is stored in the video subsystem
implementation files I expect that there wouldn't be a problem.

Hmmm, now that I look into it, SDL_PumpEvents calls SDL_GetVideoDevice
which returns a structure with *all* the functions related to the
video subsystem. Maybe I can try using this to communicate with the
video subsystem.

- - - - - - - - - -

Anyway, this aside: would it be OK if I make a mock-up of the API
implementing only some functionality to test? (before having anybody
trying to integrate it into SDL) Thinking about doing SAPI or
Speech-Dispatcher since those two are easy to implement without
touching SDL's internals (and I already have some code around for
them).
_______________________________________________
SDL mailing list

http://lists.libsdl.org/listinfo.cgi/sdl-libsdl.org
Outputting text to accessibility tools
Joseph Carter


Joined: 20 Sep 2013
Posts: 279
I sort of figured SAPI would be a good starting place.

Joseph

On Wed, Jan 21, 2015 at 10:31:37AM -0300, Sik the hedgehog wrote:
Quote:
Yeah, that's pretty much the problem here.

2015-01-20 20:15 GMT-03:00, Jared Maddox:
Quote:
Remember that either the video or the events subsystem requires the
other, so this sort of dependency isn't an impediment. I assume that
you'll need to reserve some stuff in "video subsystem space", but as
long as the relevant code is stored in the video subsystem
implementation files I expect that there wouldn't be a problem.

Hmmm, now that I look into it, SDL_PumpEvents calls SDL_GetVideoDevice
which returns a structure with *all* the functions related to the
video subsystem. Maybe I can try using this to communicate with the
video subsystem.

- - - - - - - - - -

Anyway, this aside: would it be OK if I make a mock-up of the API
implementing only some functionality to test? (before having anybody
trying to integrate it into SDL) Thinking about doing SAPI or
Speech-Dispatcher since those two are easy to implement without
touching SDL's internals (and I already have some code around for
them).
_______________________________________________
SDL mailing list

http://lists.libsdl.org/listinfo.cgi/sdl-libsdl.org

_______________________________________________
SDL mailing list

http://lists.libsdl.org/listinfo.cgi/sdl-libsdl.org
Outputting text to accessibility tools
Sik


Joined: 26 Nov 2011
Posts: 905
*totally not e-mailing to hide his laziness*

OK, after messing around a bit with this code I've decided that the
best option would be if this feature was implemented the same way
renderers are (rather than as a subsystem). This may cause a bit of
inconvenience with SAPI but I think it's easy to cope with.

So, first we'd have a new type: SDL_Reader (decided to call it reader
because let's face it, screen readers will be the main use of this
thing)

The API would be like this for now (I know there may be demand for
more functionality, but let's focus on the basics first):

- SDL_CreateReader(window)
- SDL_DestroyReader(reader)
- SDL_ReaderSpeak(reader, text)
- SDL_ReaderShutUp(reader)
- SDL_ReaderRepeat(reader)

The first two are self-explanatory. SDL_ReaderSpeak outputs the text
to the screen reader (speaks in speech engines, displays in braille
screens). SDL_ReaderShutUp clears the output (shuts up in speech
engines, clears in braille screens). SDL_ReaderRepeat is like Speak,
but it repeats the last text that was sent to that window.

Finally, there would be a hint to override the backend choice
(something like SDL_HINT_READER_DRIVER, by analogy with
SDL_HINT_RENDER_DRIVER). The list of drivers will of course change
over time, although this is what comes off the top of my head right
now:

- "sapi" SAPI 5.x (Windows)
- "speechd" Speech-dispatcher (Linux)
- "voiceover" VoiceOver (OSX, iOS)

Does this seem good for a start?
_______________________________________________
SDL mailing list

http://lists.libsdl.org/listinfo.cgi/sdl-libsdl.org