OSX and Retina
Trevor Powell
After further research, it appears that SDL's Retina behaviour on iOS is to ignore the issue, and simply to always deal with pixels, ignoring the "points" concept that underpins Cocoa layouts. That's very reasonable on a device where you have a single physical size for the display.
And I'm sort of leaning toward doing the same thing under OS X: patch SDL to convert everything it receives from Cocoa from points into pixels, and then work entirely in pixels internally. Every number the client code sees would then be expressed in terms of pixels, which would drastically simplify working with a high-density display. This would result in a few potential issues:

- Client code would have no way to know whether or not it's running on a retina display (except by making educated guesses based upon the resolution it's using). This might make it awkward to (for example) scale buttons or other interactive UI elements to an appropriate physical size on the screen.
- Low-resolution SDL games played in a window would appear in an extremely small window on retina screens, since resolution (and therefore window size) would be specified in pixels. (They should be fine when played fullscreen, though.)

Questions:

- Would it be worth adding API functions for exposing the density of a high-density display? (Client code could use this to address issue #1, above.) Alternately, adding this sort of API function would make it possible for us to continue to use points as the standard unit in OSX SDL, and have client code convert into real pixels when it needs to; this would keep us from needing to patch so much unit-conversion code into mouse/window/etc. code within SDL.
- Would it be worthwhile to add a video flag (SDL_HIGHDENSITYSURFACE, or somesuch) to enable/disable Retina support? Everything would continue to run using the usual "normal-density display" logic, and retina support would only be enabled if that flag was provided. (This would address issue #2, above.)
- High-density displays are actually a bit more complicated on a desktop device than on a mobile one, since we've got some nastier use-cases: multiple monitors with different densities, windows spanning two monitors, etc.
Not sure to what extent we want/need to worry about these corner-case situations, as long as they don't result in instability.
OSX and Retina
Brian Barnes
Trevor Powell wrote:
I'd vote for this, as I already depend on it for iOS. In dim3, every interface element is defined by the project creator at an absolute size in an absolute field that is scaled to the screen size, so elements work at any resolution and always fit the scale of any art behind them (i.e., the elements are set in a virtual window that is translated to the real window). If people deal with absolute sizes regardless of screen resolution, then they will run into very small elements, but they probably shouldn't be doing that in the first place.

[>] Brian

_______________________________________________
SDL mailing list
http://lists.libsdl.org/listinfo.cgi/sdl-libsdl.org
Re: OSX and Retina
Trevor Powell
I do exactly the same thing in my own projects; it does seem like the sensible way to work. But I can imagine other people who want to maintain interface elements at a particular physical size, regardless of the size of the screen. Just didn't want to completely discount that use case (or at least, I wanted to make my eventual patch's non-support for it explicit).

More experimentation is showing that we don't need to add a flag to enable/disable retina mode. OS X applications require a special key in their Info.plist file ("NSHighResolutionCapable") to specify that they support high-density rendering, so anyone who doesn't set that won't get high-density rendering even if SDL asks for it. Additionally, end-users can disable high-resolution rendering on any program by turning it off in the program's "Get Info" window. If the developer doesn't include that key in the Info.plist, or the user explicitly disables high-resolution rendering, then the program runs in 1-pixel-per-point compatibility mode anyhow. So no flag required.

The trickier part is that on a desktop machine we can wind up with different scaling factors on different monitors, or even different scaling factors on a single window, in the case of display mirroring. The net result of all this is that it's turning out to be a little tricky to generate a realistic list of available resolutions, as measured in pixels -- especially before the window has even been created. But I'm working on it.
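For reference, the Info.plist opt-in mentioned above looks like this (fragment only; the surrounding keys are the app's usual bundle metadata):

```xml
<!-- Inside the top-level <dict> of the app bundle's Info.plist -->
<key>NSHighResolutionCapable</key>
<true/>
```

Omit the key (or let the user tick "Open in Low Resolution" in Get Info) and the app runs in the 1-pixel-per-point compatibility mode described above.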
Re: OSX and Retina
RodrigoCard
I vote for an opt-in HINT for a Retina size, since that is the default behavior in OS X.

Something like: SDL_SetHint(SDL_APPLE_RETINA_DISPLAY). Also, it would be great if it worked the same way on iOS. I needed to patch SDL when trying to make a non-retina-size game; handling it manually is so much harder. So an opt-in hint would be a better option, IMHO.
Re: OSX and Retina
Trevor Powell
So I have SDL now working in retina-with-retina-enabled and non-retina modes (OpenGL only; I don't currently have test code to run against the 2D interface). This includes window and mouse handling (both captured and non-captured). Haven't yet fixed resize events.
And retina-with-retina-disabled is mostly working, too, except for one little gotcha: SDL_ListModes() finds the list of available resolutions by calling CGDisplayCopyAllDisplayModes(), and those results are all expressed in terms of raw device pixels, regardless of whether the application has been set as "do not support high resolution rendering". Which means that in non-high-resolution rendering mode, a MacBook Pro Retina reports supporting a 2880x1800 resolution, when in fact we can only create a window holding a 1440x900 resolution.

In theory, I can talk to the NSScreen to find the scaling factor between pixels and points, but in non-high-resolution-rendering mode that interface returns a 1:1 mapping between pixels and points, so that doesn't help us in this situation. There's also a Core Graphics interface (CGContextConvertRectToUserSpace()) which should be able to convert from device pixels to Cocoa points for the purpose of determining what size fullscreen windows we can make in non-high-resolution mode, but that call requires a CGContext object, and CGDisplayGetDrawingContext() is returning NULL for me. I'm a neophyte with Quartz, so I assume that I just need to dig deeper into the documentation.
Re: OSX and Retina
Trevor Powell
This is really kicking my butt.
I'm having no luck at all figuring out how to convert from the display resolutions returned by CGDisplayCopyAllDisplayModes() into the window sizes to be created through Cocoa, when retina support is disabled. Quartz calls don't adjust for the scaling factor, NSScreen pretends that there is no scaling factor (when the application's retina support is disabled), etc.

Incidentally, this issue is already present in the current stable release of SDL 1.2, which has no retina support (and probably also in SDL 2, if it has been implemented in a similar manner). It gets the available screen sizes by asking Quartz, and then tries to use one by creating a window through Cocoa. It just blindly assumes that the two OSX libraries are using the same units, which isn't a valid assumption on any device with a high-density display. That means SDL_ListModes() reports resolutions which then don't work correctly when selected via SDL_SetVideoMode().

Maybe I should file this as a bug, to get more eyes on the problem? This presumably is already affecting every SDL-based program running on an OSX machine with a retina display.