The SDL forums have moved to discourse.libsdl.org.
This is just a read-only archive of the previous forums, to keep old links working.


SDL 2 2D hardware acceleration on Raspberry Pi
Hop


Joined: 07 Oct 2015
Posts: 3
Is it possible to use SDL2 with 2D hardware accelerated rendering on Raspberry Pi?

If 2D hardware accelerated rendering is possible on the device, how do you check to see that your application is using hardware acceleration? i.e. How do you ensure that SDL2 and the application are set up correctly to use OpenGL ES 2.0?

I have built SDL2 from source, configured as described in this post: https://solarianprogrammer.com/2015/01/22/raspberry-pi-raspbian-getting-started-sdl-2/

In short:

Code:

(Copy Fresh Raspbian Wheezy image to SD card)
(Boot to command prompt - no X)
(Use raspi-config to increase the GPU memory to 128MB)
cd ~
wget https://www.libsdl.org/release/SDL2-2.0.3.tar.gz
tar zxvf SDL2-2.0.3.tar.gz
cd SDL2-2.0.3 && mkdir build && cd build
../configure --disable-pulseaudio --disable-esd --disable-video-mir --disable-video-wayland --disable-video-x11 --disable-video-opengl
make -j 4
sudo make install


I believe the configure step should prevent SDL from using a software implementation of OpenGL and force it to use an OpenGL ES backend, which the hardware supports. I hope this means that any 2D render call (e.g. SDL_RenderDrawRect) goes through the OpenGL ES API, and therefore uses the GPU and is hardware accelerated.

When configure runs it reports that it has found the OpenGL ES v1 and v2 headers, and lists the video drivers as dummy, opengl_es1 and opengl_es2.

I'm using a very simple Tetris clone as a test case. The application's naive render function draws each of the 200 32x32-pixel squares that make up the board in turn to a 1280x720 buffer. Each square requires two calls to SDL_SetRenderDrawColor, one call to SDL_RenderDrawRect and one call to SDL_RenderFillRect. It is initialised like so:

Code:

    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 2);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 0);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_ES);

    window = SDL_CreateWindow(window_name, SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                                          window_width, window_height, SDL_WINDOW_SHOWN | SDL_WINDOW_OPENGL);

    renderer = SDL_CreateRenderer( window, -1, SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC );


and built with:

Code:
 -L /opt/vc/lib -lEGL -lGLESv2 -I /opt/vc/include/


This runs at about 15 FPS, which strikes me as extremely slow and makes me think that software rendering is in use.

Is there any other set-up/initialisation required in the application to ensure that the GLES backend is used?

Is there any way to check at runtime that the renderer is using hardware acceleration?

To be clear: I don't actually want to use the GL API at all, I just want to use the SDL 2D Render API, ensuring that the rendering happens in hardware and not software.

Many thanks!
gepatto


Joined: 14 Sep 2015
Posts: 6
Location: Breda, The Netherlands
Hi Hop,

I have been working on using sdl2 on a raspberry pi with the OpenFL project.
I managed to get a working build from the 2.0.4 branch, and I'm using the same configure options as you.
At 1080p I can reach 30fps.

I had to make changes to several files to get a working mouse cursor because I'm not running x11.

Maybe you can point me to the tetris example you are using so that I can test it on my side.
Hop


Joined: 07 Oct 2015
Posts: 3
I managed to get a little further with this (I have already posted this response over on http://raspberrypi.stackexchange.com/questions/37096/sdl2-performance/37120#37120).

> Is it possible to use SDL2 with 2D hardware accelerated rendering on Raspberry Pi?

> Is there any way to check at runtime that the renderer is using hardware acceleration?

Yes and yes. It seems that this method does produce an executable that uses SDL2 with an OpenGL ES 2 backend. This can be confirmed by adding some code:

Code:

    #include <GLES2/gl2.h>
    ...
    static void SetGLAttribute(SDL_GLattr attr, int value)
    {
       if( SDL_GL_SetAttribute(attr, value) != 0 )
          fprintf( stderr, "SDL_GL_SetAttribute failed: %s\n", SDL_GetError() );
    }

    #ifdef GL_ES_VERSION_2_0
    static void PrintGLString(GLenum name)
    {
       const GLubyte* ret = glGetString(name);
       if( ret == NULL )
          fprintf( stderr, "Failed to get GL string: %u\n", name );
       else
          printf( "%s\n", ret );
    }
    #endif // GL_ES_VERSION_2_0
    ...
   // Let's see if we can use OpenGL ES 2 on Raspberry Pi
   SDL_GLContext gl_context = SDL_GL_CreateContext(m_pWindow);
   printf("GL_VERSION: ");
   PrintGLString(GL_VERSION);
   printf("GL_RENDERER: ");
   PrintGLString(GL_RENDERER);
   printf("GL_SHADING_LANGUAGE_VERSION: ");
   PrintGLString(GL_SHADING_LANGUAGE_VERSION);
   printf("GL_EXTENSIONS: ");
   PrintGLString(GL_EXTENSIONS);
   SDL_GL_DeleteContext(gl_context);
    ...
    static void PrintRendererInfo(SDL_RendererInfo& rendererInfo)
    {
       printf( "Renderer: %s software=%d accelerated=%d, presentvsync=%d targettexture=%d\n",
         rendererInfo.name,
         (rendererInfo.flags & SDL_RENDERER_SOFTWARE) != 0,
         (rendererInfo.flags & SDL_RENDERER_ACCELERATED) != 0,
         (rendererInfo.flags & SDL_RENDERER_PRESENTVSYNC) != 0,
         (rendererInfo.flags & SDL_RENDERER_TARGETTEXTURE) != 0 );
    }
    ...
   int numRenderDrivers = SDL_GetNumRenderDrivers();
   printf( "%d render drivers:\n", numRenderDrivers );
   for( int i = 0; i < numRenderDrivers; ++i )
   {
      SDL_RendererInfo rendererInfo;
      SDL_GetRenderDriverInfo(i, &rendererInfo);
      printf( "%d ", i );
      PrintRendererInfo(rendererInfo);
   }

   Uint32 rendererFlags = SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC;
   m_pSdlRenderer = SDL_CreateRenderer( m_pWindow, -1, rendererFlags );
   if(!m_pSdlRenderer)
   {
      fprintf( stderr, "SDL_CreateRenderer failed: %s\n", SDL_GetError() );
   }

   SDL_RendererInfo rendererInfo;
   if( SDL_GetRendererInfo(m_pSdlRenderer, &rendererInfo) != 0 )
   {
      fprintf( stderr, "SDL_GetRendererInfo failed: %s\n", SDL_GetError() );
   }
   printf( "Created renderer:\n" );
   PrintRendererInfo(rendererInfo);


This produces the following output:
Code:

    GL_VERSION: OpenGL ES 2.0
    GL_RENDERER: VideoCore IV HW
    GL_SHADING_LANGUAGE_VERSION: OpenGL ES GLSL ES 1.00
    GL_EXTENSIONS: GL_OES_compressed_ETC1_RGB8_texture GL_OES_compressed_paletted_texture GL_OES_texture_npot GL_OES_depth24 GL_OES_vertex_half_float GL_OES_EGL_image GL_OES_EGL_image_external GL_EXT_discard_framebuffer GL_OES_rgb8_rgba8 GL_OES_depth32 GL_OES_mapbuffer GL_EXT_texture_format_BGRA8888 GL_APPLE_rgb_422 GL_EXT_debug_marker
    3 render drivers:
    0 Renderer: opengles2 software=0 accelerated=1, presentvsync=1 targettexture=1
    1 Renderer: opengles software=0 accelerated=1, presentvsync=1 targettexture=0
    2 Renderer: software software=1 accelerated=0, presentvsync=0 targettexture=1
    Created renderer:
    Renderer: opengles2 software=0 accelerated=1, presentvsync=1 targettexture=1

It seems that the performance of the application is limited by the number of SDL_RenderDraw*Rect calls. This is a little disappointing.

The application in question is at https://github.com/howprice/sdl2-tetris