The SDL forums have moved to discourse.libsdl.org.
This is just a read-only archive of the previous forums, to keep old links working.


pixel from SDL_Texture
alejandroicom


Joined: 27 Aug 2014
Posts: 4
Can I obtain the pixels from an SDL_Texture?

I am using SDL 2.0. I have several SDL_Textures, and I use SDL_RenderCopy(sdlRenderer, sdlTexture, NULL, NULL); to compose them all and present them on screen. Now I need to obtain the pixels of the last frame (the frame that I presented on screen).
MrTAToad


Joined: 13 Feb 2014
Posts: 205
Location: Chichester, England
Unfortunately not - you can only get pixel data from a surface...

What I do in my engine is load the graphic, keep it around as a surface, and also convert it to a texture - that way I can access the pixel data and still do hardware rendering.
Naith


Joined: 03 Jul 2014
Posts: 158
If I understand it correctly, it's possible to read pixels from a texture using SDL_LockTexture: it gives you the pixel data as a void pointer, which you can cast to a Uint32 pointer and then read the value of each pixel.

Check this tutorial for more information: http://lazyfoo.net/tutorials/SDL/40_texture_manipulation/index.php
Sik


Joined: 26 Nov 2011
Posts: 905
Yeah, but reading from textures can absolutely destroy rendering
performance, so avoid that unless you're absolutely sure about what
you're doing. Surfaces are to be used CPU-side, textures are to be
used GPU-side.
_______________________________________________
SDL mailing list

http://lists.libsdl.org/listinfo.cgi/sdl-libsdl.org
Pallav Nawani


Joined: 19 May 2011
Posts: 122
Location: Dehradun, India
Streaming textures already maintain a copy of the texture data as an SDL_Surface; SDL_LockTexture only gives access to that. Unfortunately, under OpenGL that copy is usually initialised with zeroes. The wiki page https://wiki.libsdl.org/SDL_LockTexture clearly states in the Remarks section that:


"As an optimization, the pixels made available for editing don't necessarily contain the old texture data. This is a write-only operation, and if you need to keep a copy of the texture data you should do that at the application level."


Personally, I feel that if SDL is already maintaining a RAM buffer, it should make sure that buffer contains the old texture data. If we have to maintain the buffer ourselves, then streaming textures are useless, which is why I don't use them at all.

Pallav Nawani
IronCode Gaming Private Limited
Website: http://www.ironcode.com
Twitter:  http://twitter.com/Ironcode_Gaming
Facebook: http://www.facebook.com/Ironcode.Gaming
Mobile: 9997478768



On Sat, Aug 30, 2014 at 5:59 AM, Naith wrote:

Mason Wheeler
Guest

You know, every time this topic comes up, I see that same warning come out again and again, and nothing to back it up.


How great an impact does it have on rendering performance? What is the expected data transfer rate from GPU->main memory like? How does it differ between different video card models?



Also, does anyone ever stop to consider, before saying something like this, that there's very little overlap in the set of scenarios in which a person would want to do this and the set of scenarios in which a high frame rate is of paramount importance?


The simple fact is, if it's the only way to do it, and that's what you want to accomplish, then that's how you have to accomplish it. And saying "just keep the surface around" is kind of silly; why would anyone *need* to read back texture data that they already have in the first place? It seems to me that the only reason to try and do something like this is to read the results back off of a texture with a render target attached, in which case the standard "just keep the surface around" line is worse than useless as advice.


Just sayin'...



Mason




On Friday, August 29, 2014 5:45 PM, Sik the hedgehog wrote:



Alex Szpakowski
Guest

Knowledge about how the rendering pipeline works backs it up. :)


GPUs operate asynchronously from CPUs and are heavily pipelined. When you issue a rendering command (e.g. glDrawArrays, or SDL_RenderCopy in SDL_Render) the CPU will submit the command to a queue which the GPU works through on its own time, after the CPU has generated lower level command(s) from the function call.


Because of all the pipelining, GPUs tend to be 1 (or more) whole frames behind the CPU in terms of what's being processed right at that moment. What that means for things like reading texture data back from the GPU to the CPU is that the CPU will have to block until the GPU has completed that entire frame's worth of work before the readback function can return, since the GPU generally processes commands in a FIFO manner.


There are ways to make the transfer asynchronous rather than blocking, but you would still need to wait 1+ frames before the data is available. At 60fps, 1 frame is about 17 milliseconds. That would be 17ms just to get a single texel's color from a texture in the GPU.


As mentioned earlier, SDL_Texture objects that were created with the streaming flag keep a CPU-side copy of their data, so this isn’t a problem for that case.

On Sep 2, 2014, at 9:01 PM, Mason Wheeler wrote:
Pallav Nawani


Joined: 19 May 2011
Posts: 122
Location: Dehradun, India
SDL_Texture objects with the streaming flag just keep a buffer. It is considered write-only, and it is not guaranteed that its pixels are a copy of the texture data. The buffer is initialised with zeroes, IIRC.

Pallav Nawani
IronCode Gaming Private Limited
Website: http://www.ironcode.com
Twitter:  http://twitter.com/Ironcode_Gaming
Facebook: http://www.facebook.com/Ironcode.Gaming
Mobile: 9997478768



On Thu, Sep 4, 2014 at 1:59 AM, Alex Szpakowski wrote:

MrTAToad


Joined: 13 Feb 2014
Posts: 205
Location: Chichester, England
Which is why a surface of the graphic is needed, unfortunately - it may be inefficient and use more memory, but if the data retrieved from the appropriate functions isn't what is displayed, then keeping surfaces around is the only way.

Perhaps there needs to be a proper read function, regardless of the processor cost.
Jonny D


Joined: 12 Sep 2009
Posts: 932
Well, there *should* be some API to pull the texture data back from the GPU. With render targets you can manipulate texture data on the GPU, so it is important to be able to read the changed data back later, even if a GPU flush, block, and data transfer are the necessary costs.


Jonny D
Sam Lantinga


Joined: 10 Sep 2009
Posts: 1765
There is, it's called SDL_RendererReadPixels() :)


On Fri, Sep 5, 2014 at 6:02 AM, Jonathan Dearborn wrote:

Sam Lantinga


Joined: 10 Sep 2009
Posts: 1765
Er, SDL_RenderReadPixels()


On Wed, Sep 10, 2014 at 9:15 AM, Sam Lantinga wrote:




MrTAToad


Joined: 13 Feb 2014
Posts: 205
Location: Chichester, England
Unfortunately that reads from a renderer and not from a texture, which would mean a renderer would have to be created to access the data, the texture copied onto the renderer, and the pixels then read back...
Alex Szpakowski
Guest

It’s for the current render target in the renderer, so if the texture is created with SDL_TEXTUREACCESS_TARGET then you can call SDL_SetRenderTarget and SDL_RenderReadPixels.

On Sep 10, 2014, at 7:25 PM, MrTAToad wrote:
Sanette
Guest

In my experience, keeping a surface, modifying it, and converting it to a texture is SLOWER (*) than keeping the pixel array (given by SDL_RenderReadPixels), modifying it, and updating the texture with SDL_UpdateTexture.

And even if you don't want to "keep anything around", the whole process [change render target, SDL_RenderReadPixels, then SDL_UpdateTexture] is still faster (**) than converting a surface to a texture.

(*) = by a factor of 1/2 or even 1/4, depending on the complexity of the image.
(**) but only slightly faster, by a factor of 1.2 or 1.5 in my case.

S.

On 03/09/2014 at 02:01, Mason Wheeler wrote:
