pixel from SDL_Texture
MrTAToad
Unfortunately not - you can only get pixel data from a surface...
What I do in my engine is load and store the graphic and also convert it to a texture - then I can access the data and do hardware rendering.
Naith
If I understand it correctly, it's possible to read pixels from a texture using SDL_LockTexture: lock the texture, take the pixel data as a void pointer, cast the void pointer to a Uint32 pointer, and then read the value of each pixel.
Check this tutorial for more information: http://lazyfoo.net/tutorials/SDL/40_texture_manipulation/index.php
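The cast-and-index step can be sketched in isolation. This is a minimal sketch, not the tutorial's code: `pixels` and `pitch` stand in for the values SDL_LockTexture would hand back (the texture must have been created with SDL_TEXTUREACCESS_STREAMING), and ARGB8888 pixels are assumed.

```c
#include <stdint.h>
#include <stddef.h>

/* Read one ARGB8888 pixel from a locked buffer.
 * `pixels` and `pitch` are what SDL_LockTexture returns: pitch is the
 * length of one row in BYTES, which may be wider than width * 4 due to
 * padding, so rows must be indexed by pitch, not by width. */
static uint32_t get_pixel(const void *pixels, int pitch, int x, int y)
{
    const uint8_t *row = (const uint8_t *)pixels + (size_t)y * pitch;
    return ((const uint32_t *)row)[x];
}
```

The same arithmetic, with a plain assignment instead of a read, is how you would write a pixel before calling SDL_UnlockTexture.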
Sik
Yeah, but reading from textures can absolutely destroy rendering performance, so avoid that unless you're absolutely sure about what you're doing. Surfaces are to be used CPU-side, textures are to be used GPU-side.
Pallav Nawani
Streaming textures already maintain a CPU-side copy of the texture data; SDL_LockTexture only gives access to that copy. Unfortunately, under OpenGL that copy is usually initialised with zeroes. The wiki page https://wiki.libsdl.org/SDL_LockTexture clearly states in the Remarks section that:

"As an optimization, the pixels made available for editing don't necessarily contain the old texture data. This is a write-only operation, and if you need to keep a copy of the texture data you should do that at the application level."

Personally, I feel that if SDL is already maintaining a RAM buffer, then it should make sure it contains the old texture data. If we have to maintain the buffer ourselves, then streaming textures are useless, and that is the reason why I don't use them at all.
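The application-level copy the wiki mandates usually looks like this: keep a tightly packed "shadow" buffer you own, edit it, then copy it row by row into the write-only pointer from SDL_LockTexture. A minimal sketch of just the copy step (the helper name is mine, not an SDL API; ARGB8888 is assumed):

```c
#include <stdint.h>
#include <string.h>
#include <stddef.h>

/* Copy an application-owned, tightly packed w*h ARGB8888 buffer into
 * the write-only pointer returned by SDL_LockTexture. The destination
 * pitch can exceed w * 4 bytes, so the copy must go row by row. */
static void upload_shadow(const uint32_t *shadow, int w, int h,
                          void *locked_pixels, int locked_pitch)
{
    for (int y = 0; y < h; y++) {
        memcpy((uint8_t *)locked_pixels + (size_t)y * locked_pitch,
               shadow + (size_t)y * w,
               (size_t)w * sizeof(uint32_t));
    }
}
```

After the copy, SDL_UnlockTexture pushes the rows to the GPU; the shadow buffer remains readable at all times, which is exactly the copy SDL refuses to guarantee.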
Mason Wheeler
You know, every time this topic comes up, I see that same warning come out again and again, and nothing to back it up.

How great an impact does it have on rendering performance? What is the expected data transfer rate from GPU to main memory? How does it differ between different video card models?

Also, does anyone ever stop to consider, before saying something like this, that there's very little overlap between the set of scenarios in which a person would want to do this and the set of scenarios in which a high frame rate is of paramount importance? The simple fact is, if it's the only way to do it, and that's what you want to accomplish, then that's how you have to accomplish it.

And saying "just keep the surface around" is kind of silly; why would anyone *need* to read back texture data that they already have in the first place? It seems to me that the only reason to try something like this is to read the results back off of a texture with a render target attached, in which case the standard "just keep the surface around" line is worse than useless as advice.

Just sayin'...

Mason
Alex Szpakowski
Knowledge about how the rendering pipeline works backs it up.

GPUs operate asynchronously from CPUs and are heavily pipelined. When you issue a rendering command (e.g. glDrawArrays, or SDL_RenderCopy in SDL_Render), the CPU generates lower-level command(s) from the function call and submits them to a queue, which the GPU works through on its own time. Because of all the pipelining, GPUs tend to be one (or more) whole frames behind the CPU in terms of what's being processed at any given moment.

What that means for reading texture data back from the GPU to the CPU is that the CPU has to block until the GPU has completed that entire frame's worth of work before the readback function can return, since the GPU generally processes commands in FIFO order. There are ways to make the transfer asynchronous rather than blocking, but you would still need to wait one or more frames before the data is available. At 60 fps, one frame is about 17 milliseconds - 17 ms just to get a single texel's color from a texture on the GPU.

As mentioned earlier, SDL_Texture objects that were created with the streaming flag keep a CPU-side copy of their data, so this isn't a problem for that case.
Pallav Nawani
SDL_Texture objects with the streaming flag just keep a buffer. It is considered write-only, and it is not guaranteed that its pixels are a copy of the texture data. The buffer is initialised with zeroes, IIRC.
MrTAToad
Which is why a surface copy of the graphic is needed, unfortunately - it may be inefficient and use more memory, but if the data retrieved from the appropriate functions isn't what is displayed, then using surfaces is the only way.

Perhaps there needs to be a proper reading function, regardless of processor cost.
Jonny D
Well, there *should* be some API to pull the texture data back from the GPU. With render targets, you can manipulate texture data on the GPU, so it is important to be able to read the changed data later, even if it causes a GPU flush, block, and data transfer as necessary costs.

Jonny D
Sam Lantinga
There is, it's called SDL_RendererReadPixels()
Sam Lantinga
Er, SDL_RenderReadPixels()
MrTAToad
Unfortunately that reads from a renderer and not from a texture, which would mean a renderer would have to be created to access the data, the texture copied onto the renderer, and then the pixels read back...
Alex Szpakowski
It’s for the current render target in the renderer, so if the texture is created with SDL_TEXTUREACCESS_TARGET then you can call SDL_SetRenderTarget and SDL_RenderReadPixels.
Sanette
In my experience, keeping a surface, modifying it, and converting it to a texture is SLOWER (*) than keeping the pixel array (given by SDL_RenderReadPixels), modifying it, and updating the texture with SDL_UpdateTexture.

And if you don't want to "keep anything around", then the whole process [changing render target, SDL_RenderReadPixels -> SDL_UpdateTexture] is still faster (**) than converting a surface to a texture.

(*) taking 1/2 or even 1/4 of the time, depending on the complexity of the image
(**) but only slightly faster, by a factor of 1.2 to 1.5 in my case

S.