SDL_RenderGeometry implementation
Mason Wheeler
Guest
What exactly does this do if you give it a texture?
From: Gabriel Jacobo
To: SDL Development List
Sent: Tuesday, February 26, 2013 8:22 AM
Subject: [SDL] SDL_RenderGeometry implementation

I've been trying for a while to post this in Bugzilla but it seems it's still down (I'll post it there when it comes back to life). Anyway, attached you'll find a patch (gzip'ed due to size restrictions on the list) that implements the following functions on OpenGL/ES/ES2:

```c
extern DECLSPEC int SDL_RenderGeometry(SDL_Renderer * renderer, SDL_Texture *texture,
                                       SDL_Vertex *vertices, int num_vertices,
                                       int *indices, int num_indices,
                                       const SDL_Vector2f *translation);
extern DECLSPEC int SDL_EnableScissor(SDL_Renderer * renderer);
extern DECLSPEC int SDL_DisableScissor(SDL_Renderer * renderer);
extern DECLSPEC int SDL_ScissorRegion(SDL_Renderer * renderer, const SDL_Rect *region);
```

I've used most of this to integrate libRocket in my engine, and given that now I want to integrate the Spine runtime and they use a similar mechanism (via SFML), I figured abstracting the functionality inside SDL would be beneficial to others. SDL_RenderGeometry can also be thought of as a superset of RenderCopy/Ex and the drawing functions, given that it can work with and without a texture (if the texture is NULL, vertex-colored primitives are rendered). As this new addition will encounter a natural "anti bloat" filter, I suggest those interested in seeing this get accepted voice their opinion.

Thanks,

-- Gabriel.

_______________________________________________
SDL mailing list
http://lists.libsdl.org/listinfo.cgi/sdl-libsdl.org
gabomdq
2013/2/26 Mason Wheeler
SDL_RenderGeometry renders the provided vertices, optionally ordered by the given indices. A primitive is rendered at each vertex position, textured with the attached texture using the vertices' texture coordinates (think of a texture atlas) and colored with the vertex color. If no texture is provided, it all works exactly the same but you get plain colored primitives drawn.

-- Gabriel.
Martin Gerhardy
Guest
On 26.02.2013 17:35, Gabriel Jacobo wrote:
Nice - I've worked on something like this too, but lacked the software version of it. What do you plan here? Do you also want to provide a software version? If not, maybe there should be a GL_ (as in SDL_GL_*) prefix?
gabomdq
2013/2/26 Martin Gerhardy
I was hoping no one would mention the software backend :) The short version of my reasoning is that I don't think a software version is feasible; it would be a lot of work, and the performance would in all likelihood be terrible. A Direct3D version (the other major hardware-accelerated backend), on the other hand, would be a nice thing to have...

-- Gabriel.
MrOzBarry
I may have a DirectX 11/Direct3D version sitting around here that I could submit if needed. I'm not sure how that would integrate into SDL (does SDL even use DirectX as a backend on Windows?)
On Tue, Feb 26, 2013 at 12:20 PM, Gabriel Jacobo wrote:
gabomdq
2013/2/26 Alex Barry
FYI, the bug URL is http://bugzilla.libsdl.org/show_bug.cgi?id=1734

Do I understand correctly that you have something like SDL_RenderGeometry implemented for D3D? It would need to be adapted to D3D v9 though, which is what SDL uses (and yes, on Windows both OpenGL and D3D are available as renderer backends).

-- Gabriel.
MrOzBarry
I do have some similar code that could be adapted, but like I said, it's for DirectX 11 - probably after some fiddling we could get it to work.
On Tue, Feb 26, 2013 at 12:45 PM, Gabriel Jacobo wrote:
Sik
As for the software implementation: while yeah, it may possibly be slow... this stands true for other render functions too, and beyond a given size pretty much all software rendering is slow, so I'm not sure that counts as an excuse :/ I'd rather have a slow software renderer than a broken one.

2013/2/26, Alex Barry:
Pallav Nawani
Thanks for the very nice patch.
I don't think this is bloat at all. IMO this is the only thing missing from the SDL Render API right now. Allowing rotation would be even nicer :)

On Wed, Feb 27, 2013 at 6:41 AM, Sik the hedgehog wrote:
--
Pallav Nawani
IronCode Gaming Private Limited
Website: http://www.ironcode.com
Twitter: http://twitter.com/Ironcode_Gaming
Facebook: http://www.facebook.com/Ironcode.Gaming
Mobile: 9997478768
Martin Gerhardy
Guest
On 27.02.2013 06:18, Pallav Nawani wrote:
Jonny D
Isn't this function for polygon rendering?
The software rendering path should only be considered for result comparison, not performance.

Jonny D

On Wed, Feb 27, 2013 at 4:52 AM, Martin Gerhardy wrote:
gabomdq
2013/2/27 Jonathan Dearborn
The function is not designed for polygon rendering (in the sense that you could throw it a set of sequential vertices and it would render a closed polygon), but it can certainly be used for that with some data massaging.

Regarding the software rendering path, I understand your point, but it's still a lot of work for what's mostly an academic exercise. I respect that people may want to give it relevance and even make a software renderer implementation of SDL_RenderGeometry; I just won't be the one to do it :)

Also, I don't think accepting functionality such as this for a subset of the renderers diminishes SDL in any way; you will just get an unsupported error, and that situation will be clearly explained in the documentation. It's not that different from the proposed sensor API, which as it is now only works on Android.

-- Gabriel.
Sik
The situation is different, though. The sensor API only works on Android because the entire SDL hardware support for it only works on Android. Here we'd be talking about having some video rendering functionality supported on some renderers and not on others - and not for hardware support reasons.

If we applied this logic to the rest of the rendering API then we'd end up with everybody sticking to the few functions that work everywhere because the rest are unpredictable, effectively rendering them useless. Imagine if function A was supported on OpenGL but not Direct3D, while function B was supported on Direct3D but not OpenGL... Would you want such a scenario?

And yes, I know this is regarding the software renderer, but this is the kind of stuff we'll have to cope with as long as the software renderer is supported (and I can see why it's still supported, especially given that GPU support on some systems can be quite poor, and for programs with reasonably simple graphics it may not be worth throwing out an error when software rendering would do).

2013/2/27, Gabriel Jacobo:
Jonny D
I think it's important that this functionality "can" be supported on every renderer and isn't hardware-specific. Some other feature might not be the same. In the end, if your code doesn't run fast enough on one platform, you would either reconsider targeting that platform or rework the bottlenecks. It's not a very different story from non-SDL code.
Jonny D

On Wed, Feb 27, 2013 at 9:11 AM, Sik the hedgehog wrote:
Jared Maddox
Guest
I haven't looked to see if anyone's taken a stab at it in Hg, but here's a rough-draft implementation of a triangle renderer that could be used for a software implementation:

```c
#include <math.h>   /* sqrt, ceil, floor */
#include "SDL.h"    /* SDL_Point, SDL_Rect, SDL_EnclosePoints */

/* 'texture' and build_texture() are placeholders for whatever pixel-buffer
 * type the software renderer uses internally. */

typedef struct SDL_Vertex
{
    double x, y;
} SDL_Vertex;

int SDL_PointToVertex( SDL_Point *p, SDL_Vertex *v )
{
    if( !p || !v )
    {
        return( -1 );
    }
    v->x = p->x;
    v->y = p->y;
    return( 1 );
}

int SDL_VertexLength( SDL_Vertex *v, double *ret )
{
    if( v && ret )
    {
        *ret = sqrt( v->x * v->x + v->y * v->y );
        return( 1 );
    }
    return( -1 );
}

/* Note: takes double* rather than int*, since the results are stored in
 * the double arrays below; the original draft also dereferenced a
 * nonexistent 'p' member here. */
int SDL_DotProduct( SDL_Vertex *a, SDL_Vertex *b, double *ret )
{
    if( !a || !b || !ret )
    {
        return( -1 );
    }
    *ret = a->x * b->x + a->y * b->y;
    return( 1 );
}

int SDL_VertexScale( SDL_Vertex *v, double factor, SDL_Vertex *ret )
{
    if( v && ret )
    {
        ret->x = v->x * factor;
        ret->y = v->y * factor;
        return( 1 );
    }
    return( -1 );
}

int SDL_VertexAdd( SDL_Vertex *a, SDL_Vertex *b, SDL_Vertex *ret )
{
    if( a && b && ret )
    {
        ret->x = a->x + b->x;
        ret->y = a->y + b->y;
        return( 1 );
    }
    return( -1 );
}

int SDL_VertexSubtract( SDL_Vertex *a, SDL_Vertex *b, SDL_Vertex *ret )
{
    if( a && b && ret )
    {
        ret->x = a->x - b->x;
        ret->y = a->y - b->y;
        return( 1 );
    }
    return( -1 );
}

int SDL_VertexEquality( SDL_Vertex *a, SDL_Vertex *b, int *ret )
{
    if( a && b && ret )
    {
        *ret = ( a->x == b->x );
        *ret *= ( a->y == b->y );
        return( 1 );
    }
    return( -1 );
}

int SDL_InlineVertexEquality( SDL_Vertex *a, SDL_Vertex *b )
{
    static SDL_Vertex zerovert = { 0.0, 0.0 };
    int res = 0;
    SDL_VertexEquality( ( a ? a : &zerovert ), ( b ? b : &zerovert ), &res );
    return( res );
}

/* Note: the draft passes SDL_Point* where the helpers expect SDL_Vertex*;
 * a real version would convert via SDL_PointToVertex() first. */
int SDL_RenderTriangle( const SDL_Point *corners, const texture *tex,
    const SDL_Point *mapping, SDL_Point *upperleft, texture **result )
{
    /* This is an implementation of a triangle renderer based on the second */
    /* technique for testing whether a pixel is inside of a triangle, as */
    /* described here: */
    /* http://www.blackpawn.com/texts/pointinpoly/default.html */
    /* Lots of constants have been optimized out of this, e.g. */
    /* SDL_BuildTriangleRefs(). */
    /* The basic idea is that you apply some scaling factors to two vectors */
    /* that represent two of the triangle's edges. If the value of either of */
    /* these is negative, then from the 'origin corner' you moved AWAY from */
    /* the body of the triangle, if either is over 1.0 then you went PAST the */
    /* body of the triangle, and if the two combined are over 1.0 then you */
    /* went past the third edge of the triangle. */
    /* I took that set of formulas, applied them to find the corners (well, */
    /* three corners, it IS a parallelogram, so finding the fourth is just */
    /* addition) of individual destination pixels as translated to the source */
    /* texture, and built some looping to iterate the destination pixels, AND */
    /* to iterate the source pixels (specifically dividing each source pixel */
    /* into two, producing a sub-pixel sampling system). */
    /* I've tried to optimize everything that only needs to be calculated */
    /* once so that it really does only calculate once. One instance involves */
    /* testing a "count" variable, and isn't necessarily immediately obvious. */

    if( !corners ||
        SDL_InlineVertexEquality( &( corners[ 0 ] ), &( corners[ 1 ] ) ) ||
        SDL_InlineVertexEquality( &( corners[ 1 ] ), &( corners[ 2 ] ) ) ||
        SDL_InlineVertexEquality( &( corners[ 2 ] ), &( corners[ 0 ] ) ) )
    {
        return( -1 );
    }
    if( !tex )
    {
        return( -2 );
    }
    if( !mapping ||
        SDL_InlineVertexEquality( &( mapping[ 0 ] ), &( mapping[ 1 ] ) ) ||
        SDL_InlineVertexEquality( &( mapping[ 1 ] ), &( mapping[ 2 ] ) ) ||
        SDL_InlineVertexEquality( &( mapping[ 2 ] ), &( mapping[ 0 ] ) ) )
    {
        return( -3 );
    }
    if( !upperleft )
    {
        return( -4 );
    }
    if( !result )
    {
        return( -5 );  /* was 'result( -5 )', an obvious typo */
    }

    /* There shouldn't be any references to an 'm', but just in case, it's a */
    /* two-vertex array. */
    SDL_Vertex dest[ 3 ], src[ 3 ], rbase, rref[ 2 ], read, scratch[ 2 ];
    /* We use scan to build bounding boxes, specifically for pixels. */
    SDL_Point scan = { 1, 1 };
    SDL_Rect bounds;
    double destdots[ 6 ], srcdots[ 6 ], destdiv, srcdiv, u[ 4 ], v[ 4 ];
    double uscan, vscan, ui, vi, count = -1.0, rchan, gchan, bchan, achan;

    /* This describes our workspace. */
    SDL_EnclosePoints( corners, 3, (const SDL_Rect*)0, &bounds );
    /* And this actually builds it. */
    texture *ret = build_texture( bounds.w, bounds.h );

    /* Destination reference 1. */
    dest[ 0 ].x = ( corners[ 1 ].x - corners[ 0 ].x ) + 1;
    dest[ 0 ].y = ( corners[ 1 ].y - corners[ 0 ].y ) + 1;
    /* Destination reference 2. */
    dest[ 1 ].x = ( corners[ 2 ].x - corners[ 0 ].x ) + 1;
    dest[ 1 ].y = ( corners[ 2 ].y - corners[ 0 ].y ) + 1;

    /* We should be able to calculate these once, so we will. */
    SDL_DotProduct( &( dest[ 0 ] ), &( dest[ 0 ] ), &( destdots[ 0 ] ) );
    SDL_DotProduct( &( dest[ 0 ] ), &( dest[ 1 ] ), &( destdots[ 2 ] ) );
    SDL_DotProduct( &( dest[ 1 ] ), &( dest[ 1 ] ), &( destdots[ 4 ] ) );
    SDL_DotProduct( &( dest[ 1 ] ), &( dest[ 0 ] ), &( destdots[ 5 ] ) );
    /* Same with this. */
    destdiv = ( destdots[ 4 ] * destdots[ 0 ] ) -
        ( destdots[ 5 ] * destdots[ 2 ] );

    /* Source reference 1. */
    src[ 0 ].x = mapping[ 1 ].x - mapping[ 0 ].x;
    src[ 0 ].y = mapping[ 1 ].y - mapping[ 0 ].y;
    /* Source reference 2. */
    src[ 1 ].x = mapping[ 2 ].x - mapping[ 0 ].x;
    src[ 1 ].y = mapping[ 2 ].y - mapping[ 0 ].y;
    /* As with dest. */
    SDL_DotProduct( &( src[ 0 ] ), &( src[ 0 ] ), &( srcdots[ 0 ] ) );
    SDL_DotProduct( &( src[ 0 ] ), &( src[ 1 ] ), &( srcdots[ 2 ] ) );
    SDL_DotProduct( &( src[ 1 ] ), &( src[ 1 ] ), &( srcdots[ 4 ] ) );
    SDL_DotProduct( &( src[ 1 ] ), &( src[ 0 ] ), &( srcdots[ 5 ] ) );
    /* Yada. */
    srcdiv = ( srcdots[ 4 ] * srcdots[ 0 ] ) -
        ( srcdots[ 5 ] * srcdots[ 2 ] );

    while( scan.y < bounds.h )
    {
        while( scan.x < bounds.w )
        {
            /* We only need to do this once per loop. This sets dest[ 2 ] to */
            /* point to the center of the current pixel. */
            dest[ 2 ].x = scan.x - ( 0.5 + corners[ 0 ].x );
            dest[ 2 ].y = scan.y - ( 0.5 + corners[ 0 ].y );

            /* Calculate the pixel-center itself, in terms of u & v (which */
            /* are the ratio of two sides of the triangle which will allow */
            /* you to reach the point). */
            SDL_DotProduct( &( dest[ 2 ] ), &( dest[ 1 ] ), &( destdots[ 1 ] ) );
            SDL_DotProduct( &( dest[ 2 ] ), &( dest[ 0 ] ), &( destdots[ 3 ] ) );
            u[ 0 ] = ( ( destdots[ 0 ] * destdots[ 1 ] ) -
                ( destdots[ 2 ] * destdots[ 3 ] ) ) / destdiv;
            v[ 0 ] = ( ( destdots[ 4 ] * destdots[ 3 ] ) -
                ( destdots[ 5 ] * destdots[ 1 ] ) ) / destdiv;

            /* Test to see if the pixel is in bounds. */
            if( !( u[ 0 ] < 0.0 || v[ 0 ] < 0.0 || u[ 0 ] > 1.0 ||
                v[ 0 ] > 1.0 || u[ 0 ] + v[ 0 ] > 1.0 ) )
            {
                /* This bit of code is somewhat "heavy", but I've optimized */
                /* what I think I can. */
                /* This bit involves moving to the corners again, but... */

                /* Bottom-left pixel-corner u & v. */
                dest[ 2 ].x -= 0.5;
                dest[ 2 ].y += 0.5;
                SDL_DotProduct( &( dest[ 2 ] ), &( dest[ 1 ] ), &( destdots[ 1 ] ) );
                SDL_DotProduct( &( dest[ 2 ] ), &( dest[ 0 ] ), &( destdots[ 3 ] ) );
                u[ 2 ] = ( ( destdots[ 0 ] * destdots[ 1 ] ) -
                    ( destdots[ 2 ] * destdots[ 3 ] ) ) / destdiv;
                v[ 2 ] = ( ( destdots[ 4 ] * destdots[ 3 ] ) -
                    ( destdots[ 5 ] * destdots[ 1 ] ) ) / destdiv;

                /* Top-left pixel-corner u & v. */
                dest[ 2 ].y -= 1;
                SDL_DotProduct( &( dest[ 2 ] ), &( dest[ 1 ] ), &( destdots[ 1 ] ) );
                SDL_DotProduct( &( dest[ 2 ] ), &( dest[ 0 ] ), &( destdots[ 3 ] ) );
                u[ 0 ] = ( ( destdots[ 0 ] * destdots[ 1 ] ) -
                    ( destdots[ 2 ] * destdots[ 3 ] ) ) / destdiv;
                v[ 0 ] = ( ( destdots[ 4 ] * destdots[ 3 ] ) -
                    ( destdots[ 5 ] * destdots[ 1 ] ) ) / destdiv;

                /* Top-right pixel-corner u & v. */
                dest[ 2 ].x += 1;
                SDL_DotProduct( &( dest[ 2 ] ), &( dest[ 1 ] ), &( destdots[ 1 ] ) );
                SDL_DotProduct( &( dest[ 2 ] ), &( dest[ 0 ] ), &( destdots[ 3 ] ) );
                u[ 1 ] = ( ( destdots[ 0 ] * destdots[ 1 ] ) -
                    ( destdots[ 2 ] * destdots[ 3 ] ) ) / destdiv;
                v[ 1 ] = ( ( destdots[ 4 ] * destdots[ 3 ] ) -
                    ( destdots[ 5 ] * destdots[ 1 ] ) ) / destdiv;

                /* You may think that the bottom-right pixel is missing here. */
                /* It isn't. */
                /* We get the three sets of u/v values above so that we can */
                /* use them with the src[] vertexes to work out the position */
                /* of three corners of the box within the texture that we'll */
                /* be pulling from. Those u/v values are tied to the geometry */
                /* instead of any space, so we can use them to translate */
                /* locations like that. We then work out the vertexes of that */
                /* box, and (only once, because that'll cover us for the rest */
                /* of the iterations) work out how far to iterate the u & v */
                /* values we use on THOSE vertexes so that we average 4 */
                /* samples per pixel (thereby catching "all" of the important */
                /* pixels). The rest is simple application of the formula */
                /*     P = A + u * ( C - A ) + v * ( B - A ) */
                /* iterating u & v with the values that we just worked out, */
                /* so that we can figure out which texture pixels to sample. */

                /* Pixel base point. */
                SDL_VertexScale( &( src[ 0 ] ), v[ 0 ], &( scratch[ 1 ] ) );
                SDL_VertexScale( &( src[ 1 ] ), u[ 0 ], &( scratch[ 0 ] ) );
                SDL_VertexAdd( &( scratch[ 0 ] ), &( scratch[ 1 ] ), &read );
                SDL_VertexAdd( &read, &( mapping[ 0 ] ), &rbase );

                /* Pixel relative vector 1. */
                SDL_VertexScale( &( src[ 0 ] ), v[ 1 ], &( scratch[ 1 ] ) );
                SDL_VertexScale( &( src[ 1 ] ), u[ 1 ], &( scratch[ 0 ] ) );
                SDL_VertexAdd( &( scratch[ 0 ] ), &( scratch[ 1 ] ), &read );
                SDL_VertexAdd( &read, &( mapping[ 0 ] ), &( scratch[ 0 ] ) );
                SDL_VertexSubtract( &( scratch[ 0 ] ), &rbase, &( rref[ 0 ] ) );

                /* Pixel relative vector 2. */
                SDL_VertexScale( &( src[ 0 ] ), v[ 2 ], &( scratch[ 1 ] ) );
                SDL_VertexScale( &( src[ 1 ] ), u[ 2 ], &( scratch[ 0 ] ) );
                SDL_VertexAdd( &( scratch[ 0 ] ), &( scratch[ 1 ] ), &read );
                SDL_VertexAdd( &read, &( mapping[ 0 ] ), &( scratch[ 0 ] ) );
                SDL_VertexSubtract( &( scratch[ 0 ] ), &rbase, &( rref[ 1 ] ) );

                /* We only need to calculate ui & vi once, since the pixel */
                /* size should be the same everywhere. To do this, we cheat */
                /* by using count with a value that it should never have */
                /* twice. */
                if( count == -1.0 )
                {
                    double tmp;
                    SDL_VertexLength( &( rref[ 1 ] ), &tmp );
                    ui = ceil( tmp * 2 );
                    ui = ui ? 1.0 / ui : 0.5;
                    SDL_VertexLength( &( rref[ 0 ] ), &tmp );
                    /* The original post assigned to ui again and took */
                    /* ceil( scratch ) here, which looks like a typo; this */
                    /* mirrors the ui calculation. */
                    vi = ceil( tmp * 2 );
                    vi = vi ? 1.0 / vi : 0.5;
                }

                /* Zero out the data channels. */
                rchan = 0.0;
                gchan = 0.0;
                bchan = 0.0;
                achan = 0.0;
                /* Zero out the iterators. */
                uscan = 0.0;
                vscan = 0.0;
                /* Note: this will keep us from calculating ui & vi again. */
                count = 0.0;

                while( uscan < 1.0 )
                {
                    while( vscan < 1.0 )
                    {
                        /* Calculate the current point. */
                        SDL_VertexScale( &( rref[ 0 ] ), vscan, &( scratch[ 0 ] ) );
                        SDL_VertexAdd( &( scratch[ 0 ] ), &( mapping[ 0 ] ), &( scratch[ 1 ] ) );
                        SDL_VertexScale( &( rref[ 1 ] ), uscan, &( scratch[ 0 ] ) );
                        SDL_VertexAdd( &( scratch[ 0 ] ), &( scratch[ 1 ] ), &read );

                        /* If we wanted to be more accurate, we would do an */
                        /* edge and corner test here to produce an intensity */
                        /* scaling factor. We would then use the factor */
                        /* before these additions... */
                        rchan += tex->pixels[ (int)floor( read.x ) ][ (int)floor( read.y ) ].r;
                        gchan += tex->pixels[ (int)floor( read.x ) ][ (int)floor( read.y ) ].g;
                        bchan += tex->pixels[ (int)floor( read.x ) ][ (int)floor( read.y ) ].b;
                        achan += tex->pixels[ (int)floor( read.x ) ][ (int)floor( read.y ) ].a;
                        /* And add it to count, instead of 1.0. */
                        count += 1.0;

                        vscan += vi;
                    }
                    vscan = 0.0;
                    uscan += ui;
                }

                /* Now for count's magic: it's intended to scale our colors */
                /* back down to more reasonable values. */
                ret->pixels[ scan.x - 1 ][ scan.y - 1 ].r = rchan / count;
                ret->pixels[ scan.x - 1 ][ scan.y - 1 ].g = gchan / count;
                ret->pixels[ scan.x - 1 ][ scan.y - 1 ].b = bchan / count;
                ret->pixels[ scan.x - 1 ][ scan.y - 1 ].a = achan / count;
            }
            /* If we're outside of the triangle, all that we care about is */
            /* iterating to the next pixel. */

            /* Increment receiving point. */
            ++scan.x;
        }
        /* Increment line. */
        scan.x = 0;
        ++scan.y;
    }

    *result = ret;
    return( 1 );
}
```

It doesn't go the full mile when blending pixels, and it only supports an RGBA format, but it should hopefully be enough to work from. I'm personally of the opinion that this handles at least part of the most difficult portions of implementing a software renderer. To handle different pixel formats, I'd suggest replacing the parts of the code that actually touch the pixels with function handles.

A faster (but lower-quality) way to get pixel colors would be to replace the code controlled by:

```c
if( !( u[ 0 ] < 0.0 || v[ 0 ] < 0.0 || u[ 0 ] > 1.0 ||
    v[ 0 ] > 1.0 || u[ 0 ] + v[ 0 ] > 1.0 ) )
```

with some code that just gets the color of the texture pixel closest to the center of the destination pixel.
Jared Maddox
Guest
Correction: every occurrence of `ret->pixels[ scan.x - 1 ][ scan.y - 1 ]` should actually be `ret->pixels[ bounds.x + scan.x - 1 ][ bounds.y + scan.y - 1 ]`.
René Dudfield
Guest
Hi,
SDL_gfx has some routines that could be useful here :)

regards,
Nathaniel J Fries
There are some 3D software rasterizers that achieve reasonable performance, so there's no reason to believe that SDL's software renderer couldn't be a production renderer (at least on modern PC systems), even for rotated and vertex-based rendering.
I do have some recommended optimizations:

1) Force all textures to use the same pixel format as the framebuffer. This removes any format conversion overhead and, unless an alpha channel or colorkey is involved, enables the use of SDL_CopyBlit to optimize normal blits.

2) Force all normal (non-target) textures to run-length encode any colorkey and alpha=0 pixel sequences. We shouldn't even need to touch these pixels at render time.

3) Modify the RLE format to also separate alpha != 255 sequences from alpha = 255 sequences, allowing a direct copy of any wholly opaque pixels.

The result would be that only semi-transparent pixels (which are fairly rare) require anything beyond an optimized block transfer to render. I'm not sure if this would optimize rotations or polygonal rendering much at all (obviously there is the potential of iteration and calculation overhead being lessened), and I'm not familiar enough with these techniques to point out possible optimizations. But I may be too late, as these changes would probably constitute an ABI change, which Sam says isn't gonna happen.