Some MMX fixes from Patrick Baggett.
Original email...

Date: Sat, 10 Sep 2011 13:01:20 -0500
From: Patrick Baggett
To: SDL Development List <sdl@lists.libsdl.org>
Subject: Re: [SDL] SDL_memcpyMMX uses SSE instructions

In SDL_blit_copy.c, the function SDL_memcpyMMX() actually uses SSE
instructions.

It is called in this context:

#ifdef __MMX__
    if (SDL_HasMMX() &&
        !((uintptr_t) src & 7) && !(srcskip & 7) &&
        !((uintptr_t) dst & 7) && !(dstskip & 7)) {
        while (h--) {
            SDL_memcpyMMX(dst, src, w);
            src += srcskip;
            dst += dstskip;
        }
        _mm_empty();
        return;
    }
#endif

This implies that the minimum CPU features are just MMX. There is a
separate SDL_memcpySSE() function.
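
For comparison, the SSE path earlier in SDL_BlitCopy() is gated on the
run-time SDL_HasSSE() check plus 16-byte alignment tests before it calls
SDL_memcpySSE(); roughly like this (reconstructed for illustration, not
quoted verbatim from SDL_blit_copy.c):

#ifdef __SSE__
    if (SDL_HasSSE() &&
        !((uintptr_t) src & 15) && !(srcskip & 15) &&
        !((uintptr_t) dst & 15) && !(dstskip & 15)) {
        while (h--) {
            SDL_memcpySSE(dst, src, w);
            src += srcskip;
            dst += dstskip;
        }
        return;
    }
#endif

The MMX branch is therefore the one that has to run on MMX-only CPUs
(e.g. a Pentium II, which has MMX but no SSE).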


The SDL_memcpyMMX() function does:

#ifdef __SSE__
        _mm_prefetch(src, _MM_HINT_NTA);
#endif

...which tests at compile time whether SSE intrinsics are available, not at
run time. It generates the PREFETCHNTA instruction. The function also uses the
_mm_stream_pi() intrinsic, which generates the MOVNTQ instruction.
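
A possible alternative, sketched here only for illustration (the committed
fix below instead drops the SSE instructions entirely), would be to pair the
compile-time test with SDL's run-time check so the prefetch only executes on
CPUs that actually report SSE:

#ifdef __SSE__
        /* Sketch only: SDL_HasSSE() is SDL's run-time SSE check; in real
           code its result would be hoisted out of the copy loop instead of
           being queried on every iteration. */
        if (SDL_HasSSE()) {
            _mm_prefetch((const char *) src, _MM_HINT_NTA);
        }
#endif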

If you replace the "MMX" code with:

__m64* d64 = (__m64*)dst;
__m64* s64 = (__m64*)src;
for (i = len / 64; i--;) {
    d64[0] = s64[0];
    d64[1] = s64[1];
    d64[2] = s64[2];
    d64[3] = s64[3];
    d64[4] = s64[4];
    d64[5] = s64[5];
    d64[6] = s64[6];
    d64[7] = s64[7];
    d64 += 8;
    s64 += 8;
}

Then MSVC generates the correct movq instructions. GCC (4.5.0) seems to
think that using 2x movl is still better, but then again, GCC isn't actually
that good at optimizing intrinsics, as I've found. At least the code won't
crash on my P2 though. :)

Also, there is no requirement for MMX data to be 8-byte aligned. I
think the author assumed that SSE's 16-byte alignment requirement must
retroactively mean that MMX requires 8-byte alignment. Attached is the full
patch.
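
A small stand-alone check of the alignment claim (illustrative only, not
part of the patch; built with MMX enabled, e.g. -mmmx on GCC): MOVQ-style
__m64 loads and stores have no architectural alignment requirement on x86,
so copying through deliberately misaligned pointers still works, at worst a
bit slower across cache-line splits.

#include <mmintrin.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned char src[256], dst[256];
    const __m64 *s64;
    __m64 *d64;
    int i;

    for (i = 0; i < 256; i++)
        src[i] = (unsigned char) i;

    /* Neither pointer is 8-byte aligned: src+1 and dst+3.  This relies on
       x86 tolerating unaligned 8-byte accesses, which is the point. */
    s64 = (const __m64 *) (src + 1);
    d64 = (__m64 *) (dst + 3);
    for (i = 0; i < 8; i++)     /* copy 64 bytes, 8 bytes at a time */
        d64[i] = s64[i];
    _mm_empty();                /* EMMS before any FPU use (e.g. printf) */

    printf("%s\n", memcmp(src + 1, dst + 3, 64) == 0 ? "ok" : "mismatch");
    return 0;
}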

Patrick
icculus committed Sep 11, 2011
1 parent a4ceccc commit 3d45248
Showing 1 changed file with 17 additions and 27 deletions.
44 changes: 17 additions & 27 deletions src/video/SDL_blit_copy.c
@@ -56,35 +56,27 @@ SDL_memcpySSE(Uint8 * dst, const Uint8 * src, int len)
 #ifdef _MSC_VER
 #pragma warning(disable:4799)
 #endif
-/* This assumes 8-byte aligned src and dst */
 static __inline__ void
 SDL_memcpyMMX(Uint8 * dst, const Uint8 * src, int len)
 {
     int i;
 
-    __m64 values[8];
-    for (i = len / 64; i--;) {
-#ifdef __SSE__
-        _mm_prefetch(src, _MM_HINT_NTA);
-#endif
-        values[0] = *(__m64 *) (src + 0);
-        values[1] = *(__m64 *) (src + 8);
-        values[2] = *(__m64 *) (src + 16);
-        values[3] = *(__m64 *) (src + 24);
-        values[4] = *(__m64 *) (src + 32);
-        values[5] = *(__m64 *) (src + 40);
-        values[6] = *(__m64 *) (src + 48);
-        values[7] = *(__m64 *) (src + 56);
-        _mm_stream_pi((__m64 *) (dst + 0), values[0]);
-        _mm_stream_pi((__m64 *) (dst + 8), values[1]);
-        _mm_stream_pi((__m64 *) (dst + 16), values[2]);
-        _mm_stream_pi((__m64 *) (dst + 24), values[3]);
-        _mm_stream_pi((__m64 *) (dst + 32), values[4]);
-        _mm_stream_pi((__m64 *) (dst + 40), values[5]);
-        _mm_stream_pi((__m64 *) (dst + 48), values[6]);
-        _mm_stream_pi((__m64 *) (dst + 56), values[7]);
-        src += 64;
-        dst += 64;
+    __m64* d64 = (__m64*)dst;
+    __m64* s64 = (__m64*)src;
+
+    for(i= len / 64; i--;) {
+
+        d64[0] = s64[0];
+        d64[1] = s64[1];
+        d64[2] = s64[2];
+        d64[3] = s64[3];
+        d64[4] = s64[4];
+        d64[5] = s64[5];
+        d64[6] = s64[6];
+        d64[7] = s64[7];
+
+        d64 += 8;
+        s64 += 8;
     }
 
     if (len & 63)
@@ -136,9 +128,7 @@ SDL_BlitCopy(SDL_BlitInfo * info)
 #endif
 
 #ifdef __MMX__
-    if (SDL_HasMMX() &&
-        !((uintptr_t) src & 7) && !(srcskip & 7) &&
-        !((uintptr_t) dst & 7) && !(dstskip & 7)) {
+    if (SDL_HasMMX()) {
         while (h--) {
             SDL_memcpyMMX(dst, src, w);
             src += srcskip;
