Commit dda5ddd

- Ported vlinetallasm4 to AMD64 assembly. Even with the increased number of
  registers AMD64 provides, this routine still needs to be written as self-
  modifying code for maximum performance. The additional registers do allow
  for further optimization over the x86 version by allowing all four pixels
  to be in flight at the same time. The end result is that AMD64 ASM is about
  2.18 times faster than AMD64 C and about 1.06 times faster than x86 ASM.
  (For further comparison, AMD64 C and x86 C are practically the same for
  this function.) Should I port any more assembly to AMD64, mvlineasm4 is the
  most likely candidate, but it's not used enough at this point to bother.
  Also, this may or may not work with Linux at the moment, since it doesn't
  have the eh_handler metadata. Win64 is easier, since I just need to
  structure the function prologue and epilogue properly and use some
  assembler directives/macros to automatically generate the metadata. And
  that brings up another point: You need YASM to assemble the AMD64 code,
  because NASM doesn't support the Win64 metadata directives. (A rough C++
  sketch of the four-pixels-in-flight idea follows this list.)
- Added an SSE version of DoBlending. This is strictly C intrinsics.
  VC++ still throws around unnecessary register moves. GCC seems to be
  pretty close to optimal, requiring only about 2 cycles/color. They're
  both faster than my hand-written MMX routine, so I don't need to feel
  bad about not hand-optimizing this for x64 builds. (See the intrinsics
  sketch after this list.)
- Removed an extra instruction from DoBlending_MMX, transposed two
  instructions, and unrolled it once, shaving off about 80 cycles from the
  time required to blend 256 palette entries. Why? Because I tried writing
  a C version of the routine using compiler intrinsics and was appalled by
  all the extra movq's VC++ added to the code. GCC was better, but still
  generated extra instructions. I only wanted a C version because I can't
  use inline assembly with VC++'s x64 compiler, and x64 assembly is a bit
  of a pain. (It's a pain because Linux and Windows have different calling
  conventions, and you need to maintain extra metadata for functions.) So,
  the assembly version stays and the C version stays out.
- Removed all the pixel doubling r_detail modes, since the one platform they
  were intended to assist (486) actually sees very little benefit from them.
- Rewrote CheckMMX in C and renamed it to CheckCPU.
- Fixed: CPUID function 0x80000005 is specified to return detailed L1 cache
  only for AMD processors, so we must not use it on other architectures, or
  we end up overwriting the L1 cache line size with 0 or some other number
  we don't actually understand. (See the vendor-check sketch after this list.)
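
For illustration only, here is a rough C++ sketch of the "four pixels in
flight" idea from the vlinetallasm4 item above. It is not ZDoom's actual
drawer interface: the names, the flat parameter arrays, and the do/while over
a positive count are all assumptions, and the real AMD64 routine additionally
patches the texture-height shift straight into its instruction stream, which
is why it stays self-modifying.

#include <stdint.h>

// Hypothetical sketch: each iteration draws one screen row of four adjacent
// wall columns. The four texture/colormap lookups are independent, so they
// can overlap ("be in flight") instead of serializing one column at a time.
void vline4_sketch(uint8_t *dest, int pitch, int count,
                   const uint8_t *const source[4],   // one texture column each
                   const uint8_t *const colormap[4], // one light level each
                   uint32_t frac[4], const uint32_t step[4], int bits)
{
    do
    {
        dest[0] = colormap[0][source[0][frac[0] >> bits]];
        dest[1] = colormap[1][source[1][frac[1] >> bits]];
        dest[2] = colormap[2][source[2][frac[2] >> bits]];
        dest[3] = colormap[3][source[3][frac[3] >> bits]];
        frac[0] += step[0];
        frac[1] += step[1];
        frac[2] += step[2];
        frac[3] += step[3];
        dest += pitch;
    }
    while (--count);
}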
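
Likewise, a minimal sketch of what an SSE2 DoBlending can look like with pure
intrinsics, assuming the same blend formula as the MMX routine shown further
down (dest = (src * (256 - a) + blendcolor * a) >> 8 per channel) and a count
that is a multiple of four. The name and signature are illustrative, not
necessarily ZDoom's.

#include <emmintrin.h>
#include <stdint.h>

void DoBlending_SSE2_sketch(const uint32_t *from, uint32_t *to, int count,
                            int r, int g, int b, int a)
{
    const __m128i zero     = _mm_setzero_si128();
    // Blend color splatted into every 32-bit lane, widened to 16-bit lanes
    // and pre-multiplied by alpha; the low and high unpacks of a splatted
    // color are identical, so one constant serves both halves.
    const __m128i color    = _mm_set1_epi32((r << 16) | (g << 8) | b);
    const __m128i blend    = _mm_mullo_epi16(_mm_unpacklo_epi8(color, zero),
                                             _mm_set1_epi16((short)a));
    const __m128i invalpha = _mm_set1_epi16((short)(256 - a));

    for (int i = 0; i < count; i += 4)      // four palette entries per pass
    {
        __m128i src = _mm_loadu_si128((const __m128i *)(from + i));
        __m128i lo  = _mm_mullo_epi16(_mm_unpacklo_epi8(src, zero), invalpha);
        __m128i hi  = _mm_mullo_epi16(_mm_unpackhi_epi8(src, zero), invalpha);
        lo = _mm_srli_epi16(_mm_adds_epu16(lo, blend), 8);
        hi = _mm_srli_epi16(_mm_adds_epu16(hi, blend), 8);
        _mm_storeu_si128((__m128i *)(to + i), _mm_packus_epi16(lo, hi));
    }
}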
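
Finally, the gist of the 0x80000005 fix as a hedged sketch, not the actual
CheckCPU code: read the vendor string first and only trust the AMD-specific
extended leaf on "AuthenticAMD" parts, falling back to a safe default
elsewhere. The wrapper, the function name, and the fallback value are
assumptions.

#include <stdint.h>
#include <string.h>

#ifdef _MSC_VER
#include <intrin.h>
static void cpuid(uint32_t leaf, uint32_t regs[4])
{
    __cpuid((int *)regs, (int)leaf);
}
#else
#include <cpuid.h>
static void cpuid(uint32_t leaf, uint32_t regs[4])
{
    if (!__get_cpuid(leaf, &regs[0], &regs[1], &regs[2], &regs[3]))
        regs[0] = regs[1] = regs[2] = regs[3] = 0;
}
#endif

// Hypothetical sketch: only read the L1 line size from CPUID 0x80000005 on
// AMD processors; other vendors do not define that leaf the same way.
static unsigned GetL1LineSize()
{
    uint32_t regs[4];
    char vendor[13] = { 0 };

    cpuid(0, regs);                      // leaf 0: vendor string in EBX,EDX,ECX
    memcpy(vendor + 0, &regs[1], 4);
    memcpy(vendor + 4, &regs[3], 4);
    memcpy(vendor + 8, &regs[2], 4);

    if (strcmp(vendor, "AuthenticAMD") != 0)
        return 32;                       // assumed fallback for non-AMD CPUs

    cpuid(0x80000000u, regs);            // highest supported extended leaf
    if (regs[0] < 0x80000005u)
        return 32;

    cpuid(0x80000005u, regs);            // AMD: ECX describes the L1 data cache
    return regs[2] & 0xFF;               // bits 7:0 = line size in bytes
}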


SVN r1134 (trunk)
Randy Heit committed Aug 9, 2008
1 parent 14e94b8 commit dda5ddd
Showing 37 changed files with 1,097 additions and 1,276 deletions.
41 changes: 41 additions & 0 deletions docs/rh-log.txt
@@ -1,3 +1,20 @@
August 8, 2008
- Ported vlinetallasm4 to AMD64 assembly. Even with the increased number of
registers AMD64 provides, this routine still needs to be written as self-
modifying code for maximum performance. The additional registers do allow
for further optimization over the x86 version by allowing all four pixels
to be in flight at the same time. The end result is that AMD64 ASM is about
2.18 times faster than AMD64 C and about 1.06 times faster than x86 ASM.
(For further comparison, AMD64 C and x86 C are practically the same for
this function.) Should I port any more assembly to AMD64, mvlineasm4 is the
most likely candidate, but it's not used enough at this point to bother.
Also, this may or may not work with Linux at the moment, since it doesn't
have the eh_handler metadata. Win64 is easier, since I just need to
structure the function prologue and epilogue properly and use some
assembler directives/macros to automatically generate the metadata. And
that brings up another point: You need YASM to assemble the AMD64 code,
because NASM doesn't support the Win64 metadata directives.

August 8, 2008 (Changes by Graf Zahl)
- Replaced the ActorInfo definitions of several internal classes with DECORATE definitions
- Converted teleport fog and destinations to DECORATE.
@@ -14,6 +31,23 @@ August 8, 2008 (Changes by Graf Zahl)
- Added aWeaponGiver class to generalize the standing AssaultGun.
- converted a_Strifeweapons.cpp to DECORATE, except for the Sigil.

August 7, 2008
- Added an SSE version of DoBlending. This is strictly C intrinsics.
VC++ still throws around unnecessary register moves. GCC seems to be
pretty close to optimal, requiring only about 2 cycles/color. They're
both faster than my hand-written MMX routine, so I don't need to feel
bad about not hand-optimizing this for x64 builds.
- Removed an extra instruction from DoBlending_MMX, transposed two
instructions, and unrolled it once, shaving off about 80 cycles from the
time required to blend 256 palette entries. Why? Because I tried writing
a C version of the routine using compiler intrinsics and was appalled by
all the extra movq's VC++ added to the code. GCC was better, but still
generated extra instructions. I only wanted a C version because I can't
use inline assembly with VC++'s x64 compiler, and x64 assembly is a bit
of a pain. (It's a pain because Linux and Windows have different calling
conventions, and you need to maintain extra metadata for functions.) So,
the assembly version stays and the C version stays out.

August 7, 2008 (Changes by Graf Zahl)
- Converted the rest of a_strifestuff.cpp to DECORATE.
- Fixed: AStalker::CheckMeleeRange did not perform all checks of AActor::CheckMeleeRange.
@@ -39,6 +73,13 @@ August 7, 2008 (SBARINfO update)
- Fixed: Various bugs I noticed in the fullscreenoffsets code.

August 6, 2008
- Removed all the pixel doubling r_detail modes, since the one platform they
were intended to assist (486) actually sees very little benefit from them.
- Rewrote CheckMMX in C and renamed it to CheckCPU.
- Fixed: CPUID function 0x80000005 is specified to return detailed L1 cache
only for AMD processors, so we must not use it on other architectures, or
we end up overwriting the L1 cache line size with 0 or some other number
we don't actually understand.
- The x87 precision control is now explicitly set for double precision, since
GCC defaults to extended precision instead, unlike Visual C++.

63 changes: 46 additions & 17 deletions src/CMakeLists.txt
@@ -173,11 +173,24 @@ endif( FMOD_LIBRARY )

if( NOT NO_ASM )
find_program( NASM_PATH NAMES ${NASM_NAMES} )
find_program( YASM_PATH yasm )

if( YASM_PATH )
set( ASSEMBLER ${YASM_PATH} )
else( YASM_PATH )
if( X64 )
message( STATUS "Could not find YASM. Disabling assembly code." )
set( NO_ASM ON )
else( X64 )
if( NOT NASM_PATH )
message( STATUS "Could not find YASM or NASM. Disabling assembly code." )
set( NO_ASM ON )
else( NOT NASM_PATH )
set( ASSEMBLER ${NASM_PATH} )
endif( NOT NASM_PATH )
endif( X64 )
endif( YASM_PATH )

if( NOT NASM_PATH )
message( STATUS "Could not find NASM. Disabling assembly code." )
set( NO_ASM ON )
else( NOT NASM_PATH )
# I think the only reason there was a version requirement was because the
# executable name for Windows changed from 0.x to 2.0, right? This is
# how to do it in case I need to do something similar later.
@@ -188,7 +201,6 @@ if( NOT NO_ASM )
# if( NOT NASM_VER LESS 2 )
# message( SEND_ERROR "NASM version should be 2 or later. (Installed version is ${NASM_VER}.)" )
# endif( NOT NASM_VER LESS 2 )
endif( NOT NASM_PATH )
endif( NOT NO_ASM )

if( NOT NO_ASM )
@@ -201,22 +213,31 @@ if( NOT NO_ASM )

# Tell CMake how to assemble our files
if( UNIX )
set( NASM_OUTPUT_EXTENSION .o )
set( NASM_FLAGS -f elf -DM_TARGET_LINUX )
set( ASM_OUTPUT_EXTENSION .o )
if( X64 )
set( ASM_FLAGS -f elf64 -DM_TARGET_LINUX )
else( X64 )
set( ASM_FLAGS -f elf -DM_TARGET_LINUX )
endif( X64 )
else( UNIX )
set( NASM_OUTPUT_EXTENSION .obj )
set( NASM_FLAGS -f win32 -DWIN32 )
set( ASM_OUTPUT_EXTENSION .obj )
if( X64 )
set( ASM_FLAGS -f win64 -DWIN32 -DWIN64 )
else( X64 )
set( ASM_FLAGS -f win32 -DWIN32 )
endif( X64 )
endif( UNIX )
if( WIN32 )
set( FIXRTEXT fixrtext )
endif( WIN32 )
message( STATUS "Selected assembler: ${ASSEMBLER}" )
MACRO( ADD_ASM_FILE infile )
set( ASM_OUTPUT_${infile} "${CMAKE_CURRENT_BINARY_DIR}/CMakeFiles/zdoom.dir/${infile}${NASM_OUTPUT_EXTENSION}" )
set( ASM_OUTPUT_${infile} "${CMAKE_CURRENT_BINARY_DIR}/CMakeFiles/zdoom.dir/${infile}${ASM_OUTPUT_EXTENSION}" )
if( WIN32 )
set( FIXRTEXT_${infile} COMMAND ${FIXRTEXT} "${ASM_OUTPUT_${infile}}" )
endif( WIN32 )
add_custom_command( OUTPUT ${ASM_OUTPUT_${infile}}
COMMAND ${NASM_PATH} ${NASM_FLAGS} -i${CMAKE_CURRENT_SOURCE_DIR}/ -o"${ASM_OUTPUT_${infile}}" "${CMAKE_CURRENT_SOURCE_DIR}/${infile}"
COMMAND ${ASSEMBLER} ${ASM_FLAGS} -i${CMAKE_CURRENT_SOURCE_DIR}/ -o"${ASM_OUTPUT_${infile}}" "${CMAKE_CURRENT_SOURCE_DIR}/${infile}"
${FIXRTEXT_${infile}}
DEPENDS ${infile} ${FIXRTEXT} )
set( ASM_SOURCES ${ASM_SOURCES} "${ASM_OUTPUT_${infile}}" )
@@ -320,14 +341,18 @@ else( WIN32 )
endif( WIN32 )

if( NOT NO_ASM )
ADD_ASM_FILE( a.nas )
ADD_ASM_FILE( misc.nas )
ADD_ASM_FILE( tmap.nas )
ADD_ASM_FILE( tmap2.nas )
ADD_ASM_FILE( tmap3.nas )
if( X64 )
ADD_ASM_FILE( asm_x86_64/tmap3.asm )
else( X64 )
ADD_ASM_FILE( asm_ia32/a.asm )
ADD_ASM_FILE( asm_ia32/misc.asm )
ADD_ASM_FILE( asm_ia32/tmap.asm )
ADD_ASM_FILE( asm_ia32/tmap2.asm )
ADD_ASM_FILE( asm_ia32/tmap3.asm )
endif( X64 )
if( WIN32 )
if( NOT X64 )
ADD_ASM_FILE( win32/wrappers.nas )
ADD_ASM_FILE( win32/wrappers.asm )
endif( NOT X64 )
endif( WIN32 )
endif( NOT NO_ASM )
@@ -482,6 +507,7 @@ add_executable( zdoom WIN32
v_video.cpp
w_wad.cpp
wi_stuff.cpp
x86.cpp
zstrformat.cpp
zstring.cpp
g_doom/a_arachnotron.cpp
@@ -705,6 +731,9 @@ if( CMAKE_COMPILER_IS_GNUCXX )

# Compile this one file with SSE2 support.
set_source_files_properties( nodebuild_classify_sse2.cpp PROPERTIES COMPILE_FLAGS "-msse2 -mfpmath=sse" )

# Need to enable intrinsics for this file.
set_source_files_properties( x86.cpp PROPERTIES COMPILE_FLAGS "-msse2 -mmmx" )
endif( CMAKE_COMPILER_IS_GNUCXX )

if( MSVC )
4 changes: 2 additions & 2 deletions src/am_map.cpp
@@ -1766,8 +1766,8 @@ void AM_Drawer ()
{
f_x = viewwindowx;
f_y = viewwindowy;
f_w = realviewwidth;
f_h = realviewheight;
f_w = viewwidth;
f_h = viewheight;
f_p = screen->GetPitch ();
}
AM_activateNewScale();
File renamed without changes.
200 changes: 200 additions & 0 deletions src/asm_ia32/misc.asm
@@ -0,0 +1,200 @@
;*
;* misc.nas
;* Miscellaneous assembly functions
;*
;*---------------------------------------------------------------------------
;* Copyright 1998-2006 Randy Heit
;* All rights reserved.
;*
;* Redistribution and use in source and binary forms, with or without
;* modification, are permitted provided that the following conditions
;* are met:
;*
;* 1. Redistributions of source code must retain the above copyright
;* notice, this list of conditions and the following disclaimer.
;* 2. Redistributions in binary form must reproduce the above copyright
;* notice, this list of conditions and the following disclaimer in the
;* documentation and/or other materials provided with the distribution.
;* 3. The name of the author may not be used to endorse or promote products
;* derived from this software without specific prior written permission.
;*
;* THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
;* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
;* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
;* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
;* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
;* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
;* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
;* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
;* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
;* THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
;*---------------------------------------------------------------------------
;*

BITS 32

%ifndef M_TARGET_LINUX

%define DoBlending_MMX _DoBlending_MMX
%define BestColor_MMX _BestColor_MMX

%endif

%ifdef M_TARGET_WATCOM
SEGMENT DATA PUBLIC ALIGN=16 CLASS=DATA USE32
SEGMENT DATA
%else
SECTION .data
%endif

Blending256:
dd 0x01000100,0x00000100

%ifdef M_TARGET_WATCOM
SEGMENT CODE PUBLIC ALIGN=16 CLASS=CODE USE32
SEGMENT CODE
%else
SECTION .text
%endif

;-----------------------------------------------------------
;
; DoBlending_MMX
;
; MMX version of DoBlending
;
; (DWORD *from, DWORD *to, count, tor, tog, tob, toa)
;-----------------------------------------------------------

GLOBAL DoBlending_MMX

DoBlending_MMX:
pxor mm0,mm0 ; mm0 = 0
mov eax,[esp+4*4]
shl eax,16
mov edx,[esp+4*5]
shl edx,8
or eax,[esp+4*6]
or eax,edx
mov ecx,[esp+4*3] ; ecx = count
movd mm1,eax ; mm1 = 00000000 00RRGGBB
mov eax,[esp+4*7]
shl eax,16
mov edx,[esp+4*7]
shl edx,8
or eax,[esp+4*7]
or eax,edx
mov edx,[esp+4*2] ; edx = dest
movd mm6,eax ; mm6 = 00000000 00AAAAAA
punpcklbw mm1,mm0 ; mm1 = 000000RR 00GG00BB
movq mm7,[Blending256]
punpcklbw mm6,mm0 ; mm6 = 000000AA 00AA00AA
mov eax,[esp+4*1] ; eax = source
pmullw mm1,mm6 ; mm1 = 000000RR 00GG00BB (multiplied by alpha)
psubusw mm7,mm6 ; mm7 = 000000aa 00aa00aa (one minus alpha)
nop ; Does this actually pair on a Pentium?

; Do four colors per iteration: Count must be a multiple of four.

.loop movq mm2,[eax] ; mm2 = 00r2g2b2 00r1g1b1
add eax,8
movq mm3,mm2 ; mm3 = 00r2g2b2 00r1g1b1
punpcklbw mm2,mm0 ; mm2 = 000000r1 00g100b1
punpckhbw mm3,mm0 ; mm3 = 000000r2 00g200b2
pmullw mm2,mm7 ; mm2 = 0000r1rr g1ggb1bb
add edx,8
pmullw mm3,mm7 ; mm3 = 0000r2rr g2ggb2bb
sub ecx,2
paddusw mm2,mm1
psrlw mm2,8
paddusw mm3,mm1
psrlw mm3,8
packuswb mm2,mm3 ; mm2 = 00r2g2b2 00r1g1b1
movq [edx-8],mm2

movq mm2,[eax] ; mm2 = 00r2g2b2 00r1g1b1
add eax,8
movq mm3,mm2 ; mm3 = 00r2g2b2 00r1g1b1
punpcklbw mm2,mm0 ; mm2 = 000000r1 00g100b1
punpckhbw mm3,mm0 ; mm3 = 000000r2 00g200b2
pmullw mm2,mm7 ; mm2 = 0000r1rr g1ggb1bb
add edx,8
pmullw mm3,mm7 ; mm3 = 0000r2rr g2ggb2bb
sub ecx,2
paddusw mm2,mm1
psrlw mm2,8
paddusw mm3,mm1
psrlw mm3,8
packuswb mm2,mm3 ; mm2 = 00r2g2b2 00r1g1b1
movq [edx-8],mm2
jnz .loop

emms
ret

;-----------------------------------------------------------
;
; BestColor_MMX
;
; Picks the closest matching color from a palette
;
; Passed FFRRGGBB and palette array in same format
; FF is the index of the first palette entry to consider
;
;-----------------------------------------------------------

GLOBAL BestColor_MMX
GLOBAL @BestColor_MMX@8

BestColor_MMX:
mov ecx,[esp+4]
mov edx,[esp+8]
@BestColor_MMX@8:
pxor mm0,mm0
movd mm1,ecx ; mm1 = color searching for
mov eax,257*257+257*257+257*257 ;eax = bestdist
push ebx
punpcklbw mm1,mm0
mov ebx,ecx ; ebx = best color
shr ecx,24 ; ecx = count
and ebx,0xffffff
push esi
push ebp

.loop movd mm2,[edx+ecx*4] ; mm2 = color considering now
inc ecx
punpcklbw mm2,mm0
movq mm3,mm1
psubsw mm3,mm2
pmullw mm3,mm3 ; mm3 = color distance squared

movd ebp,mm3 ; add the three components
psrlq mm3,32 ; into ebp to get the real
mov esi,ebp ; (squared) distance
shr esi,16
and ebp,0xffff
add ebp,esi
movd esi,mm3
add ebp,esi

jz .perf ; found a perfect match
cmp eax,ebp
jb .skip
mov eax,ebp
lea ebx,[ecx-1]
.skip cmp ecx,256
jne .loop
mov eax,ebx
pop ebp
pop esi
pop ebx
emms
ret

.perf lea eax,[ecx-1]
pop ebp
pop esi
pop ebx
emms
ret
