//===- README_X86_64.txt - Notes for X86-64 code gen ----------------------===//

AMD64 Optimization Manual 8.2 has some nice information about optimizing integer
multiplication by a constant. How much of it applies to Intel's X86-64
implementation? There are definite trade-offs to consider: latency vs. register
pressure vs. code size.
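
A sketch of the kind of strength reduction the manual discusses (the exact
decomposition below is illustrative, not taken from the manual): since
45 = 5 * 9, the multiply can be built from two lea-style add/shift steps
instead of an imul, trading code size and register pressure for latency.

int mul45(int x) {
  int t = (x << 2) + x;   /* t = x * 5 (one lea)          */
  return (t << 3) + t;    /* t * 9 = x * 45 (another lea) */
}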
//===---------------------------------------------------------------------===//

Are we better off using branches instead of cmove to implement FP to
unsigned i64?

_conv:
        ucomiss LC0(%rip), %xmm0
        cvttss2siq      %xmm0, %rdx
        jb      L3
        subss   LC0(%rip), %xmm0
        movabsq $-9223372036854775808, %rax
        cvttss2siq      %xmm0, %rdx
        xorq    %rax, %rdx
L3:
        movq    %rdx, %rax
        ret

instead of

_conv:
        movss   LCPI1_0(%rip), %xmm1
        cvttss2siq      %xmm0, %rcx
        movaps  %xmm0, %xmm2
        subss   %xmm1, %xmm2
        cvttss2siq      %xmm2, %rax
        movabsq $-9223372036854775808, %rdx
        xorq    %rdx, %rax
        ucomiss %xmm1, %xmm0
        cmovb   %rcx, %rax
        ret
The jb branch seems highly likely to be taken, and when it is, the branchy
version saves a few instructions.
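
For reference, a minimal C source that produces code of this shape (assuming
the symbol names above, where LC0 / LCPI1_0 would hold 2^63 as a float) is
simply a float to unsigned 64-bit conversion; values at or above 2^63 need the
subtract-and-xor fixup because cvttss2siq only covers the signed range:

unsigned long long conv(float x) {
  return (unsigned long long)x;
}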
//===---------------------------------------------------------------------===//

It's not possible to reference the AH, BH, CH, and DH registers in an
instruction requiring a REX prefix. However, divb and mulb both produce results
in AH. If isel emitted a CopyFromReg from AH, it would be turned into a movb
whose destination could be allocated to r8b - r15b, which needs a REX prefix
and therefore cannot encode AH as the source.

To get around this, isel emits a CopyFromReg from AX and then right shifts it
down by 8 and truncates it. It's not pretty but it works. We need some register
allocation magic to make the hack go away (e.g. putting additional constraints
on the result of the movb).
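
A tiny example (hypothetical function name) that runs into this: an 8-bit
remainder. When the backend selects divb for it, the quotient lands in AL and
the remainder in AH, which is exactly the value the restriction above makes
awkward to copy out.

unsigned char rem8(unsigned char a, unsigned char b) {
  return a % b;   /* with divb, the remainder is left in AH */
}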
//===---------------------------------------------------------------------===//

The x86-64 ABI for hidden-argument struct returns requires that the
incoming value of %rdi be copied into %rax by the callee upon return.
The idea is that it saves callers from having to remember this value,
which would often require a callee-saved register. Callees usually
need to keep this value live for most of their body anyway, so it
doesn't add a significant burden on them.

We currently implement this in codegen; however, this is suboptimal
because it makes it quite awkward to implement the corresponding
optimization for callers.

A better implementation would be to relax the LLVM IR rules for sret
arguments to allow a function with an sret argument to have a non-void
return type, and to have the front-end set up the sret argument value
as the return value of the function. The front-end could then emit
uses of the returned struct value in terms of the function's lowered
return value, and it would free non-C frontends from a complication
only required by a C-based ABI.
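
A short illustration of the convention (the struct and function names are made
up): the caller passes a hidden pointer to the result object in %rdi, and the
callee hands that same pointer back in %rax so the caller does not have to keep
it live across the call.

struct big { long a, b, c, d; };

struct big make_big(long v) {
  /* lowered roughly as: void make_big(struct big *sret, long v),
     with the callee copying %rdi into %rax before returning */
  struct big r = { v, v, v, v };
  return r;
}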
//===---------------------------------------------------------------------===//

We get a redundant zero extension for code like this:

int mask[1000];
int foo(unsigned x) {
  if (x < 10)
    x = x * 45;
  else
    x = x * 78;
  return mask[x];
}

_foo:
LBB1_0: ## entry
        cmpl    $9, %edi
        jbe     LBB1_3  ## bb
LBB1_1: ## bb1
        imull   $78, %edi, %eax
LBB1_2: ## bb2
        movl    %eax, %eax      <----
        movq    _mask@GOTPCREL(%rip), %rcx
        movl    (%rcx,%rax,4), %eax
        ret
LBB1_3: ## bb
        imull   $45, %edi, %eax
        jmp     LBB1_2  ## bb2

Before regalloc, we have:

        %reg1025 = IMUL32rri8 %reg1024, 45, implicit-def %eflags
        JMP mbb<bb2,0x203afb0>
    Successors according to CFG: 0x203afb0 (#3)

bb1: 0x203af60, LLVM BB @0x1e02310, ID#2:
    Predecessors according to CFG: 0x203aec0 (#0)
        %reg1026 = IMUL32rri8 %reg1024, 78, implicit-def %eflags
    Successors according to CFG: 0x203afb0 (#3)

bb2: 0x203afb0, LLVM BB @0x1e02340, ID#3:
    Predecessors according to CFG: 0x203af10 (#1) 0x203af60 (#2)
        %reg1027 = PHI %reg1025, mbb<bb,0x203af10>,
                       %reg1026, mbb<bb1,0x203af60>
        %reg1029 = MOVZX64rr32 %reg1027
so we'd have to know that IMUL32rri8 leaves the upper 32 bits of the
destination zeroed and be able to recognize that zero extend. This could also
presumably be implemented if we had whole-function selectiondags.
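
A minimal illustration of the hardware fact being relied on (the helper name is
made up): any 32-bit operation on x86-64, including the imull above, already
clears bits 63:32 of its destination, so the extra movl %eax, %eax is pure
overhead.

unsigned long widen(unsigned x) {
  return (unsigned long)(x * 45u);  /* imull result is already zero-extended */
}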
//===---------------------------------------------------------------------===//

Take the following code
(from http://gcc.gnu.org/bugzilla/show_bug.cgi?id=34653):

extern unsigned long table[];

unsigned long foo(unsigned char *p) {
  unsigned long tag = *p;
  return table[tag >> 4] + table[tag & 0xf];
}

Current code generated:

        movzbl  (%rdi), %eax
        movq    %rax, %rcx
        andq    $240, %rcx
        shrq    %rcx
        andq    $15, %rax
        movq    table(,%rax,8), %rax
        addq    table(%rcx), %rax
        ret

Issues:
1. First movq should be movl; saves a byte.
2. Both andq's should be andl; saves another two bytes. I think this was
   implemented at one point, but subsequently regressed.
3. shrq should be shrl; saves another byte.
4. The first andq can be completely eliminated by using a slightly more
   expensive addressing mode.
//===---------------------------------------------------------------------===//

Consider the following (contrived testcase, but contains common factors):

#include <stdarg.h>
int test(int x, ...) {
  int sum = 0, i;
  va_list l;
  va_start(l, x);
  for (i = 0; i < x; i++)
    sum += va_arg(l, int);
  va_end(l);
  return sum;
}

The testcase is given in C because fixing it will likely involve changing the
IR generated for it. The primary issue with the result is that it doesn't do
any of the optimizations which are possible if we know the address of a va_list
in the current function is never taken:
1. We shouldn't spill the XMM registers because we only call va_arg with "int".
2. It would be nice if we could sroa the va_list.
3. Probably overkill, but it'd be cool if we could peel off the first five
   iterations of the loop.

Other optimizations for functions which use va_arg on floats and don't have the
address of a va_list taken (a mirrored testcase is sketched below):
1. Conversely to the above, we shouldn't spill the general-purpose registers if
   we only call va_arg on "double".
2. If we know nothing wider than 64 bits is read from the XMM registers, we can
   change the spilling code to reduce the amount of stack used by half.
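
A mirrored testcase for the float side (hypothetical, not from the original
note): only doubles are read, so the general-purpose argument registers would
not need to be spilled into the register save area.

#include <stdarg.h>
double test_fp(int x, ...) {
  double sum = 0;
  va_list l;
  va_start(l, x);
  for (int i = 0; i < x; i++)
    sum += va_arg(l, double);
  va_end(l);
  return sum;
}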
//===---------------------------------------------------------------------===//