Yes. Since all operations on AMX data can only use AMX instructions, we use the x86_amx type in the middle end and back end to separate them from normal LLVM IR instructions.
So, to come back to the beginning:
I think it is OK to use the x86_amx type in the middle end.
From: llvm-dev On Behalf Of Luo, Yuanke via llvm-dev
Sent: Thursday, March 18, 2021 9:30 PM
To: James Y Knight
Cc: llvm-dev
Subject: Re: [llvm-dev] Does middle-end pass need to consider some special type when doing optimization? Or letting back-end to revert the optimization accordingly?
But x86_amx represents a tile. The semantics of the hardware instruction tileloadd is something like `llvm.matrix.row.major.load`. How do we lower `%v = load x86_amx, x86_amx* %ptr` to tileloadd?
From: James Y Knight <jyknight@google.com>
Sent: Thursday, March 18, 2021 9:09 PM
To: Luo, Yuanke <yuanke.luo@intel.com>
Cc: Florian Hahn <florian_hahn@apple.com>; Wang, Pengfei <pengfei.wang@intel.com>; llvm-dev <llvm-dev@lists.llvm.org>
Subject: Re: [llvm-dev] Does middle-end pass need to consider some special type when doing optimization? Or letting back-end to revert the optimization accordingly?
Since the x86_amx type has a fixed size of 1024, I would expect `%v = load x86_amx, x86_amx* %ptr` to load 1024 bytes of contiguous memory starting at %ptr -- I don't see why this should be invalid?
On Thu, Mar 18, 2021 at 8:53 AM Luo, Yuanke <yuanke.luo@intel.com> wrote:
I mean transforming from "load <256 x i32>*" to "load x86_amx*" is not valid, because x86_amx represents a tile, and "load x86_amx*" doesn't express its semantics without a stride. Now it looks to me like "load x86_amx*" itself is invalid.
From: James Y Knight <jyknight@google.com>
Sent: Thursday, March 18, 2021 8:41 PM
To: Luo, Yuanke <yuanke.luo@intel.com>
Cc: Florian Hahn <florian_hahn@apple.com>; Wang, Pengfei <pengfei.wang@intel.com>; llvm-dev <llvm-dev@lists.llvm.org>
Subject: Re: [llvm-dev] Does middle-end pass need to consider some special type when doing optimization? Or letting back-end to revert the optimization accordingly?
Err... are you saying this is the expected semantics of a "load x86_amx" operation today? That doesn't make much sense... Surely a strided-load operation should be spelled `llvm.matrix.column.major.load` in the IR, not `load`?
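For reference, a strided tile load spelled with that intrinsic would look roughly like the sketch below; the 16x16 shape and the exact name mangling here are just illustrative, following the LangRef description of the matrix intrinsics:
  declare <256 x i32> @llvm.matrix.column.major.load.v256i32.i64(
              i32*, i64, i1 immarg, i32 immarg, i32 immarg)

  ; load a 16 x 16 tile of i32; %stride is the distance between the
  ; starts of consecutive columns, so gaps between them are expressible
  %tile = call <256 x i32> @llvm.matrix.column.major.load.v256i32.i64(
              i32* %ptr, i64 %stride, i1 false, i32 16, i32 16)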
On Thu, Mar 18, 2021 at 8:17 AM Luo, Yuanke via llvm-dev <llvm-dev@lists.llvm.org> wrote:
Thanks Florian. I agree with you that pointers to `x86_amx` have different semantics than regular LLVM pointer types. First, an x86_amx pointer points to a 2D tile of a big matrix. The data within each row is contiguous, but the data across consecutive rows is not contiguous in memory. (The picture in the original message illustrates these x86_amx load semantics.) We need an additional stride operand to describe the distance between rows. So the semantics of "load <256 x i32>*" and "load x86_amx*" are different, because "load <256 x i32>*" assumes the memory is contiguous and loads a flat vector.
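To make that concrete, our prototype carries the shape and stride on a dedicated intrinsic rather than on `load`; a rough sketch, with operand names chosen only for illustration:
  declare x86_amx @llvm.x86.tileloadd64.internal(i16, i16, i8*, i64)

  ; loads a %row x %col tile; consecutive rows start %stride bytes apart
  %t = call x86_amx @llvm.x86.tileloadd64.internal(
           i16 %row, i16 %col, i8* %base, i64 %stride)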
You also mentioned that there is no documentation of x86_amx in the LangRef. I'd like to add x86_amx to the document. Is there any process for documenting a type?
Thanks
Yuanke
From: Florian Hahn <florian_hahn@apple.com>
Sent: Thursday, March 18, 2021 6:03 PM
To: Wang, Pengfei <pengfei.wang@intel.com>
Cc: llvm-dev <llvm-dev@lists.llvm.org>; Luo, Yuanke <yuanke.luo@intel.com>
Subject: Re: [llvm-dev] Does middle-end pass need to consider some special type when doing optimization? Or letting back-end to revert the optimization accordingly?
On Mar 17, 2021, at 10:11, Wang, Pengfei via llvm-dev <llvm-dev@lists.llvm.org> wrote:
Hi,
We are developing prototypes for the Intel Advanced Matrix Extensions (AMX) [1] programming model in Clang and LLVM [2].
We have met several cases where the new type we added is transformed unexpectedly in the middle end, e.g. when optimizing phi + bitcast + load:
From
  %a = load <256 x i32>, <256 x i32>* %mem, align 64
  ...
  %b = phi <256 x i32> [ %a, %label1 ], [ %someother, %label2 ]
  %c = bitcast <256 x i32> %b to x86_amx
To
  %a = bitcast <256 x i32>* %mem to x86_amx*
  %b = load x86_amx, x86_amx* %a, align 64
  ...
  %c = phi x86_amx [ %b, %label1 ], [ %someother, %label2 ]
To prevent such unexpected transforms, we added explicit type checks at each of the relevant optimization points.
Roman pointed out that these changes are not the right direction [3] and considered it a bug in the back end. While we agree the back end might be able to handle it for correctness, we think it is better to handle it in the middle end, since these are negative optimizations for AMX.
First, let me put some background here:
- x86_amx* is different from ordinary pointers.
The AMX load instruction is very different from other load instructions. It needs not only the memory address but also the shape and stride of the tile register. We do some extra work in the back end to deduce the shape information from the context. We don't want passes to introduce new x86_amx uses, because that makes the deduction harder. That is, bitcasting other pointer types to x86_amx* is not as trivial as assumed here.
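As a rough sketch of how the deduction works (operand names are illustrative), the shape operands are read off the defining intrinsics of each tile value:
  %a = call x86_amx @llvm.x86.tileloadd64.internal(i16 %m, i16 %k, i8* %pa, i64 %sa)
  %b = call x86_amx @llvm.x86.tileloadd64.internal(i16 %k, i16 %n, i8* %pb, i64 %sb)
  %c = call x86_amx @llvm.x86.tileloadd64.internal(i16 %m, i16 %n, i8* %pc, i64 %sc)
  ; the back end reads %m / %n / %k off the defining intrinsics above
  %d = call x86_amx @llvm.x86.tdpbssd.internal(i16 %m, i16 %n, i16 %k,
           x86_amx %c, x86_amx %a, x86_amx %b)
  ; a bare "load x86_amx" carries no shape operands, so it gives the
  ; back end nothing to deduce from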
The problem appears to be that this difference is not modeled or specified in LLVM IR AFAICT. The current LangRef does not appear to specify `x86_amx` to start with. If pointers to `x86_amx` have different semantics than regular LLVM pointer types, using regular LLVM pointer types for pointers to `x86_amx` may not be appropriate. I've not followed the previous AMX discussions closely, but it sounds like it may be good to reconsider how x86_amx pointers are modeled in LLVM IR.
Also note that `bitcast` is specified as a no-op (https://llvm.org/docs/LangRef.html#id293) (except for pointers with different address spaces), but from what you mentioned above this does not match the semantics of `x86_amx*`. It sounds like this is the underlying problem that should be addressed, because trying to update various middle-end optimizations to enforce the special semantics does not seem to be a scalable solution.
As Nuno mentioned, you could try to use a separate address space for `x86_amx` pointers to avoid pointer optimizations.
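A minimal sketch of that idea (7 is an arbitrary address space number): since `bitcast` may not change address spaces, the fold above could no longer be introduced silently:
  %p = bitcast <256 x i32>* %mem to x86_amx addrspace(7)*       ; rejected by the verifier
  %q = addrspacecast <256 x i32>* %mem to x86_amx addrspace(7)* ; has to be spelled explicitly,
                                                                ; and middle-end folds won't introduce it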
- The physical tile registers have more limitations.
- No copy instruction between tile registers.
- Spilling / reloading a tile register is expensive given its 1024-byte size.
- The shapes of tile registers need to be pre-configured before use, and all data in tile registers becomes invalid once they are re-configured. That means we want a single configure instruction to dominate as many tile register uses as possible, so that their shapes are set up together; otherwise we need to spill and reload the live registers whenever we re-configure (see the sketch after this list).
- The number of tile registers is rather small (only 8), and a register configured for one shape cannot be reused for a different shape.
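A rough sketch of the configuration constraint, assuming the `llvm.x86.ldtilecfg` intrinsic (which takes a pointer to a 64-byte configuration); the control flow here is only illustrative:
  call void @llvm.x86.ldtilecfg(i8* %cfg)    ; one config should dominate all tile uses below
  %a = call x86_amx @llvm.x86.tileloadd64.internal(i16 %m, i16 %k, i8* %pa, i64 %sa)
  ; ... all tile shapes used here must match what %cfg describes ...
  call void @llvm.x86.ldtilecfg(i8* %cfg2)   ; re-configuring invalidates all live tile data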
Given these limitations, we need to reduce the uses / live ranges of tile registers, but optimizations may increase them. So even though we could handle some combined operations on the AMX type, we still prefer to prevent them from being formed in the first place, unless we can completely roll back the optimization, which is also not a good solution in my opinion.
- For more information, please refer to the discussion in [3].
For other optimization points, please refer to [4][5].
I think the main controversy from Roman is whether middle-end passes should consider some special types when doing optimization. I tend to let the middle end do the type check on account of the peculiarity of the AMX type, but I'm not sure whether we have precedent for handling a similar issue for other targets. I'm open and glad to do it either way as long as we have an elegant solution.
Any suggestions are welcome.
IIUC the main problem is not that middle-end passes perform or don't perform optimizations based on certain types. To me it sounds like the actual problem is that pointers to `x86_amx` do not behave like regular LLVM IR pointers and you are trying to enforce extra restrictions on bitcasts.
Cheers,
Florian