[llvm-dev] [RFC][VECLIB] how should we legalize VECLIB calls?

Naoki Shibata via llvm-dev llvm-dev at lists.llvm.org
Sat Oct 13 22:15:02 PDT 2018


Hello Renato,

On 10/14/2018 12:05 AM, Renato Golin wrote:

> Hi Naoki,
>
> I'll try to keep it short, as this is not the most important part of this thread. If that's too short, I'll be glad to chat in private.
>
> On Sat, 13 Oct 2018 at 09:20, Naoki Shibata <n-sibata at is.naist.jp> wrote:
>> What kind of a standardization process are you talking about? As a developer of SLEEF, I am rather trying to know what is actually needed by the developers of compilers. I am also trying to come up with a new feature with which I can write a paper.
>
> I meant ABI standards. An official document, written by the authors of the library (company, community, developer), specifying things like function names and what they do, argument types and how they expand, how errors are handled, special registers used (if any), macros that control behaviour, macros that are defined by the library, etc.
>
> This is the important part for compiler writers, not necessarily for users. End users of the compiler do not care at all what the ABIs are; they want their code compiled, correct results and fast execution. Users of your library won't care much either: if you change the names or mandatory arguments or even internal behaviour, they'll adapt to the new model. Most users only use one version of each library anyway.
>
> But when embedding the behaviour of your library (alongside all other similar libraries) in the compiler, and you change the behaviour in the new version, the compiler now has to be compatible with two versions. Furthermore, if you don't follow the same behaviour (as you would have if there were an official ABI document), then we'd only notice you changed when our users start complaining that we are breaking their code.
>
> A good example of an official ABI document is what ARM publishes for their architecture: http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.subset.swdev.abi/index.html
>
> But there are a lot of documents in there, and that's not what I'm asking. Something like the NEON intrinsics list would be a good start: http://infocenter.arm.com/help/topic/com.arm.doc.ihi0073b/IHI0073B_arm_neon_intrinsics_ref.pdf
>
> But it would be better with a short explanation of what the function does, what the arguments are and what results are returned.

Your stance seems to be that it is the compiler's responsibility to adapt to changes in the math libraries, but is it fair to say that? I would say there should be a standard for how a vector math library is implemented. Then the compiler can simply assume that the library is implemented in conformance with that standard. It would be even better if there were a conformance testing tool; a rough sketch of what such a tool could check follows.
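A minimal sketch of such a check, assuming the hypothetical standard fixes a maximum ULP error. As a stand-in for calling a real vector library's entry points, it tests libm's sinf against a double-precision reference:

  #include <inttypes.h>
  #include <math.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  /* Map a float's sign-magnitude bit pattern onto a monotonic integer
     line, so the ULP distance between two floats is a subtraction. */
  static int64_t ordered(float f) {
      int32_t i;
      memcpy(&i, &f, sizeof i);
      return i >= 0 ? (int64_t)i : (int64_t)INT32_MIN - i;
  }

  int main(void) {
      const int64_t max_ulp = 2;  /* hypothetical bound the standard could fix */
      int64_t worst = 0;
      for (float x = 0.0f; x < 6.3f; x += 1e-4f) {
          float got = sinf(x);                /* function under test */
          float ref = (float)sin((double)x);  /* higher-precision reference */
          int64_t err = llabs(ordered(got) - ordered(ref));
          if (err > worst) worst = err;
      }
      printf("worst error: %" PRId64 " ULP (limit: %" PRId64 ")\n", worst, max_ulp);
      return worst > max_ulp;  /* nonzero exit = non-conforming */
  }

A real tool would of course sweep the whole input domain, including infinities, NaNs and denormals, and would call each vector entry point lane by lane rather than a scalar stand-in.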

In order to make the compiler compliant with the C standard, the standard library also needs to be compliant with the C standard. Documenting what is assumed by the compiler should not be too hard, since there are already documents for the C standard library and for the Vector ABI.
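The individual entries in such a document can also be quite compact. A sketch of what it would pin down for one function, using the mangled names of the published x86 vector function ABI; whether a particular library actually exports these names is an assumption, not a statement about SLEEF or SVML:

  #include <immintrin.h>

  /* Scalar entry point, semantics as specified by the C standard. */
  float sinf(float x);

  /* _ZGV<isa><mask><vlen><params>_<name>: 'b' = SSE, 'N' = unmasked,
     4 lanes, one vector argument. Applies sinf lane-wise, with the
     accuracy, errno and input-domain rules the document specifies. */
  __m128 _ZGVbN4v_sinf(__m128 x);

  /* Masked variant: lanes whose mask element is zero must not fault,
     and their results are unspecified (or preserved, as documented). */
  __m128 _ZGVbM4v_sinf(__m128 x, __m128 mask);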

>> The second thing we need to consider is the compiler's compliance to the standard. The troublesome thing is that libraries may not be fully compliant to the C standard. We need to think of accuracy, input domain, whether it produces consistent results, etc. The number of items can increase, and developers of different libraries may be seeing different demands.
>
> This is largely irrelevant to the topic of this thread. How you compile your library is up to you. This thread is about the expectation of what the entry points of the library are (functions, arguments) and the returned values and types, so that we can replace scalar functions (already checked by the front-end) with vector alternatives (not checked by anyone).
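To make concrete what is being replaced there: the substitution turns a scalar call that the front-end has fully type-checked into a call to a vector variant whose signature nobody verifies against the library. A sketch, reusing the hypothetical 4-lane variant declared above:

  #include <immintrin.h>
  #include <math.h>

  /* hypothetical 4-lane SSE variant, as in the declarations above */
  __m128 _ZGVbN4v_sinf(__m128 x);

  /* What the front-end type-checks: a plain scalar call. */
  void scalar(float *restrict out, const float *restrict in, int n) {
      for (int i = 0; i < n; ++i)
          out[i] = sinf(in[i]);
  }

  /* Roughly what the vectorizer emits after substituting the variant
     (n assumed to be a multiple of 4 for brevity). Nothing re-checks
     the vector callee's signature against the library itself. */
  void vectorized(float *restrict out, const float *restrict in, int n) {
      for (int i = 0; i < n; i += 4) {
          __m128 v = _mm_loadu_ps(&in[i]);
          _mm_storeu_ps(&out[i], _ZGVbN4v_sinf(v));
      }
  }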

I am bringing this up because you said:

> One thing is to produce slow code because you didn't do enough, another is because you did too much. People often understand the former, not usually the latter.

The problem is that there are things that are not visible to the current compiler: accuracy, input domain and performance. If the compiled program does not deliver the accuracy it was designed for, that will also upset users.

If we want the compiler to choose the fastest functions in the math libraries, we need at least a way to express how much performance each function in a library provides. This is not trivial, and the compiler would also need additional code for processing these figures. The merit of LTO is that we can avoid the problem of expressing performance altogether: since the compiler can see through everything in the library, the existing optimization passes can be used without changes.
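To illustrate that point: if the library is shipped as bitcode, the optimizer can inline and cost-model the callee's body instead of consulting a declared performance figure. A minimal sketch with hypothetical file names and build lines:

  /* Build (hypothetical):
       clang -O2 -flto -c mylib.c app.c
       clang -O2 -flto mylib.o app.o
     With -flto the callee body below is visible at link time. */

  typedef float v4sf __attribute__((vector_size(16)));

  /* mylib.c: a deliberately trivial stand-in for a vector math kernel */
  v4sf vscale4(v4sf x) { return x + x; }

  /* app.c: under LTO, vscale4 can be inlined and costed like any other
     IR, so no separate performance metadata for it is needed */
  v4sf caller(v4sf x) { return vscale4(vscale4(x)); }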

For accuracy, the only thing we can do is make assumptions. The easiest assumption is that the functions in a vector math library conform to the ANSI C standard and the Vector ABI.
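One concrete way that assumption is already spelled in code is OpenMP's declare simd, from which the Vector Function ABI documents derive the mangled variant names the compiler may call. The function name below is hypothetical:

  /* Promise: 4-lane vector variants of my_kernel exist and are never
     called under a mask; the Vector ABI mangling (e.g. _ZGVbN4v_my_kernel
     on x86 SSE) follows from these clauses. */
  #pragma omp declare simd simdlen(4) notinbranch
  float my_kernel(float x);

  void apply(float *restrict out, const float *restrict in, int n) {
      #pragma omp simd
      for (int i = 0; i < n; ++i)
          out[i] = my_kernel(in[i]);  /* may be widened to the declared variant */
  }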

>> Another thing I want to know is how much compliance to the Vector Function ABI is needed. I know Arm is keen on supporting this ABI, but how about Intel? Is there a possibility that SVML will comply with the Vector Function ABI in the near future?
>
> That's a good question, and it is mostly up to all of us to make sure that works in the future. If we all have clear expectations (and an official document goes a long way in providing that), then we'll all have a much easier job.
>
> Hope this helps.

Naoki Shibata


