Vector type LOAD/STORE with post-increment.


Vector type LOAD/STORE with post-increment.

Francois Pichet
I am trying to implement vector-type load/store with post-increment for an out-of-tree backend.
I see that ARM NEON supports such loads/stores, so I am using ARM NEON as an example of what to do.

The problem is I can't get any C or C++ code example to actually generate a vector load/store with post-increment.

I am talking about something like this:
     vldr    d16, [sp, #8]

Does anybody know of a C/C++ code example that will generate such code (especially in a loop)? Is this supported by the auto-vectorizer?
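For the record, this is the kind of loop I have been feeding the compiler (the function and names are just an illustration, not from any real code); I hoped the sequential pointer advance would become a post-incremented vector load after vectorization:

```c
#include <stddef.h>

/* Element-wise add: each iteration reads the next elements in sequence,
   which is exactly the access pattern a post-indexed vector load covers. */
void vadd(float *restrict dst, const float *restrict a,
          const float *restrict b, size_t n) {
    for (size_t i = 0; i < n; ++i)
        dst[i] = a[i] + b[i];
}
```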

Thanks.

_______________________________________________
LLVM Developers mailing list
[hidden email]         http://llvm.cs.uiuc.edu
http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev

Re: Vector type LOAD/STORE with post-increment.

Renato Golin
On 19 June 2013 11:32, Francois Pichet <[hidden email]> wrote:
> I am talking about something like this:
>      vldr    d16, [sp, #8]

Hi Francois,

This is just using an offset, not updating the register (see ARM ARM, section A8.5). Post-increment only has meaning if you write back the new value to the base register, like:

  vldr  d16, [sp], #8

Did you mean write-back? or just offset?
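In C pointer terms, the two addressing forms behave roughly like this (purely illustrative, nothing NEON-specific; #8 is 8 bytes, i.e. two floats):

```c
/* Offset addressing: read at base+offset, base pointer unchanged. */
float load_offset(const float *base) {
    return base[2];          /* like: vldr d16, [sp, #8] */
}

/* Post-indexed with write-back: read at base, then advance base. */
float load_post_indexed(const float **base) {
    float v = **base;        /* like: vldr d16, [sp], #8 */
    *base += 2;              /* write-back of the updated address */
    return v;
}
```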


> Does anybody know any C/C++ code example that will generate such code (especially loop)? Is this supported by the auto-vectorizer?

It's not simple to go from a loop in C++ to a specific instruction in machine code, especially when the vectorizer is involved. Today you can generate a post-indexed load, tomorrow a pre-indexed load, and the next day a simple offset, all depending on how the IR is constructed, changed and lowered, all of which change daily.

The quickest and surest way to generate NEON instructions is with NEON intrinsics, but even so, LLVM is allowed to twist your code to generate better instructions than you thought possible. You can try to create IR that generates post-indexed VLDRs on ARM, but there is no guarantee it will generate the same thing on any other backend.
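For instance, something like this (a sketch; the preprocessor guard lets it build anywhere, and only the __ARM_NEON branch uses intrinsics) gives the backend explicit vector loads to work with, though whether they become post-indexed is still up to lowering:

```c
#include <stddef.h>
#if defined(__ARM_NEON)
#include <arm_neon.h>
#endif

/* Scale an array in place, four floats at a time when NEON is available. */
void scale4(float *p, float s, size_t n) {
    size_t i = 0;
#if defined(__ARM_NEON)
    float32x4_t vs = vdupq_n_f32(s);
    for (; i + 4 <= n; i += 4) {
        float32x4_t v = vld1q_f32(p + i);   /* candidate for a post-indexed load */
        vst1q_f32(p + i, vmulq_f32(v, vs)); /* ...and store */
    }
#endif
    for (; i < n; ++i)                      /* scalar tail (or all of it off-NEON) */
        p[i] *= s;
}
```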

Code that generates post-indexed loads on ARM might generate a completely different instruction on Intel, or on your backend. What you have to do is understand the patterns that vectorized code has in IR, possibly copy what other backends do to lower such IR, and make sure your backend uses the post-indexed load for the cases you care about.

Makes sense?

cheers,
--renato


Re: Vector type LOAD/STORE with post-increment.

Francois Pichet



On Wed, Jun 19, 2013 at 7:29 AM, Renato Golin <[hidden email]> wrote:
> Did you mean write-back? or just offset?

Yes, I mean write-back, like:
 vldr  d16, [sp], #8

> Makes sense?

Yes, it makes sense. It is just not trivial to test whether post-increment formation works when the input is C/C++ code. I was wondering whether there was some simple C/C++ snippet that would normally generate such a post-incremented load for vectors. Apparently not.




Re: Vector type LOAD/STORE with post-increment.

Hal Finkel
In reply to this post by Francois Pichet

To add slightly to Renato's answer...

Auto-vectorization, which is an IR-level pass, happens well before pre/post-increment formation (which happens during DAGCombine). Whether or not vector loads/stores are eligible for post-increment formation depends on your calling setIndexedLoadAction(ISD::POST_INC, ...) (and setIndexedStoreAction for stores) and on how you've implemented <Target>TargetLowering::getPostIndexedAddressParts.
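Roughly, the hooks look like this (a sketch only, not buildable on its own: the MyTarget names are placeholders, the callback body is elided, and ARMISelLowering.cpp has a complete real implementation):

```cpp
// In the out-of-tree target's TargetLowering constructor: mark post-indexed
// vector loads/stores as legal so DAGCombine will even attempt the fold.
setIndexedLoadAction(ISD::POST_INC, MVT::v4f32, Legal);
setIndexedStoreAction(ISD::POST_INC, MVT::v4f32, Legal);

// DAGCombine then calls this hook to ask whether a "load/store + base
// update" pair can be folded into a single post-indexed access.
bool MyTargetTargetLowering::getPostIndexedAddressParts(
    SDNode *N, SDNode *Op, SDValue &Base, SDValue &Offset,
    ISD::MemIndexedMode &AM, SelectionDAG &DAG) const {
  // Accept only base/offset combinations the ISA can actually encode.
  ...
}
```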

 -Hal


--
Hal Finkel
Assistant Computational Scientist
Leadership Computing Facility
Argonne National Laboratory