Optimization feasibility


Optimization feasibility

Joachim Durchholz
Hi all,

I'm in a very preliminary phase of a language project which requires
some specific optimizations to be reasonably efficient.

LLVM already looks very good; I'd just like to know whether I can push
these optimizations through LLVM to the JIT phase (which, as far as I
understand the docs, is a pretty powerful part of LLVM).

The optimizations that I need to get to work are:

* Tail call elimination.
* Constant evaluation. To implement this, the JIT phase would have to
   evaluate the constant and somehow store it so that future runs don't
   need to reevaluate it.
* Dead code elimination, enabled by constant evaluation.
* Monomorphisation, i.e. constant evaluation may establish that some
   data structures aren't polymorphic, so it would be worth generating
   code that keeps integers in registers instead of generating boxed
   representations on the heap. Again, constant evaluation can enable
   this optimization.

I'll be happy to know the answer for each optimization, whether it's
"yes", "not yet", or a flat "no".
The answers probably won't affect whether I use LLVM. They will,
however, tell me how much work I can shove off to LLVM ;-)

Regards,
Jo
_______________________________________________
LLVM Developers mailing list
[hidden email]         http://llvm.cs.uiuc.edu
http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev

Re: Optimization feasibility

Gordon Henriksen-3
Hi Jo,

On 2007-12-24, at 14:43, Joachim Durchholz wrote:

> I'm in a very preliminary phase of a language project which requires  
> some specific optimizations to be reasonably efficient.
>
> LLVM already looks very good; I'd just like to know whether I can  
> push these optimizations through LLVM to the JIT phase (which, as  
> far as I understand the docs, is a pretty powerful part of LLVM).

Cool.

> The optimizations that I need to get to work are:
>
> * Tail call elimination.

LLVM already has tail call elimination (i.e., it will transform direct  
recursion into iteration). It also supports emitting tail calls on  
x86, but its support is somewhat weak. This is partially mandated by  
calling conventions, but those implementing functional languages might  
be disappointed. Check the llvmdev archives for details.

> * Constant evaluation. To implement this, the JIT phase would have  
> to evaluate the constant and somehow store it so that future runs  
> don't need to reevaluate it.
> * Dead code elimination, enabled by constant evaluation.

LLVM has both constant propagation and dead code elimination, but if  
your language has a high-level functional concept of "constant" here  
(or you're referring to memoization), your runtime may need to help  
LLVM out.

> * Monomorphisation, i.e. constant evaluation may establish that some  
> data structures aren't polymorphic, so it would be worth generating  
> code that keeps integers in registers instead of generating boxed  
> representations on the heap. Again, constant evaluation can enable  
> this optimization.

LLVM will not rearrange your high-level data structures. If your
runtime finds a more efficient representation, it can modify the IR
and use LLVM's JIT to recompile a more specialized function, though.

— Gordon



Re: Optimization feasibility

Arnold Schwaighofer
On 25 Dec 2007, at 03:29, Gordon Henriksen wrote:

> Hi Jo,
>
> On 2007-12-24, at 14:43, Joachim Durchholz wrote:
>
>> I'm in a very preliminary phase of a language project which requires
>> some specific optimizations to be reasonably efficient.
>>
>> LLVM already looks very good; I'd just like to know whether I can
>> push these optimizations through LLVM to the JIT phase (which, as
>> far as I understand the docs, is a pretty powerful part of LLVM).
>
> Cool.
>
>> The optimizations that I need to get to work are:
>>
>> * Tail call elimination.

> It also supports emitting tail calls on
> x86, but its support is somewhat weak. This is partially mandated by
> calling conventions, but those implementing functional languages might
> be disappointed. Check the llvmdev archives for details.
>
Hi Joachim,
I am the person to blame for tail call support and its deficiencies
on x86.

The current constraints for tail calls on x86 are:

* at most 2 registers are used for argument passing (inreg)

Tail call optimization is performed provided:

* the option tailcallopt is enabled
* caller and callee are fastcc
* elf/pic is disabled (this should be the case on mac os x?) OR
* elf/pic is enabled + callee is in the same module as the caller +
  callee has visibility protected or hidden


A (pointless) example would be:


<<---tailcall.ll --->>
@.str = internal constant [12 x i8] c"result: %d\0A\00" ; <[12 x i8]*> [#uses=1]

define fastcc i32 @tailcallee(i32 %a1, i32 %a2, i32 %a3, i32 %a4) {
entry:
        ret i32 %a3
}

define fastcc i32 @tailcaller(i32 %in1, i32 %in2) {
entry:
        %tmp11 = tail call fastcc i32 @tailcallee( i32 %in1, i32 %in2, i32 %in1, i32 %in2 ) ; <i32> [#uses=1]
        ret i32 %tmp11
}

define i32 @main(i32 %argc, i8** %argv) {
entry:
        %argc_addr = alloca i32 ; <i32*> [#uses=1]
        %argv_addr = alloca i8** ; <i8***> [#uses=1]
        %retval = alloca i32, align 4 ; <i32*> [#uses=2]
        %tmp = alloca i32, align 4 ; <i32*> [#uses=2]
        %res = alloca i32, align 4 ; <i32*> [#uses=2]
        %"alloca point" = bitcast i32 0 to i32 ; <i32> [#uses=0]
        store i32 %argc, i32* %argc_addr
        store i8** %argv, i8*** %argv_addr
        %tmp1 = call fastcc i32 @tailcaller( i32 1, i32 2 ) ; <i32> [#uses=1]
        store i32 %tmp1, i32* %res
        %tmp2 = getelementptr [12 x i8]* @.str, i32 0, i32 0 ; <i8*> [#uses=1]
        %tmp3 = load i32* %res ; <i32> [#uses=1]
        %tmp4 = call i32 (i8*, ...)* @printf( i8* %tmp2, i32 %tmp3 ) ; <i32> [#uses=0]
        store i32 0, i32* %tmp
        %tmp5 = load i32* %tmp ; <i32> [#uses=1]
        store i32 %tmp5, i32* %retval
        br label %return

return: ; preds = %entry
        %retval6 = load i32* %retval ; <i32> [#uses=1]
        ret i32 %retval6
}

declare i32 @printf(i8*, ...)
<<---tailcall.ll --->>

x86Shell:> llvm-as < tailcall.ll | llc -tailcallopt | gcc -x assembler -
x86Shell:> ./a.out

If you have any questions regarding the tail call stuff, I would be
happy to help.

regards arnold


Re: Optimization feasibility

Joachim Durchholz
Gordon Henriksen schrieb:

> Hi Jo,
>
> On 2007-12-24, at 14:43, Joachim Durchholz wrote:
>
>> I'm in a very preliminary phase of a language project which requires  
>> some specific optimizations to be reasonably efficient.
>>
>> LLVM already looks very good; I'd just like to know whether I can  
>> push these optimizations through LLVM to the JIT phase (which, as  
>> far as I understand the docs, is a pretty powerful part of LLVM).
>
> Cool.
>
>> The optimizations that I need to get to work are:
>>
>> * Tail call elimination.
>
> LLVM already has tail call elimination (i.e., it will transform direct  
> recursion into iteration). It also supports emitting tail calls on  
> x86, but its support is somewhat weak. This is partially mandated by  
> calling conventions, but those implementing functional languages might  
> be disappointed. Check the llvmdev archives for details.

I'm not too picky. Unless there are structural reasons that tail call
support is weak, I think this can be improved if and when the need comes.

>> * Constant evaluation. To implement this, the JIT phase would have  
>> to evaluate the constant and somehow store it so that future runs  
>> don't need to reevaluate it.
>> * Dead code elimination, enabled by constant evaluation.
>
> LLVM has both constant propagation and dead code elimination, but if  
> your language has a high-level functional concept of "constant" here  
> (or you're referring to memoization), your runtime may need to help  
> LLVM out.

If LLVM *can* be helped out, that's enough.

>> * Monomorphisation, i.e. constant evaluation may establish that some  
>> data structures aren't polymorphic, so it would be worth generating  
>> code that keeps integers in registers instead of generating boxed  
>> representations on the heap. Again, constant evaluation can enable  
>> this optimization.
>
> LLVM will not rearrange your high-level data structures.

Of course not. LLVM is supposed to be low-level after all :-)

> If your runtime finds a more efficient representation, it can modify
> the IR and use LLVM's JIT to recompile a more specialized function, though.

Fair enough. Self-modifying code isn't exactly what will make virus
scanners and the like happy, but I doubt there's a good way for caching
JIT results without making them unhappy, so there...

Anyway, as I said: LLVM is a good candidate, and I think I have at least
a rough idea of how much leverage it will give me.
I'll be back when I have more to report.
And, of course, as questions arise :-)

Regards,
Jo


Re: Optimization feasibility

Joachim Durchholz
Hi Arnold,

Arnold Schwaighofer schrieb:

> On 25 Dec 2007, at 03:29, Gordon Henriksen wrote:
>
>> It also supports emitting tail calls on
>> x86, but its support is somewhat weak. This is partially mandated by
>> calling conventions, but those implementing functional languages might
>> be disappointed. Check the llvmdev archives for details.
>>
> Hi Joachim,
> I am  the person to blame for tail call support and its deficiencies  
> on x86.
>
> The current constraints for tail calls on x86 are:
>
> Max 2 registers are used for argument passing (inreg).

On a register-starved architecture like the x86, that's not too serious
a problem.

 > Tail call optimization is performed provided:
> * option tailcallopt is enabled

OK.

> * caller/callee are fastcc

Not sure what that means - I still have to dig into the docs.

> * elf/pic is disabled (this should be the case on mac os x?) OR

Dunno, I don't have (or want) a Mac. (They are fine machines and all,
but too expensive for my budget.)

> * elf/pic enabled + callee is in the same module as caller + callee has
>   visibility protected or hidden

Well, I'm not sure whether I will need or want to work with LLVM
modules, so I don't know whether that will become a problem or not.

The audience I have in mind would mostly use Intel 32-bit; however, it
would probably be a good idea to cover the 64-bit variants as well,
since 32-bit will be dead or dying when (if) my project gets traction.
In other words, I'll probably want to stay away from
architecture-specific optimizations, unless they are really, really
important, cover a really, really large part of the user base, and
really, really don't place architectural constraints on the rest of the
system.

> If you have any questions regarding the tail call stuff, I would be
> happy to help.

Not at the moment, no. My priorities are more on the frontend side right
now; after that, I'll see how much interest the thing will generate, and
*then* I'll start to look into optimization possibilities.
Besides, most of the project is just vaporware in my head right now. I
wouldn't even know which questions to ask yet.

The one answer I really needed to know was whether the LLVM community is
interested in this kind of problem. The response has convinced me that
this is the case, so I'll use LLVM as a foundation.

Regards,
Jo


Re: Optimization feasibility

Evan Cheng-2

On Dec 25, 2007, at 9:07 AM, Arnold Schwaighofer wrote:

> Hi Joachim,
> I am  the person to blame for tail call support and its deficiencies
> on x86.

Hi Arnold,

Speaking of tail call support... Do you have any plan to enhance it in  
the near future?

Thanks,

Evan



Re: Optimization feasibility

Arnold Schwaighofer
On 2 Jan 2008, at 20:23, Evan Cheng wrote:
> Hi Arnold,
>
> Speaking of tail call support... Do you have any plan to enhance it in
> the near future?

I was thinking about improving the way arguments are lowered (see the
readme.txt in the Target/X86 directory). But at the moment I am quite
busy finishing my studies, so probably not in the near future (e.g.
before the upcoming LLVM 2.2 release). But if there is immediate need,
I could be motivated to do it sooner.

regards arnold

Re: Optimization feasibility

Evan Cheng-2

On Jan 3, 2008, at 7:45 AM, Arnold Schwaighofer wrote:

> On 2 Jan 2008, at 20:23, Evan Cheng wrote:
>> Hi Arnold,
>>
>> Speaking of tail call support... Do you have any plan to enhance it  
>> in
>> the near future?
>
> I was thinking about improving the way arguments are lowered (see the
> readme.txt in the Target/X86 directory). But at the moment I am quite
> busy finishing my studies, so probably not in the near future (e.g.
> before the upcoming LLVM 2.2 release). But if there is immediate need,
> I could be motivated to do it sooner.

There isn't any real urgent need. Take your time.

Thanks,

Evan

