LLD improvement plan

LLD improvement plan

Rui Ueyama

Hi guys,

After working on LLD for a long period of time, I think I have found a few things that we should improve in its design, for both ease of development and runtime performance. I would like to get feedback on this proposal. Thanks!

Problems with the current LLD architecture

The current LLD architecture has, in my opinion, two issues.

The atom model is not the best model for some architectures

The atom model makes sense only for Mach-O, but it’s used everywhere. I guess we originally expected to be able to model the linker’s behavior beautifully using the atom model, because the atom model seemed like a superset of the section model. Although it *can* model section-based linking, it turned out not to be a natural or efficient model for ELF or PE/COFF, where section-based linking is expected. On ELF and PE/COFF, sections are the units of atomic data. We divide a section into smaller “atoms” and then restore the original data layout later to preserve the section’s atomicity. That complicates the linker internals. It also slows down the linker because of the overhead of creating and manipulating atoms. In addition, since section-based linking is expected on these architectures, some linker features are defined in terms of sections. An example is “select largest section” in PE/COFF. In the atom model we don’t have a notion of sections at all, so we had to simulate such features using atoms in tricky ways.
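
To make the contrast concrete, here is a minimal C++ sketch of the two models (hypothetical types for illustration, not lld's actual classes). In the atom model, each named piece of data is its own indivisible unit and section-level attributes must be copied onto every atom; in the section model, the section is the indivisible unit and symbols are just named offsets into it:

#include <cstdint>
#include <string>
#include <vector>

// Atom model: one unit per name. The original section layout has to be
// reconstructed later to preserve the section's atomicity.
struct Atom {
  std::string name;          // exactly one name per atom
  std::vector<uint8_t> data; // the atom's bytes
  uint32_t sectionFlags;     // duplicated from the originating section
};

// Section model: nothing is split, so nothing has to be restored.
struct SectionSymbol {
  std::string name;
  uint64_t offset; // offset from the section start
};

struct Section {
  std::string name;                   // e.g. ".text"
  uint32_t flags;                     // stored once for the whole section
  std::vector<uint8_t> data;
  std::vector<SectionSymbol> symbols; // zero or more symbols per section
};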

One symbol resolution model doesn’t fit all

The symbol resolution semantics are not the same across the three architectures (ELF, Mach-O and PE/COFF), but we have only one “core” linker for symbol resolution. The core linker implements the Unix linker semantics: the linker visits one file at a time until all undefined symbols are resolved. For archive files with circular dependencies, you can group them to tell the linker to visit them more than once. This is not the only way to build a linker, and it is neither the simplest nor the fastest. It’s just that the Unix linker semantics were designed this way, and we all follow them for compatibility. For PE/COFF, the linker semantics are different. The order of files on the command line doesn’t matter. The linker scans all files first to create a map from symbols to files, and then uses the map to resolve all undefined symbols. The PE/COFF semantics are currently simulated using the Unix linker semantics and groups. That makes the linker inefficient because of the overhead of visiting archive files again and again. It also makes the code bloated and awkward. In short, we generalize too much, and we share too much code.
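
As a rough illustration of the difference (a sketch under simplifying assumptions; none of these names are lld's API), the two resolution strategies could look like this:

#include <map>
#include <set>
#include <string>
#include <vector>

struct InputFile {
  std::set<std::string> defined;   // symbols this file defines
  std::set<std::string> undefined; // symbols this file references
};

// Unix/ELF semantics: visit files strictly in command-line order. A file
// can only resolve symbols that are already pending when it is visited,
// which is why archive order matters and why groups exist for cycles.
// (A real linker pulls an archive member only if it satisfies a pending
// undefined symbol; that is omitted here.)
void resolveUnix(const std::vector<InputFile> &files) {
  std::set<std::string> defined, pending;
  for (const InputFile &f : files) {
    for (const std::string &s : f.defined) {
      defined.insert(s);
      pending.erase(s);
    }
    for (const std::string &s : f.undefined)
      if (!defined.count(s))
        pending.insert(s); // unresolved unless a *later* file defines it
  }
  // Anything left in `pending` is an undefined-symbol error.
}

// PE/COFF semantics: scan everything up front to build a symbol-to-file
// map, then resolve from the map. Command-line order is irrelevant.
void resolveCOFF(const std::vector<InputFile> &files) {
  std::map<std::string, const InputFile *> index;
  for (const InputFile &f : files)
    for (const std::string &s : f.defined)
      index.emplace(s, &f);
  for (const InputFile &f : files)
    for (const std::string &s : f.undefined)
      if (!index.count(s)) {
        // undefined-symbol error; otherwise the map gives the defining file
      }
}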

Proposal

  1. Re-architect the linker based on the section model where it’s appropriate.
  2. Stop simulating different linker semantics using the Unix model. Instead, directly implement the native behavior.
When it’s done, the atom model will be used only for Mach-O. The other two will be built on the section model. PE/COFF will have a different “core” linker than Unix’s. I expect this will simplify the design and also improve the linker’s performance (achieving better performance is probably the best way to convince people to try LLD).

I don’t think we can gradually move from the atom model to the section model, because atoms are everywhere. The two models are so different that we cannot mix them in one place. Although we can reuse the design and the outline of the existing code, this is going to be more like a major rewrite than an update. So I propose developing the section-based ports as new “ports” of LLD.

I plan to start with the PE/COFF port, because I’m familiar with the code base and the amount of code is smaller than the ELF port’s. Also, the fact that the ELF port is developed and maintained by many developers makes porting harder compared to PE/COFF, which is written and maintained only by me. Thus, I’m going to use PE/COFF as an experimental platform to see how it works. Here is a plan.
  1. Create a section-based PE/COFF linker backend as a new port.
  2. If everything is fine, do the same thing for ELF. We may want to move common code for a section-based linker out of the new PE/COFF port to share it with ELF.
  3. Move the library for the atom model to the sub-directory for the Mach-O port.
The resulting linker will share less code between ports. That’s not necessarily a bad thing -- we actually think it’s a good thing, because in order to share code we currently have too many workarounds. This change should fix the balance so that we get (1) shared code that can naturally be shared by multiple ports, and (2) simpler, faster code.
Work Estimation

It’s hard to tell, but I can probably create a PE/COFF linker in a few weeks that works reasonably well and is ready for code review as a first set of patches. I have already built a complete linker for Windows, so the hardest part (understanding it) is already done.
Once it’s done, I can get a better estimation for ELF.
Caveat

Why not define a section as an atom and keep using the atom model? If we did this, we would have to allow atoms to have more than one name. Each name would have an offset in the atom (to represent symbols whose offset from the section start is not zero). But we would still need to copy section attributes to each atom. The resulting model no longer looks like the atom model but like a mix of the atom model and the section model, and it comes with the cost of both designs. I think it’s too complicated.
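
A sketch of that hybrid (again with hypothetical types) makes the point: once an atom carries multiple (name, offset) pairs plus copied section attributes, it is structurally a section by another name.

#include <cstdint>
#include <string>
#include <vector>

// The "section as atom" compromise: an atom with several named offsets
// and duplicated section attributes. Compare it with a plain section --
// the two are essentially the same shape, so we would pay the costs of
// both models.
struct HybridAtom {
  struct Name {
    std::string name;
    uint64_t offset; // offset from the start of the atom
  };
  std::vector<Name> names; // no longer "one atom, one name"
  uint32_t sectionFlags;   // still copied from the input section
  std::vector<uint8_t> data;
};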
Notes
We want to make sure there are no existing LLD users who depend on the atom model for ELF, or, if there are such users, we want to come up with a transition path for them.


Re: LLD improvement plan

Michael Spencer
On Fri, May 1, 2015 at 12:31 PM, Rui Ueyama <[hidden email]> wrote:
> Caveat Why not define a section as an atom and keep using the atom model? If
> we do this, we would have to allow atoms to have more than one name. Each
> name would have an offset in the atom (to represent symbols whose offset
> from the section start is not zero). But still we need to copy section
> attributes to each atom. The resulting model no longer looks like the atom
> model, but a mix of the atom model and the section model, and that comes
> with the cost of both designs. I think it’s too complicated.

Rafael and I have been discussing this change recently. It makes atoms
actually atomic, and also splits out symbols, which has been needed.
The main reason I like this over each target having its own model is
because it gives us a common textual representation to write tests
with.

As for symbol resolution, it seems the actual problem is name lookup,
not the core resolver semantics.

I'd rather not end up with basically 3 separate linkers in lld.

- Michael Spencer


Re: LLD improvement plan

Rui Ueyama
On Fri, May 1, 2015 at 1:32 PM, Michael Spencer <[hidden email]> wrote:

> Rafael and I have been discussing this change recently. It makes atoms
> actually atomic, and also splits out symbols, which has been needed.
> The main reason I like this over each target having its own model is
> because it gives us a common textual representation to write tests
> with.

If you allow multiple symbols in one atom, is the new definition of atom different from section? If so, in what way?

> As for symbol resolution, it seems the actual problem is name lookup,
> not the core resolver semantics.

What's the difference between name lookup and the core resolver semantics?
 
> I'd rather not end up with basically 3 separate linkers in lld.

I basically agree. However, if you take a look at the code of the PE/COFF port, you'll find something weird here and there.


Re: LLD improvement plan

Michael Spencer
On Fri, May 1, 2015 at 1:42 PM, Rui Ueyama <[hidden email]> wrote:

> If you allow multiple symbols in one atom, is the new definition of atom
> different from section? If so, in what way?

It's pretty much the same. I read what you said as having a different
section representation for each target.

>
>> As for symbol resolution, it seems the actual problem is name lookup,
>> not the core resolver semantics.
>
>
> What's the difference between name lookup and the core resolver semantics?
>

Name lookup would be how it finds what symbols to consider for
resolving. The core resolver semantics is mostly
SymbolTable::addByName.

- Michael Spencer



Re: LLD improvement plan

Rui Ueyama
On Fri, May 1, 2015 at 2:10 PM, Michael Spencer <[hidden email]> wrote:
>> If you allow multiple symbols in one atom, is the new definition of atom
>> different from section? If so, in what way?
>
> It's pretty much the same. I read what you said as having a different
> section representation for each target.

No, the class definition of the section for PE/COFF and ELF would be the same. Also, if we can use it for Mach-O, we should use it there too. And if it's the same as a section, I'd call it a section rather than an atom.
 

>> What's the difference between name lookup and the core resolver semantics?
>
> Name lookup would be how it finds what symbols to consider for
> resolving. The core resolver semantics is mostly
> SymbolTable::addByName.

Yeah, how we resolve (conflicting) symbols is mostly the same, and the difference is in how we find files defining undefined symbols.
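
A rough sketch of that split (hypothetical code; only the SymbolTable::addByName name comes from the thread above): name lookup decides which files feed the resolver, while the resolver itself merely arbitrates between competing symbols of the same name.

#include <map>
#include <string>

enum class SymKind { Undefined, Defined, Weak };

struct Symbol {
  std::string name;
  SymKind kind;
};

// Core resolver semantics: given two symbols with the same name, decide
// which one wins. This part can plausibly be shared across formats.
class SymbolTable {
  std::map<std::string, Symbol> table;

public:
  void addByName(const Symbol &sym) {
    auto [it, inserted] = table.emplace(sym.name, sym);
    if (inserted)
      return;
    Symbol &existing = it->second;
    // A strong definition beats an undefined or weak symbol. Duplicate
    // strong definitions would be a diagnostic in a real linker.
    if (existing.kind != SymKind::Defined && sym.kind == SymKind::Defined)
      existing = sym;
  }
};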





Re: LLD improvement plan

Rafael Espíndola
In reply to this post by Rui Ueyama

I am at the airport waiting to go on vacation, but I must say I am extremely happy to see this happen!

I agree with the proposed direction and steps:

1. Implement section-based linking for COFF.
2. Use that for ELF.
3. If it makes sense, use it for Mach-O.



Re: LLD improvement plan

Nick Kledzik
In reply to this post by Rui Ueyama

On May 1, 2015, at 12:31 PM, Rui Ueyama <[hidden email]> wrote:
> The atom model is not the best model for some architectures

The atom model is a good fit for the llvm compiler model for all architectures. There is a one-to-one mapping between llvm::GlobalObject (e.g. a function or global variable) and lld::DefinedAtom.

The problem is the ELF/PECOFF file format.   (Actually mach-o is also section based, but we have refrained from adding complex section-centric features to it, so mapping it to atoms is not too hard).

I’d rather see our effort put toward moving ahead to an llvm-based object file format (aka the “native” format), which bypasses the impedance mismatch of going through ELF/COFF.



> One symbol resolution model doesn’t fit all

Yes, the Resolver was meant to call out to the LinkingContext object to direct it on how to link. Somehow that got morphed into “there should be a universal data model such that, when the Resolver processes the input data, the right platform-specific linking behavior falls out”.


-Nick



Re: LLD improvement plan

Rafael Espíndola


On May 1, 2015 9:42 PM, "Nick Kledzik" <[hidden email]> wrote:
> On May 1, 2015, at 12:31 PM, Rui Ueyama <[hidden email]> wrote:
>> The atom model is not the best model for some architectures
>
> The atom model is a good fit for the llvm compiler model for all architectures. There is a one-to-one mapping between llvm::GlobalObject (e.g. a function or global variable) and lld::DefinedAtom.

That is not the input to the linker and therefore irrelevant.

> The problem is the ELF/PECOFF file format.   (Actually mach-o is also section based, but we have refrained from adding complex section-centric features to it, so mapping it to atoms is not too hard).

The objective is to build an ELF and COFF linker. The input has sections, and splitting them is a total waste of time and extra design complexity.

> I’d rather see our effort put toward moving ahead to an llvm-based object file format (aka the “native” format), which bypasses the impedance mismatch of going through ELF/COFF.

Absolutely not. We have to be able to handle ELF and COFF and do it well.

Also, gold shows that ELF at least works extremely well. With function sections the compiler is in complete control of the size of the units the linker uses. With my recent work on MC the representation is also very efficient. I have no reason to believe COFF is any different.

Cheers,
Rafael



Re: LLD improvement plan

Chandler Carruth
In reply to this post by Nick Kledzik
On Fri, May 1, 2015 at 6:46 PM Nick Kledzik <[hidden email]> wrote:

> On May 1, 2015, at 12:31 PM, Rui Ueyama <[hidden email]> wrote:
>> The atom model is not the best model for some architectures
>
> The atom model is a good fit for the llvm compiler model for all architectures. There is a one-to-one mapping between llvm::GlobalObject (e.g. a function or global variable) and lld::DefinedAtom.

I'm not sure how that's really relevant.

On some architectures, the unit at which linking is defined to occur isn't a global object. A classic example of this is architectures that have a hard semantic reliance on grouping two symbols together and linking either both or neither of them.
 
> The problem is the ELF/PECOFF file format. (Actually mach-o is also section based, but we have refrained from adding complex section-centric features to it, so mapping it to atoms is not too hard.)
>
> I’d rather see our effort put toward moving ahead to an llvm-based object file format (aka the “native” format), which bypasses the impedance mismatch of going through ELF/COFF.

We still have to be able to (efficiently) link existing ELF and COFF objects though? While I'm actually pretty interested in some better object file format, I also want a better linker for the world we live in today...


Re: LLD improvement plan

Alex Rosenberg
On May 1, 2015, at 7:06 PM, Chandler Carruth <[hidden email]> wrote:

> We still have to be able to (efficiently) link existing ELF and COFF objects though? While I'm actually pretty interested in some better object file format, I also want a better linker for the world we live in today...

For us, this is secondary. A major part of the reason we started lld was to embrace the atom model, that is, to bring the linker closer to the compiler. We have a lot of long-term goals that involve altering the traditional compiler/linker flow, with a goal toward actual improvements in developer workflow. Just iterating again on the exact same design we've had since the '70s is not good enough.

The same is true of other legacy we're inheriting, like linker scripts. While we want them to work and work efficiently, we should consider them part of the necessary legacy to support and not make them fundamental to our internal design. This planning will allow us latitude to make fundamental improvements. We make similar decisions across LLVM all the time; take, for example, our attitude toward __builtin_constant_p() or nested functions.

We've been at this for several years. We had goals and deadlines that we're not meeting. We've abandoned several significant design points so far because Rui is making progress on PE/COFF and jettisoning things, and we let it slide because of his rapid pace.

Core command line? GONE.
Round-trip testing? GONE.
Native file format? GONE.
And now we're against the Atom model?

I don't want a new linker that just happens to be so mired in the same legacy that we've ended up with nothing but a gratuitous rewrite with a better license.

We want:

* A new clean command line that obviates the need for linker scripts and their incestuous design requirements.
* lld is thoroughly tested, including the efficient new native object file format it provides.
* lld is like the rest of LLVM and can be remixed such that it's built-in to the Clang driver, should we choose to.
* We can have the linker drive compilation such that objects don't leave the disk cache before being consumed by the linker.

Perhaps we should schedule an in-person lld meeting. Almost everybody is in the Bay Area. I'm happy to host if we think this will help.

Alex


Re: LLD improvement plan

Rui Ueyama
It is not a secondary goal for me to create a linker that works very well with existing file formats. I'm trying to create a practical tool with a clean codebase and with the LLVM library. So I can't agree with you that it's secondary.

I also don't share the view that we are trying to solve a problem that was solved in the '70s. How fast we can link a several-hundred-megabyte executable is, for example, a pretty modern problem that we have today.

I don't oppose the idea of creating a new file format that you think is better than the existing ones. We may come up with a better design, implement it in the compiler, set out the foundation, and create a linker based on that. However, what's actually happening is that we came up with a new idea that is not necessarily the best way to represent existing file formats, set out the foundation based on that idea, and let LLD developers create a linker for the existing formats based on that foundation (which, again, is not suitable for those formats). And no effort has been made in the recent few years toward "the new format". I have to say that something is not correct here.




Re: LLD improvement plan

Davide Italiano
In reply to this post by Alex Rosenberg

There are projects (like FreeBSD) that need a new linker.
In the FreeBSD case it is mainly motivated by a licensing issue, but I
feel that this doesn't mean the linker needs to be slower or harder to
hack on because we treat as a first-class citizen a format that has been
largely unmaintained for at least the last six months, and as
second-class citizens widespread formats like ELF. I'm personally
excited about the idea of a new format and I would like to spend some
time thinking about it, although I always try to be pragmatic.
I will be happy to discuss this further in person.

--
Davide

"There are no solved problems; there are only problems that are more
or less solved" -- Henri Poincare


Re: LLD improvement plan

James Courtier-Dutton
In reply to this post by Rui Ueyama
On 1 May 2015 at 20:31, Rui Ueyama <[hidden email]> wrote:

>
> One symbol resolution model doesn’t fit all The symbol resolution semantics
> are not the same on three architectures (ELF, Mach-O and PE/COFF), but we
> only have only one "core" linker for the symbol resolution. The core linker
> implements the Unix linker semantics; the linker visits a file at a time
> until all undefined symbols are resolved. For archive files having circular
> dependencies, you can group them to tell the linker to visit them more than
> once. This is not the only model to create a linker. It’s not the simplest
> nor fastest. It’s just that the Unix linker semantics is designed this way,
> and we all follow for compatibility. For PE/COFF, the linker semantics are
> different. The order of files in the command line doesn’t matter. The linker
> scans all files first to create a map from symbols to files, and use the map
> to resolve all undefined symbols. The PE/COFF semantics are currently
> simulated using the Unix linker semantics and groups. That made the linker
> inefficient because of the overhead to visit archive files again and again.
> Also it made the code bloated and awkward. In short, we generalize too much,
> and we share code too much.
>

Why can't LLD be free to implement a resolving algorithm that performs better?
The PE/COFF method you describe seems more efficient than the existing
ELF method. What is stopping LLD from using the PE/COFF method for ELF?
It could also do further optimizations, such as caching the resolved symbols.
To me, the existing algorithms read as ELF == full table scan, PE/COFF
== indexed.

Also, could some of the symbol resolution be done at compile time?
E.g. if I include stdio.h, I know which link-time library it is
associated with, so I could resolve those symbols at compile time.
Maybe we could store that information in the precompiled header file
format, and subsequently in the .o files.
This would then leave far fewer symbols to resolve at link time.

Kind Regards

James


Re: LLD improvement plan

Joerg Sonnenberger
On Mon, May 04, 2015 at 09:29:16AM +0100, James Courtier-Dutton wrote:
> Also, could some of the symbol resolution be done at compile time?
> E.g. if I include stdio.h, I know which link-time library it is
> associated with, so I could resolve those symbols at compile time.

Where would you get that information from? No such tagging exists in
standard C or even the extended dialect of C clang is implementing.

Joerg

Re: LLD improvement plan

James Y Knight
In reply to this post by Alex Rosenberg
> And now we're against the Atom model?

I'm quite new to the llvm community, and basically unfamiliar with LLD, so maybe I'm simply uninformed. If so, I will now proceed to demonstrate that to an entire list of people. :)

I've read the doc on http://lld.llvm.org/design.html, but the list of features it says you get with LLD/Atoms and don't get with the "old generation" of linkers that use "sections"...are all things that ELF linkers already do using sections, and that do not require anything finer-grained than sections. Sections in ELF objects can actually be as fine-grained as you want them to be -- just like an "Atom". The doc also says, "An atom is an indivisible chunk of code or data." -- which is also what a section is in ELF.

AFAICT, atoms in LLD are simply a restricted form of ELF sections: restricted to having a single symbol associated with them. It doesn't appear that they actually enable any new features that no other linker can offer.

I'm not very familiar with Mach-O, but it sounds like, contrary to ELF, Mach-O files cannot be generated with one section per global object, but that Mach-O sections (at least as used by OSX) *are* expected to be subdivided/rearranged/etc, and are not atomic. Given that set of properties for the input file format, of course it makes sense that you'd want to subdivide Mach-O "sections" within the linker into smaller atomic pieces to work on them.

But for ELF, the compiler can/will output separate sections for each function/global variable, and the contents of a section should never be mangled. It can also emit multiple symbols into a single section. That an ELF section *may* contain multiple functions/globals which need to stay together is not a problem with the file format -- it's an advantage -- an additional flexibility of representation.
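
For instance (an illustration added here, not part of the original message): with -ffunction-sections and -fdata-sections, the compiler emits each function and global into its own ELF section, giving the linker atom-sized granularity without splitting anything.

// example.cpp -- compile with:
//   clang++ -ffunction-sections -fdata-sections -c example.cpp
// Each definition below lands in its own ELF section (.text.foo,
// .text.bar, .data.counter, given the unmangled C names), so a
// section-based linker can keep or drop each one independently,
// e.g. via --gc-sections.
extern "C" {
int counter = 42;

int foo() { return counter; }

int bar() { return foo() + 1; }
}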

I gather the current model in LLD doesn't cleanly support an atomic unit with multiple symbols, and that that's the main issue that would be good to fix here.

But, rather than talking about "eliminating the atom model" -- which seems to be contentious -- maybe it would be more peaceful to just say that the desired change is to "allow atoms to have multiple global symbols associated, and have more metadata"? It appears to me that it amounts to essentially the same thing, but may not be as contentious if described that way.

If that change was made, you'd just need to know that LLD has slightly unique terminology: "ELF section" == "LLD Atom" (but a "Mach-O section" turns into multiple "LLD Atom"s).

Am I wrong?

James



Re: LLD improvement plan

Reid Kleckner
In reply to this post by Nick Kledzik
Most of what I wanted to say has been said, but I wanted to explicitly call out COMDAT groups as something that we want that doesn't fit the atom model very well.

Adding first class COMDATs was necessary for implementing large parts of the Microsoft C++ ABI, but it also turns out that it's really handy on other platforms. We've made a number of changes to Clang's IRgen to do things like eliminate duplicate dynamic initialization for static data members of class templates and share code for complete, base, and deleting destructors.

Basically, COMDAT groups are a tool that the compiler can use to change the way things are linked without changing the linker. They allow the compiler to add new functionality and reduce coupling between the compiler and the linker. This is a real tradeoff worth thinking about.
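
As a concrete (editor-added, standard C++) illustration of that: an inline function must be emitted in every translation unit that uses it, and the compiler places each copy in a COMDAT group keyed by the symbol name, so the linker keeps exactly one copy without needing any new linker feature.

// config.h -- every .cpp file that includes this header emits its own
// copy of get_config() and of its guarded local static. The compiler
// puts each copy in a COMDAT group; at link time one group is selected
// and the duplicates are discarded, so the program still ends up with a
// single `instance` object and a single initialization.
struct Config {
  int verbosity = 0;
};

inline Config &get_config() {
  static Config instance; // initialized once despite many emitted copies
  return instance;
}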

I think for many platforms (Windows, Linux) Clang is not the system compiler and we need to support efficiently linking against existing libraries for a long time to come. There are other platforms (Mac, PS4) with a single toolchain where controlling the linker allows adding new functionality quickly.

I think Alex is right, we should probably meet some time and figure out what people need and how to support both kinds of platform well.

Reid



Re: LLD improvement plan

Rui Ueyama
In reply to this post by James Courtier-Dutton
On Mon, May 4, 2015 at 1:29 AM, James Courtier-Dutton <[hidden email]> wrote:
> Why can't LLD be free to implement a resolving algorithm that performs better?
> The PE/COFF method you describe seems more efficient than the existing
> ELF method. What is stopping LLD from using the PE/COFF method for ELF?
> It could also do further optimizations, such as caching the resolved symbols.
> To me, the existing algorithms read as ELF == full table scan, PE/COFF
> == indexed.

The two semantics are not compatible. The results of the two are not always the same.

For example, this is why we have to pass -lc after object files instead of at the beginning of the command line. "ld -lc foo.o" would just skip libc, because when the linker visits the library there are no undefined symbols to be resolved. foo.o would then add undefined symbols that could have been resolved using libc, but it's too late. The link would fail. This is how linkers on Unix work. There are other differences resulting from this, so we cannot change it without breaking compatibility.

> Also, could some of the symbol resolution be done at compile time?
> E.g. if I include stdio.h, I know which link-time library it is
> associated with, so I could resolve those symbols at compile time.
> Maybe we could store that information in the precompiled header file
> format, and subsequently in the .o files.
> This would then leave far fewer symbols to resolve at link time.

You can link against an alternative libc, for example, so that's not usually doable.


Re: LLD improvement plan

Sean Silva
In reply to this post by Rui Ueyama


On Sun, May 3, 2015 at 3:13 PM, Rui Ueyama <[hidden email]> wrote:
> It is not a secondary goal for me to create a linker that works very well with existing file formats. I'm trying to create a practical tool with a clean codebase and with the LLVM library. So I can't agree with you that it's secondary.
>
> I also don't share the view that we are trying to solve a problem that was solved in the '70s. How fast we can link a several-hundred-megabyte executable is, for example, a pretty modern problem that we have today.

I don't think Alex was trying to say that we're solving the same problem. I think he is trying to say that we are trying to solve our current problems with the same tool flow as was used in the '70s.

-- Sean Silva
 




Re: LLD improvement plan

Chris Lattner
In reply to this post by Rui Ueyama
On May 1, 2015, at 12:31 PM, Rui Ueyama <[hidden email]> wrote:
> Proposal
> 1. Re-architect the linker based on the section model where it’s appropriate.
> 2. Stop simulating different linker semantics using the Unix model. Instead, directly implement the native behavior.
Preface: I have never personally contributed code to LLD, so don’t take anything I’m about to say too seriously.  This is not a mandate or anything, just an observation/idea.


I think that there is an alternative solution to these exact same problems.  What you’ve identified here is that there are two camps of people working on LLD, and they have conflicting goals:

- Camp A: LLD is infrastructure for the next generation of awesome linking and toolchain features, it should take advantage of how compilers work to offer new features, performance, etc without deep concern for compatibility.

- Camp B: LLD is a drop in replacement system linker (notably for COFF and ELF systems), which is best of breed and with no compromises w.r.t. that goal.


I think the problem here is that these lead to natural and inescapable tensions, and Alex summarized how Camp B has been steering LLD away from what Camp A people want.  This isn’t bad in and of itself, because what Camp B wants is clearly and unarguably good for LLVM.  However, it is also not sufficient, and while innovation in the linker space (e.g. a new “native” object file format generated directly from compiler structures) may or may not actually “work” or be “worth it”, we won’t know unless we try, and that won’t fulfill its promise if there are compromises to Camp B.

So here’s my counterproposal: two different linkers.

Let’s stop thinking about lld as one linker, and instead think of it as two different ones.  We’ll build a Camp B linker that is the best-of-breed section-based linker.  It will support linker scripts and do everything better than any existing section-based linker.  The first step of this is to do what Rui proposes and rip atoms out of the model.

We will also build a no-holds-barred awesome atom based linker that takes advantage of everything it can from LLVM’s architecture to enable innovative new tools without worrying too much about backwards compatibility.

These two linkers should share whatever code makes sense, but also shouldn’t try to share code that doesn’t make sense.  The split between the semantic model of sections vs atoms seems like a very natural one to me.

One question is: does it make sense for these to live in the same lld subproject, or be split into two different subprojects?  I think the answer to that question is driven by whether there is shared code common between the two linkers that doesn’t make sense to sink down to the llvm subproject itself.

What do you think?

-Chris



Re: LLD improvement plan

Joerg Sonnenberger
On Mon, May 04, 2015 at 12:52:55PM -0700, Chris Lattner wrote:
> I think the problem here is that these lead to natural and inescapable
> tensions, and Alex summarized how Camp B has been steering LLD away
> from what Camp A people want.  This isn’t bad in and of itself, because
> what Camp B wants is clearly and unarguably good for LLVM.  However,
> it is also not sufficient, and while innovation in the linker space
> (e.g. a new “native” object file format generated directly from
> compiler structures) may or may not actually “work” or be “worth it”,
> we won’t know unless we try, and that won’t fulfill its promise if
> there are compromises to Camp B.

It has been said in this thread before, but I fail to see how the atom
model is an actual improvement over the fine-grained section model. It
seems to be artificially restricted for no good reason.

> Let’s stop thinking about lld as one linker, and instead think of it as
> two different ones.  We’ll build a Camp B linker that is the best-of-breed
> section-based linker.  It will support linker scripts and do everything
> better than any existing section-based linker.  The first step of this is
> to do what Rui proposes and rip atoms out of the model.

This is another item that has been irritating me. While it is a very
laudable goal not to depend on linker scripts for the common case, not
having the functionality of fine-grained output control is certainly a
problem. Linker scripts are crucial for embedded developers and also at
least significant for anything near a system kernel.

> We will also build a no-holds-barred awesome atom based linker that
> takes advantage of everything it can from LLVM’s architecture to enable
> innovative new tools without worrying too much about backwards
> compatibility.

I'd say that a good justification for why an atom-based linker is (or
can be) better would be a good start...

Joerg
