[llvm-dev] Thank you from the Glow Developers


[llvm-dev] Thank you from the Glow Developers

Saleem Abdulrasool via llvm-dev
Hello LLVM community,

We have been working hard on a new domain-specific optimizing compiler, and we
are pleased to announce that we have recently open-sourced the project!  We
would like to introduce you to Glow, an optimizing compiler for neural networks!

This new compiler is built on the hard work of this community, and we would like
to thank all of the contributors to the LLVM project.  We hope that the project
will be beneficial to others as well; it would not have been possible without
your work.

You can find the sources at http://github.com/pytorch/glow and read up on
the work in the associated paper we have released at https://arxiv.org/pdf/1805.00907.

Thank you all!

The Glow Developers

_______________________________________________
LLVM Developers mailing list
[hidden email]
http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev

Re: [llvm-dev] Thank you from the Glow Developers

Sean Silva via llvm-dev
Very cool! The first thing that jumps out to me is how tidy and modular the code structure is. The code feels very familiar (stylistically, organizationally, etc.) to me as an LLVM developer.

One thing that wasn't at all clear to me is how this is different/similar to TensorFlow XLA (previously mentioned on this list). Can you briefly compare and contrast this with TensorFlow XLA?

-- Sean Silva


On Thu, May 3, 2018, 6:14 PM Saleem Abdulrasool via llvm-dev <[hidden email]> wrote:
Hello LLVM community,

We have been working hard on a new domain-specific optimizing compiler, and we
are pleased to announce that we have recently open-sourced the project!  We
would like to introduce you to Glow, an optimizing compiler for neural networks!

This new compiler is built on the hard work of this community, and we would like
to thank all of the contributors to the LLVM project.  We hope that the project
will be beneficial to others as well; it would not have been possible without
your work.

You can find the sources at http://github.com/pytorch/glow and read up on
the work in the associated paper we have released at https://arxiv.org/pdf/1805.00907.

Thank you all!

The Glow Developers

_______________________________________________
LLVM Developers mailing list
[hidden email]
http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev

Re: [llvm-dev] Thank you from the Glow Developers

Saleem Abdulrasool via llvm-dev
Hi Sean,

Sorry for the delay.

On Sat, May 5, 2018 at 1:23 PM Sean Silva <[hidden email]> wrote:
Very cool! The first thing that jumps out to me is how tidy and modular the code structure is. The code feels very familiar (stylistically, organizationally, etc.) to me as an LLVM developer.

Thanks!  We absolutely took inspiration from the wonderful work in LLVM :). I’m glad that you found it familiar and tidy.

One thing that wasn't at all clear to me is how this is different/similar to TensorFlow XLA (previously mentioned on this list). Can you briefly compare and contrast this with TensorFlow XLA?

That is a very keen observation.  There are many similarities between the two projects, but there are some differences too.

Both are interested in performing cross-node optimizations to address memory usage and execution time.  To accomplish this, both have their own IR and optimization passes.
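
To make the "own IR and optimization passes" point a bit more concrete, here is a minimal, self-contained C++ sketch of the general shape of such a pass: a toy graph IR and a single pattern-rewriting optimization.  The types and the Conv+Relu fusion below are purely illustrative assumptions on my part and are not taken from Glow's or XLA's actual APIs; see the Glow sources and paper for the real design.

  // Illustrative sketch only: a toy graph IR plus one graph-level
  // optimization pass, in the spirit of what NN compilers such as Glow
  // and XLA do internally. None of these types are from Glow's real API.
  #include <iostream>
  #include <memory>
  #include <utility>
  #include <vector>

  enum class Kind { Conv, Relu, ConvRelu };

  struct Node {
    Kind kind;
    std::vector<Node *> inputs;
    Node(Kind k, std::vector<Node *> in) : kind(k), inputs(std::move(in)) {}
  };

  struct Graph {
    std::vector<std::unique_ptr<Node>> nodes;
    Node *create(Kind k, std::vector<Node *> in = {}) {
      nodes.push_back(std::make_unique<Node>(k, std::move(in)));
      return nodes.back().get();
    }
  };

  // Fuse Relu(Conv(x)) into a single ConvRelu node. Real compilers run
  // many such pattern-rewriting passes over their IR before lowering.
  void fuseConvRelu(Graph &g) {
    for (auto &n : g.nodes) {
      if (n->kind == Kind::Relu && n->inputs.size() == 1 &&
          n->inputs[0]->kind == Kind::Conv) {
        Node *conv = n->inputs[0];
        n->kind = Kind::ConvRelu;
        n->inputs = conv->inputs; // the fused node consumes the Conv's inputs
      }
    }
  }

  int main() {
    Graph g;
    Node *conv = g.create(Kind::Conv);
    Node *relu = g.create(Kind::Relu, {conv});
    fuseConvRelu(g);
    std::cout << (relu->kind == Kind::ConvRelu ? "fused" : "not fused") << "\n";
  }

The overall shape (walk the graph, match a pattern, rewrite it in place) is roughly what graph-level optimizers in both projects build on, just at a much larger scale.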

At the same time, there are some differences.  In the case of Glow, we have focused on Caffe2 models and also support the ONNX format.  We have been aiming for a more target-independent model and have been considering some heterogeneous execution models.  XLA is definitely a more mature compiler compared to Glow.

I think there are enough similarities, and enough differences, that there will be ample opportunities for collaboration as these projects grow.

-- Sean Silva


On Thu, May 3, 2018, 6:14 PM Saleem Abdulrasool via llvm-dev <[hidden email]> wrote:
Hello LLVM community,

We have been working hard on a new domain-specific optimizing compiler, and we
are pleased to announce that we have recently open-sourced the project!  We
would like to introduce you to Glow, an optimizing compiler for neural networks!

This new compiler is built on the hard work of this community, and we would like
to thank all of the contributors to the LLVM project.  We hope that the project
will be beneficial to others as well; it would not have been possible without
your work.

You can find the sources at http://github.com/pytorch/glow and read up on
the work in the associated paper we have released at https://arxiv.org/pdf/1805.00907.

Thank you all!

The Glow Developers
--
Saleem Abdulrasool
compnerd (at) compnerd (dot) org

_______________________________________________
LLVM Developers mailing list
[hidden email]
http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev