Very cool! The first thing that jumps out to me is how tidy and modular the code structure is. The code feels very familiar (stylistically, organizationally, etc.) to me as an LLVM developer.
Thanks! We absolutely took inspiration from the wonderful work in LLVM :). I’m glad that you found it familiar and tidy.
One thing that wasn't at all clear to me is how this is different from or similar to TensorFlow XLA (previously mentioned on this list). Can you briefly compare and contrast the two?
That is a very keen observation. There are many similarities between the two projects. However, there are some differences too.
Both are interested in performing cross-node optimizations to address memory usage and execution time. In order to accomplish their goals, both have their own IR and optimization passes.
At the same time, there are some notable differences. Glow has focused on supporting Caffe2 models and also supports the ONNX format. We have been trying to provide a more target-independent model and have been considering heterogeneous execution models as well. XLA is definitely a more mature compiler than Glow.
I think the projects are similar enough, and different enough, that there are ample opportunities for collaboration as both grow further.
-- Sean Silva
On Thu, May 3, 2018, 6:14 PM Saleem Abdulrasool via llvm-dev <[hidden email]> wrote:
Hello LLVM community,
We have been working hard on a new domain specific optimizing compiler, and we
are pleased to announce that we have recently open sourced the project! We
would like to introduce you to Glow, an optimizing compiler for neural networks!
This new compiler is built on the hard work of this community, and we would like
to thank all of the contributors to the LLVM project. We hope that the project
will be beneficial to others as well; it would not have been possible without
the support of this community.