GSoC/GCI Archive
Google Summer of Code 2012

The LLVM Compiler Infrastructure

Web Page: http://llvm.org/OpenProjects.html

Mailing List: http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev

The LLVM Project is a collection of modular and reusable compiler and toolchain technologies. Despite its name, LLVM has little to do with traditional virtual machines, though it does provide helpful libraries that can be used to build them.

LLVM began as a research project at the University of Illinois, with the goal of providing a modern, SSA-based compilation strategy capable of supporting both static and dynamic compilation of arbitrary programming languages. Since then, LLVM has grown to be an umbrella project consisting of a number of different subprojects, many of which are used in production by a wide variety of commercial and open source projects, as well as being widely used in academic research. Code in the LLVM project is licensed under the "UIUC" BSD-Style license.

The primary sub-projects of LLVM are:

  1. The LLVM Core libraries provide a modern source- and target-independent optimizer, along with code generation support for many popular CPUs (as well as some less common ones!). These libraries are built around a well-specified code representation known as the LLVM intermediate representation ("LLVM IR"). The LLVM Core libraries are well documented, and they make it particularly easy to invent your own language (or port an existing compiler) and use LLVM as an optimizer and code generator; a minimal sketch of the C++ API appears after this list.

  2. Clang is an "LLVM native" C/C++/Objective-C compiler, which aims to deliver amazingly fast compiles (e.g. about 3x faster than GCC when compiling Objective-C code in a debug configuration), extremely useful error and warning messages, and a platform for building great source-level tools. The Clang Static Analyzer is a tool that automatically finds bugs in your code, and is a great example of the sort of tool that can be built using the Clang frontend as a library to parse C/C++ code.

  3. dragonegg integrates the LLVM optimizers and code generator with the GCC 4.5 parsers. This allows LLVM to compile Ada, Fortran, and other languages supported by the GCC compiler frontends, and gives access to C features not supported by Clang (such as OpenMP).

  4. The LLDB project builds on libraries provided by LLVM and Clang to provide a great native debugger. It uses the Clang ASTs and expression parser, the LLVM JIT, the LLVM disassembler, etc., to provide an experience that "just works". It is also blazingly fast and much more memory efficient than GDB at loading symbols.

  5. The libc++ and libc++ ABI projects provide a standards-conformant and high-performance implementation of the C++ Standard Library, including full support for C++0x.

  6. The compiler-rt project provides highly tuned implementations of the low-level code generator support routines like "__fixunsdfdi" and other calls generated when a target doesn't have a short sequence of native instructions to implement a core IR operation (a short example of source code that lowers to such a call appears after this list).

  7. The vmkit project is an implementation of the Java and .NET Virtual Machines that is built on LLVM technologies.

  8. The polly project implements a suite of cache-locality optimizations as well as auto-parallelism and vectorization using a polyhedral model.

  9. The libclc project aims to implement the OpenCL standard library.

  10. The klee project implements a "symbolic virtual machine" which uses a theorem prover to try to evaluate all dynamic paths through a program in an effort to find bugs and to prove properties of functions. A major feature of klee is that it can produce a test case in the event that it detects a bug (a minimal klee harness is sketched after this list).

  11. The SAFECode project is a memory safety compiler for C/C++ programs. It instruments code with run-time checks that detect memory safety errors (e.g., buffer overflows). It can be used to protect software from security attacks and can also be used as a memory safety error debugging tool like Valgrind.
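
To make item 1 above concrete, here is a minimal sketch of a frontend using the LLVM C++ API to build IR for a two-argument integer add. Header locations and small API details vary between LLVM releases (the llvm/IR/ paths below are from newer trees), so treat this as an illustration rather than version-specific code.

    // Minimal sketch: build "define i32 @add(i32, i32)" with IRBuilder.
    // Header paths are from newer LLVM trees; older releases keep these
    // headers directly under llvm/.
    #include "llvm/IR/LLVMContext.h"
    #include "llvm/IR/Module.h"
    #include "llvm/IR/Function.h"
    #include "llvm/IR/BasicBlock.h"
    #include "llvm/IR/IRBuilder.h"
    #include <vector>

    llvm::Function *emitAdd(llvm::Module &M) {
      llvm::LLVMContext &Ctx = M.getContext();
      llvm::Type *I32 = llvm::Type::getInt32Ty(Ctx);

      // declare i32 @add(i32, i32)
      std::vector<llvm::Type *> Params(2, I32);
      llvm::FunctionType *FT = llvm::FunctionType::get(I32, Params, false);
      llvm::Function *F = llvm::Function::Create(
          FT, llvm::Function::ExternalLinkage, "add", &M);

      // Fill in the body: one basic block containing an "add" and a "ret".
      llvm::BasicBlock *Entry = llvm::BasicBlock::Create(Ctx, "entry", F);
      llvm::IRBuilder<> Builder(Entry);
      llvm::Function::arg_iterator AI = F->arg_begin();
      llvm::Value *A = &*AI++;
      llvm::Value *B = &*AI;
      Builder.CreateRet(Builder.CreateAdd(A, B, "sum"));
      return F;
    }

The resulting module can then be handed to the LLVM optimizers and to any of the supported code generators, which is exactly the division of labor the core libraries are designed around.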
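
As a concrete illustration of item 6, the ordinary-looking cast below has no short native instruction sequence on many 32-bit targets, so the code generator lowers it to a call to compiler-rt's __fixunsdfdi (the wrapper function name here is invented for illustration).

    // On many 32-bit targets a double -> unsigned 64-bit conversion has no
    // short native instruction sequence, so the code generator emits a call
    // to the compiler-rt support routine __fixunsdfdi for this cast.
    unsigned long long toUnsigned64(double d) {
      return static_cast<unsigned long long>(d);
    }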
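
For item 10, the usual way to drive klee is to mark program inputs as symbolic and let the symbolic virtual machine explore every feasible path. The small harness below follows the style of klee's own C tutorial (compiled to LLVM bitcode and run under the klee tool); the function and variable names are chosen here purely for illustration.

    // Sketch of a klee test harness: klee explores all three paths through
    // get_sign and emits a concrete test case (a value for 'a') for each.
    #include "klee/klee.h"

    int get_sign(int x) {
      if (x == 0)
        return 0;
      return (x < 0) ? -1 : 1;
    }

    int main() {
      int a;
      klee_make_symbolic(&a, sizeof(a), "a");  // treat 'a' as a symbolic input
      return get_sign(a);
    }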

In addition to official subprojects of LLVM, there are a broad variety of other projects that use components of LLVM for various tasks. Through these external projects you can use LLVM to compile Ruby, Python, Haskell, Java, D, PHP, Pure, Lua, and a number of other languages. A major strength of LLVM is its versatility, flexibility, and reusability, which is why it is being used for such a wide variety of different tasks: everything from lightweight JIT compilation of embedded languages like Lua to compiling Fortran code for massive supercomputers.

As much as everything else, LLVM has a broad and friendly community of people who are interested in building great low-level tools. If you are interested in getting involved, a good first step is to skim the LLVM Blog and to sign up for the LLVM Developer mailing list. For information on how to send in a patch, get commit access, and copyright and license topics, please see the LLVM Developer Policy.

Projects

  • Adding a data prefetching transformation to LLVM Polly – generating load-balanced and coarse-grain loop pipelinable code for more task level parallelism and data locality. I propose adding a prefetching transformation to LLVM Polly. Such a transformation splits the innermost loop into three task-level pipelinable parts: a head, which prefetches data; a body, which performs the computation; and a foot, which stores data back; their loads (execution times) are balanced (a structural sketch of this split appears at the end of this list). The transformed code in some sense mimics the behavior of a cache, but goes beyond a cache because it is timely, accurate, and simple. This transformation can benefit architectures that have on-chip scratch-pad memory and are capable of task-level parallelism, such as GPUs and FPGAs. It will also work on multi-core CPUs with non-blocking data-cache prefetch instructions. Therefore, it will enable LLVM Polly to perform a much wider range of locality optimizations.
  • Common memory safety instrumentation and optimization passes for LLVM The goal of this project is to modify SAFECode and AddressSanitizer (ASan) to use a common set of memory safety instrumentation and optimization passes to increase code reuse. These tools and other similar ones use varying runtime methods, but are fundamentally trying to do the same thing: check whether each memory access is safe. It is desirable to optimize away redundant runtime checks to improve such tools' runtime performance. This means that there is a need for shared memory safety instrumentation and optimization passes.
  • Extending Polly with Automatic GPGPU Code Generation Polly provides the basic infrastructure for automatic parallelization in LLVM. In this project, I propose to extend Polly to support GPGPU code generation. The generated LLVM IR can be compiled or JIT-executed on modern heterogeneous platforms composed of CPUs and GPUs.
  • Integrate Baggy Bounds Checking into SAFECode Baggy Bounds Checking (BBC) is an efficient bounds checking technique that pads and aligns objects to powers of two, making it possible to recover an allocation's bounds from any pointer into it. It uses a contiguous array as a bounds table to enable efficient bounds lookups and thus has low overhead at runtime (a sketch of the lookup appears at the end of this list). This project aims to integrate BBC into SAFECode.
  • Profile-Guided Optimization Enhancements LLVM already contains a profiling framework, but only a handful of transforms make use of the metadata. Further, it even contains a path profiling framework, but no transforms make use of it. This "Google Summer of Code" proposal lays out an achievable plan to enhance profiling in LLVM and to use profiling metadata in key transformations where it can have a strong positive effect (a sketch of attaching such metadata to a branch appears below).
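
To give the head/body/foot split described in the first proposal a concrete shape, here is a hand-written, sequential sketch of the transformed loop structure. The block size, buffer names, and the trivial computation are invented for illustration; in the real transformation the three stages would be overlapped as pipelined tasks rather than run back to back.

    // Illustration (not Polly output) of splitting an innermost loop into
    // three stages that operate on fixed-size blocks:
    //   head: fetch the next block into a local (scratch-pad) buffer
    //   body: compute on the block fetched in the previous iteration
    //   foot: write the previous block's results back to main memory
    enum { BLOCK = 256 };

    void kernel(const float *in, float *out, int n) {
      float inBuf[2][BLOCK];    // double-buffered input block
      float outBuf[2][BLOCK];   // double-buffered output block
      int numBlocks = (n + BLOCK - 1) / BLOCK;

      for (int b = 0; b <= numBlocks; ++b) {
        int cur = b & 1, prev = cur ^ 1;

        // head: fetch block b, if there is one
        if (b < numBlocks)
          for (int i = 0; i < BLOCK && b * BLOCK + i < n; ++i)
            inBuf[cur][i] = in[b * BLOCK + i];

        // body and foot: compute and store block b-1, fetched last iteration
        if (b > 0) {
          for (int i = 0; i < BLOCK && (b - 1) * BLOCK + i < n; ++i)
            outBuf[prev][i] = 2.0f * inBuf[prev][i];          // body
          for (int i = 0; i < BLOCK && (b - 1) * BLOCK + i < n; ++i)
            out[(b - 1) * BLOCK + i] = outBuf[prev][i];       // foot
        }
      }
    }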
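
The bounds table in the Baggy Bounds Checking proposal is easiest to see in code. The following is a conceptual sketch of the baggy-bounds lookup as described in the published technique; the table layout, names, and slot size are illustrative, not SAFECode's actual runtime.

    // Conceptual sketch of a baggy-bounds check. Every allocation is padded
    // and aligned to a power of two, and the bounds table records the log2
    // of that size for each fixed-size slot of memory.
    #include <cstdint>
    #include <cstddef>

    static const std::size_t kSlotSize = 16;  // bytes of memory per table entry
    extern std::uint8_t boundsTable[];        // log2(allocation size) per slot

    // A derived pointer stays within the allocation iff it lies in the same
    // 2^logSize-aligned region as the original pointer, so the check is a
    // table load, an xor, and a shift.
    inline bool inBounds(std::uintptr_t ptr, std::uintptr_t derived) {
      std::uint8_t logSize = boundsTable[ptr / kSlotSize];
      return ((ptr ^ derived) >> logSize) == 0;
    }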
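
Finally, the profiling metadata mentioned in the last proposal is attached to individual instructions as "branch weights". The sketch below shows how a pass might annotate a conditional branch using the MDBuilder helper; the 95/5 split is an invented stand-in for real profile data, and header paths differ between LLVM releases.

    // Sketch: attach "branch weights" profile metadata to a conditional
    // branch so that weight-aware transforms can favour the hot edge.
    #include "llvm/IR/Instructions.h"
    #include "llvm/IR/LLVMContext.h"
    #include "llvm/IR/MDBuilder.h"

    void markHotBranch(llvm::BranchInst *BI) {
      llvm::MDBuilder MDB(BI->getContext());
      // Invented example: a profile run showed the true edge taken 95% of
      // the time.
      BI->setMetadata(llvm::LLVMContext::MD_prof,
                      MDB.createBranchWeights(95, 5));
    }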