GSoC/GCI Archive
Google Summer of Code 2013 LLVM Compiler Infrastructure

FastPolly: Reducing LLVM-Polly Compile-Time Overhead

by Star Tan for LLVM Compiler Infrastructure

LLVM-Polly is a promising polyhedral optimizer for data locality and parallelism. However, experimental results show that Polly's analysis and optimization can incur significant compile-time overhead: on average, Polly increases compile time by 393% for the PolyBench benchmarks and by 53% for the MediaBench benchmarks. In other words, to gain from Polly you must pay roughly four times the compile time, and even when Polly cannot do much for your program you still pay a 53% overhead. Such expensive compilation would make Polly much less attractive to LLVM users. I argue that keeping compilation fast when Polly is enabled is very important, especially if we consider enabling Polly by default for all LLVM users.

Based on this assumption, this project aims to reduce Polly's compile-time overhead by revising a large number of Polly passes. First, I will revise the hot Polly passes that dominate the total compile-time overhead. Second, I will revisit the Polly canonicalization passes and let Polly bail out early, so that it causes little overhead on programs it cannot optimize. Third, I will revisit and improve the Polly optimization and code-generation passes, so that Polly runs much faster on programs it can optimize.

I hope this project benefits both LLVM users and Polly users. For LLVM users who care most about compile time, it enables Polly to provide extra performance gains at little extra compile-time cost. For Polly users who care most about code quality, it will significantly reduce compile-time overhead without any loss in performance.