GSoC/GCI Archive
Google Code-in 2013 KDE

KDevelop: Python support: write a few benchmarks

completed by: Benjamin Kaiser

mentors: Sven Brauch

kdev-python has quite good unit test coverage, but it would be nice to also have some benchmarks. The two areas that need benchmarking are analyzing files and calculating completion items. I think the best way to benchmark this is to pick common things, repeat them a few thousand times, and let kdev-python analyze that. Example:

It would be interesting to know how fast different assignments are. Let's pick "a = 1" (variations would be "a = 1, 2, 3" or "d = (1, 2, 3); a, b, c = d"). You could benchmark this in two ways: either keep the same declaration name and create a file which goes like "a = 1 a = 1 a = 1 a = 1..." (with newlines), then analyze that; or alter the declaration name each time, like "a1 = 1 a2 = 1 a3 = 1...".
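A tiny sketch of how such input files could be generated; the helper names and counts here are made up for illustration and are not part of kdev-python:

```python
def make_same_name(count):
    """Repeat the same declaration: 'a = 1' on every line."""
    return "\n".join("a = 1" for _ in range(count))

def make_distinct_names(count):
    """Use a fresh declaration name on every line: 'a1 = 1', 'a2 = 1', ..."""
    return "\n".join("a%d = 1" % i for i in range(1, count + 1))

# A few thousand of these would be written to a file for kdev-python
# to analyze; three lines shown here just to see the shape.
sample = make_distinct_names(3)
print(sample)
```
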

The same goes for code completion: it would be interesting to see how fast gathering different kinds of completion items is, for example from function declarations. The approach is the same, except that you run the analysis outside the benchmark and time only the gathering of the completion items.
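The key point, keeping the analysis outside the timed region, could look roughly like this. Here compile()/exec() merely stand in for kdev-python's analysis pass, and a name scan stands in for gathering completion items; both are assumptions for illustration only:

```python
import timeit

def make_function_declarations(count):
    """Many function declarations to gather completion items from."""
    return "\n".join("def func%d(arg):\n    return arg" % i for i in range(count))

source = make_function_declarations(200)

# The analysis runs once, OUTSIDE the timed region.
namespace = {}
exec(compile(source, "<bench>", "exec"), namespace)

# Only gathering the "completion items" is timed.
elapsed = timeit.timeit(
    lambda: [name for name in namespace if name.startswith("func")],
    number=100,
)
print("gathered completions 100 times in %.3f s" % elapsed)
```
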

I think a good duration for a benchmark is ~40-50 ms in this case; adjust the number of items in the files you generate to roughly match this. That is short enough not to slow down running the tests considerably, but long enough to avoid the results being distorted by corner effects.
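One crude way to size the generated files is to double the item count until a single run lands in the target window. Both hooks below ('generate' builds the source string, 'measure' runs the benchmarked step once and returns elapsed seconds) are assumed for illustration, not kdev-python API:

```python
def calibrate(generate, measure, target_ms=40):
    """Double the item count until one run takes at least target_ms.
    Crude: only checks the lower bound, so the final run may overshoot
    slightly past the ~50 ms upper end of the window."""
    n = 100
    while measure(generate(n)) * 1000 < target_ms:
        n *= 2
    return n
```
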

For this task, you should write benchmarks for:

analysis: 2 different kinds of assignments (one with distinct names, one with non-distinct names), function declarations (distinct function names), 2 different control structures (if, while).

code completion: local variables, classes, function declarations.
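A possible set of input generators covering the list above, sketched as a table; the names and snippet shapes are my own choices, not a prescribed layout:

```python
# One generator per required benchmark input; each returns a source string
# with n repetitions for kdev-python to analyze (or to complete against).
GENERATORS = {
    # analysis benchmarks
    "assign_distinct": lambda n: "\n".join("a%d = 1" % i for i in range(n)),
    "assign_same":     lambda n: "\n".join("a = 1" for _ in range(n)),
    "functions":       lambda n: "\n".join("def f%d(): pass" % i for i in range(n)),
    "if_blocks":       lambda n: "\n".join("if True:\n    pass" for _ in range(n)),
    "while_blocks":    lambda n: "\n".join("while False:\n    pass" for _ in range(n)),
    # completion benchmarks (sources to gather completion items from)
    "locals":          lambda n: "\n".join("v%d = %d" % (i, i) for i in range(n)),
    "classes":         lambda n: "\n".join("class C%d: pass" % i for i in range(n)),
}

# Sanity-check that every generated snippet is valid Python.
for name, gen in GENERATORS.items():
    compile(gen(10), name, "exec")
```
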

If you have questions, ask me.

Of course, you're welcome to also submit patches which improve the performance of what you tested ;)