Welcome to the 4th edition of our performance analysis and tuning challenge. If you haven’t participated in our challenges before, we highly encourage you to read the introductory post first.
The fourth edition of the contest will be run by Ivica Bogosavljevic from Johny’s Software Lab blog. Ivica also writes about software performance, so feel free to go and check out his blog, there is a ton of useful content there.
The benchmark for the 4th edition is the Canny edge detection algorithm. The source code, compilation and run scripts, as well as the test image, are available in Denis' GitHub repo.
Canny is an image edge detection algorithm that has been around for a long time. You can find more information about it in the Wikipedia article. They say an image is worth a thousand words, so here are the before and after images so you can get an impression of how it works.
The implementation used for the challenge is available online. The same version (with very few changes) is available in the repository.
To download and build canny, do the following:
$ git clone https://github.com/dendibakh/perf_challenge4.git
$ cd perf_challenge4
$ cd canny_baseline
$ mkdir build
$ cd build
# cmake also honors the following env variables:
# export CC=/usr/bin/clang
# export CXX=/usr/bin/clang++
$ cmake .. -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++
$ make
To run the benchmark:
If the program finished correctly, and the image it produced is good, you will see information about the runtime and a Validation successful message.
You may also find Denis' Python script for conducting multiple experiments useful. See the description inside it.
The target configuration for this challenge is a Skylake CPU (e.g. Intel Core i7-6700) + 64-bit Linux (e.g. Ubuntu 20.04) + Clang 10. However, you are free to use whatever environment you have access to. It's fine if you solve the challenge on an Intel, AMD, or ARM CPU. You can also do your experiments on Windows1 or Mac, since cmake is used for building the benchmark. The reason we define a target configuration is to have a unified way to assess all the submissions. In the end, it is not about getting the best score, but about practicing performance optimizations.
Here is the workflow that I recommend:
- To measure the runtime, you can use time; my personal preference is multitime, which is available in the repositories.
- To collect a performance profile, you can use perf record, but again, Intel's Advisor or Intel's VTune profiler are my go-to choices, especially for less experienced engineers who are still trying to get a feel for performance tuning.
Canny is a typical image processing algorithm that runs through the image, sometimes row-wise, sometimes column-wise, and processes pixels. Processing is done in several stages. Collecting the performance profile will help you focus on the right functions; collecting information about stalled cycles will help you understand why that code is slow.
I also have a few general hints:
If you feel you’re stuck, don’t hesitate to ask questions or look for support elsewhere. I don’t have much time to answer every question promptly, but I will do my best. You can send questions to me directly using the contact form on my web site or to Denis.
If the produced image is correct, the program will print Validation successful. A slight tolerance between the reference output image and the image produced by your algorithm is allowed in order to fully exploit the hardware's resources.
We will not use submissions for any commercial purposes. However, we can use the submissions for educational purposes.
The baseline we will be measuring against is a Skylake client CPU (e.g. Intel Core i7-6700) with 64-bit Linux and the Clang 10 compiler used with the options -ffast-math -O3 -march=core-avx2.
We conduct performance challenges via Denis’ mailing list, so it’s a good idea to subscribe (if you haven’t already) if you would like to submit your solution. The benchmark consists of a single file, so you can just send the modified
canny_source.c source file via email to Ivica or Denis. The general rules and guidelines for submissions are described here. We also ask you to provide a textual description of all the transformations you have made. This will make it much easier for us to analyze your submission.
We are collecting submissions until 28th February 2021.
If you know someone who might be interested in participating in this challenge, please spread the word about it!
Good luck and have fun!
P.S. I’m also open to your comments and suggestions. Especially if you have a proposal of a benchmark for the next edition of the challenge, please let me know. Finding a good benchmark isn’t easy.
Unfortunately, neither Denis nor Ivica works closely with Windows, so we only have limited support for it. At least we know that it is possible to compile the source code with the MSVC compiler (19.28.29335) from Visual Studio 2019. But you need to fix cmake or add the optimization options to the VS project yourself. We highly encourage you to contribute your changes back to the benchmark, so that other people will benefit from them. ↩