Coffee Space 

I was recently experimenting with tcc for a project, and
wanted to benchmark just how good it really is compared to
other options. I of course thought about gcc, and later
found clang to be something also worth testing. I then
discovered fil-c, a supposedly memory-safe version of C,
which was also worth a consideration.
For tcc I am using “tinycc”, an unofficial
mirror that appears to be actively maintained.
For fil-c I am using the pre-compiled v0.673, specifically
`filc-0.673-linux-x86_64.tar.xz`, which the project describes as follows:

> [This] is the traditional, self-contained “pizfix” distribution. It only includes the compiler, doesn’t require root, and is based on the musl libc. This is recommended for quickly trying out Fil-C.
As this version of fil-c is compiled with musl libc,
this isn’t quite apples-to-apples, but it is good enough to give us an
idea.
Each test is run 10 times against a set of 9 compiler flags:
```python
FLAGS = [
  "",       # No optimisation
  "-O1",    # Optimize
  "-O2",    # Optimize even more
  "-O3",    # Optimize yet more
  "-O0",    # Reduce compilation time and make debugging produce the expected results
  "-Os",    # Optimize for size
  "-Ofast", # Disregard strict standards compliance
  "-Og",    # Optimize while keeping in mind debugging experience
  "-Oz",    # Optimize aggressively for size rather than speed
]
```
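As a rough sketch, the sweep over compilers and flags looks something like the following; the compiler list, file names and fil-c invocation here are my assumptions, not the exact harness used for these benchmarks:

```python
import subprocess
import time

# Sketch only: the compiler list, source/output names and the fil-c
# driver are assumptions, not the article's actual benchmark script.
FLAGS = ["", "-O1", "-O2", "-O3", "-O0", "-Os", "-Ofast", "-Og", "-Oz"]
COMPILERS = ["gcc", "clang", "tcc"]  # plus fil-c's compiler driver
RUNS = 10

def build_cmd(compiler, flag, source="test.c", output="a.out"):
    """Assemble a compile command; an empty flag contributes nothing."""
    return [compiler] + ([flag] if flag else []) + ["-o", output, source]

def time_compile(cmd):
    """Wall-clock a single compile using a monotonic timer."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

# The full sweep is every compiler x flag combination, RUNS times each:
# for compiler in COMPILERS:
#     for flag in FLAGS:
#         times = [time_compile(build_cmd(compiler, flag)) for _ in range(RUNS)]
```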
For 10 runs, 9 flags and 4 compilers, we see 360 tests per graph. With 18 graphs that’s 6,480 tests. It takes a while.
For the first test, we compile TCC and compare the final binary size and how long it took to compile. The TCC binary was chosen because it’s a relatively large project and it is written in compatible C.
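The two metrics here can be collected per compiler/flag pair roughly as follows; the helper names and the choice of the median (to dampen a cold-cache first run) are my own, not necessarily how the article's numbers were produced:

```python
import os
import statistics

def binary_size(path):
    """Size in bytes of a compiled artifact (os.path.getsize)."""
    return os.path.getsize(path)

def summarize(times):
    """Median of repeated timings; more robust than the mean against
    a single slow outlier such as a cold-cache first run."""
    return statistics.median(times)
```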
clang, gcc and tcc produce
similar binary sizes, but fil-c produces a much larger
binary.
Similarly, fil-c takes significantly longer to compile,
whilst tcc enjoys a reduction in compilation time.
fil-c binaries are likely bloated by the memory-safety
instrumentation, so it really depends on how much you value those
guarantees. If raw performance is the goal, it does not seem to fare as well.
For the next test we compile Links, as it was one of the tests that the TCC author used to benchmark the performance of TCC. In Fabrice Bellard’s test, he got the following performance figures:
| Compiler | Time(s) | lines/second | MBytes/second |
|---|---|---|---|
| TinyCC 0.9.22 | 2.27 | 859000 | 29.6 |
| GCC 3.2 -O0 | 20.0 | 98000 | 3.4 |
For binary size, each is relatively similar, but it is strange how
the optimisation flags appear to make little or no difference for
tcc or fil-c.
For compilation time, tcc appears to lose the edge that
it had, and fil-c appears to be somewhat consistent.
For the next test we compile and execute the performance test for u-database, a flat-binary database key-value store written in C. It will happily read and write hundreds of thousands of records a second (sub microsecond per transaction). In this high-performance application, small changes in compiler and flags can be felt.
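A sketch of timing the compiled performance test, and of turning a run time into a per-transaction figure, might look like this; the function names are illustrative and not from the u-database project:

```python
import subprocess
import time

def time_run(binary, runs=10):
    """Wall-clock each execution of the compiled performance test."""
    results = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run([binary], check=True)
        results.append(time.perf_counter() - start)
    return results

def per_transaction_us(elapsed_s, transactions):
    """Convert a run time into microseconds per transaction."""
    return elapsed_s * 1_000_000 / transactions
```

At hundreds of thousands of transactions per second, differences of a few microseconds per transaction become visible across compilers and flags.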
When compiling my u-database library, there appears to be no real binary-size change between compilers.
The optimisation flags do, strangely enough, appear to hit gcc and
fil-c; otherwise there is an unexpected
uniformity across the software.
With the execution time, for some reason it appears that the longer a binary took to compile, the longer it also takes to run.
For the next test we compile a series of single-file headers from a project called “smollibs”. Each of them is a tiny C99 library with a basic example test program.
For the next graphs, we will evaluate them together…
There appears to be a “striping” effect on the graphs, particularly
around the -Ofast flag. Looking at the GNU explanation for
the flag:
> `-Ofast`
> Disregard strict standards compliance.
>
> `-Ofast` enables all `-O3` optimizations. It also enables optimizations that are not valid for all standard-compliant programs. It turns on `-ffast-math`, `-fallow-store-data-races` and the Fortran-specific `-fstack-arrays`, unless `-fmax-stack-var-size` is specified, and `-fno-protect-parens`. It turns off `-fsemantic-interposition`.
It doesn’t appear to be the -O3 part specifically, as we
also test that flag on its own, so it’s safe to assume that one of the
other options it enables causes the binary size to increase.
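As an aside on why `-ffast-math` is not standards-compliant: it permits floating-point reassociation, and IEEE-754 addition is not associative, so reordering can change a program's results. A quick illustration (in Python, since IEEE-754 doubles behave the same there):

```python
# IEEE-754 addition is not associative, so a compiler that reassociates
# operations (as -ffast-math permits) can change a program's results.
left = (0.1 + 0.2) + 0.3   # 0.6000000000000001
right = 0.1 + (0.2 + 0.3)  # 0.6
print(left == right)       # False
```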
Rather than making a case for a specific compiler, these tests seem to be making a case for testing which compiler specifically works for your use-case!
For the last comparison, I wanted to see how well each compiler
performed when compiling rclc, the C-based wrapper for ROS2. The first
issue is that there is no easy-to-point-to file representing the
output of the compilation process; colcon hides files away,
and I kind of ran out of time.
clang and gcc gave relatively predictable
compile times over all of the test flags, but tcc unfortunately failed
to compile the project. I believe this is because it
is a C99-based compiler, whereas rclc uses more modern C features.
fil-c also fails to compile, but I didn’t have time to try
and set it up correctly.
Should you switch to tcc as your daily-driver C
compiler? Probably not, but it’s not as terrible an option as
you may have originally thought! It will compile valid C code from
many projects, it is mostly faster to compile, and the execution time is
comparable. For speed- or memory-restricted applications, you may very
well consider tcc a valid option!
In the future it would be good to get rclc working correctly, and to compare libc implementations. Additionally, it is important to consider shared vs static library building; I suspect we will see a move towards static libraries as disk space is cheap, RAM is plentiful, and the cost of sharing memory across increasing numbers of cores is more heavily felt.