Classic C++ problem with #include directives

Posted on .

A few days ago we hit a perplexing bug at my day job that turned out to be related to a classic C++ problem with #include directives. Hopefully, C++ modules will solve part of the mess that #include directives are, but no official specification exists yet and they didn’t make it into C++17.

One of the classic questions about #include directives is: what is the difference between an #include directive using double quotes and one using angle brackets? The C and C++ standards mention that headers included with angle brackets need not be implemented as actual files. For example, there’s no need for a cstdio file to exist when someone uses #include <cstdio>. Double quotes, on the other hand, refer to source files, and in both cases the implementation is free to decide many of the details. In practice, different compilers do things differently. From now on, I’ll be talking about Unix systems and GCC/Clang.

GCC and Clang both do a very simple thing. Most standard headers (every single one?) are actual files that exist on disk. The difference between double quotes and angle brackets is that double quotes make the preprocessor look for the file first in the directory of the source file that includes it. If it is not found there, with both angle brackets and double quotes, the compiler proceeds to search for the header in the list of directories passed using the -I flag. If it’s still not found, a list of standard system directories is used.

Combining the theory and practice above, some people use angle brackets for every standard or system header, and quotes for every “user header” that belongs to the project they’re building, even if the included header is not in the same directory as the source file. Other people, like myself, prefer to use double quotes when the file is actually in the same directory as the source file including it, which usually means the header belongs to the same module. Headers belonging to other modules, even if local to the project, are included using angle brackets and passing the compiler the corresponding -I option, which would be needed in any case. Of course, there are people who don’t fall into either camp.

A second typical question would be: is there any difference between using angle brackets and passing the current directory (dot) with -I., and using quotes without passing the -I option? Yes, there is, when the file you’re compiling is not in the directory you’re calling the compiler from. In that case, angle brackets combined with -I. will search for the header in the directory you’re calling the compiler from (the current working directory), while double quotes will make the preprocessor search for the header in the directory of the source file. For example, imagine the following hierarchy:

header.h
subdir/main.c
subdir/header.h

When compiling subdir/main.c from the base directory, if subdir/main.c uses #include <header.h> and you pass the -I. option, the header.h in the base directory will be included. If subdir/main.c uses #include "header.h", subdir/header.h will be included instead.
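
To make the difference concrete, here is an illustrative sketch of what subdir/main.c could contain, assuming the layout above and a compile command such as cc -I. -c subdir/main.c run from the base directory (the file contents and the command are made up for illustration):

/* subdir/main.c -- illustrative sketch only. */

/* Angle brackets plus -I. resolve to ./header.h, the copy in the base
 * directory (the current working directory when the compiler is invoked). */
#include <header.h>

/* Double quotes would resolve to subdir/header.h, the copy sitting next to
 * this source file, regardless of the current working directory. */
/* #include "header.h" */

int main(void)
{
    return 0;
}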

Subtle but important. A small part of our build system used an auxiliary tool that generated a temporary file in /tmp and ran the preprocessor on it. When that temporary file used #include directives with double quotes, the included file was searched for in /tmp first. Normally, it didn’t exist there. But one day it did, because someone had put a temporary copy in /tmp to do some tests. That copy had outdated contents and, furthermore, everything happened to compile and build just fine. Problems only appeared at runtime, and it was very hard to find out what was happening because the original source code didn’t match what had been compiled into the binary. It took us a whole day, and the solution, in our case, involved using a temporary directory inside /tmp instead of creating the temporary files directly there.

GTX 760 upgraded to a GTX 1070

Posted on . Updated on .

It’s been a long time since my last post, so here’s a new one to tell you I switched graphics cards from a GTX 760 to a GTX 1070.

It all started when NVIDIA launched their first Pascal cards back at the start of this past summer. I was pretty serious about getting the amazing GTX 1070 (as powerful as the previous-generation GTX 980 Ti, no less!) once aftermarket cards became available and AMD had released their Polaris cards too, which I hoped would bring NVIDIA prices down a bit. Taking into account VAT, import taxes and everything, I was hoping GTX 1070 cards would be available for 400-ish euros. Around 410 or 420 was what I expected, and the lower the better, of course.

I had never owned a card that expensive, but I thought the improvements this generation brought were worth it. I also planned to sell my old card on eBay to cover some of the cost (which I did, for around 75 euros). When the time came, the cards were much more expensive (they still are) and it seemed everybody was permanently short on stock. Prices were actually closer to 500 euros in many cases, at least in Spain, with some specific cards going over that figure.

Time passed and AMD released their RX480 cards, while NVIDIA released the GTX 1060 to compete with them. Both were much more affordable while still being incredibly powerful: a GTX 1060 has performance similar to a GTX 980. RX480 cards showed mixed results. In many interesting older DX11 games performance was below a GTX 1060 and closer to a GTX 970 (which is still not bad at all). In some DX11 titles performance was more or less the same. In a few select DX11 games and in most DX12 and Vulkan games, performance was clearly superior to a GTX 1060 and closer to the GTX 1070.

So at the end of the summer I had the same doubt many other gamers seemed to be having, from what I’ve been reading: should I go for the RX480 (8GB version) or the GTX 1060 (6GB version)? In Spain both cards were priced similarly, with some stores having cheaper 1060 cards and others having cheaper RX480 cards, and both being no more than 20 euros apart. I read a lot about async compute, the state of Linux drivers, DX12, Vulkan, DX11 drivers, etc. My take from everything I read? NVIDIA, for all practical purposes, supports async compute, even if their solution relies a bit more on software than hardware at this point. We still need more game samples, and comparisons are not totally fair at the time I’m writing this.

For example, Vulkan is noticeably faster in Doom using a GTX 1070, but only when the frame rate is incredibly high and you start to be constrained by CPU performance. Back to the RX480 and GTX 1060, OpenGL performance with NVIDIA drivers was already stellar, so it’s harder to notice the difference. Vulkan performance also improved when NVIDIA released new drivers with support for a newer version of the Vulkan API. And Doom still does not support NVIDIA intrinsic shaders while it supports AMD ones, so it’s hard to tell which card is better. Games implementing DX12 so far have been problematic too, with many of them being traditionally AMD-centric or having inferior performance when compared to DX11. Again, hard to compare. Also, NVIDIA cards in general have less compute performance when compared to AMD cards, but more texel performance. In other words, NVIDIA cards should be slower at shaders but noticeably faster at geometry and texturing. It’s hard to find the sweet spot and performance varies from game to game.

I finally went back to my initial plan and got a GTX 1070, for several reasons. First off, a decent aftermarket GTX 1070 was temporarily available in Spain for 420 euros for several days, so I took the chance to grab one. I’m going to use it to play a lot of already-released OpenGL or DX11 games, like The Witcher 3, Arkham Knight, Far Cry 4 and Primal, Fallout 4, etc. Support for the card was already available in NVIDIA’s Linux drivers, while AMD support has been arriving more slowly, albeit in a better way (kernel 4.8 was the best choice but was only released for Fedora 24 last week). 3D performance with the open source drivers will be much better with Mesa 12.1, but that’s yet to be released officially and pushed to Fedora. Also, a GTX 1070 uses about the same amount of power as an RX480 while performing much better, so in that regard NVIDIA cards are more efficient.

That’s more or less why I stayed on “Team Green” like I’ve done since the days of Voodoo cards, but I’m not an NVIDIA fan by any means. In general, I’m not a fan of any brand at all, in any market. I don’t look at the brand stickers on the things I buy, but I do mind their performance, features and price. In fact, being a FOSS enthusiast, I can only congratulate AMD for releasing a phenomenal card like the RX480, improving Mesa and getting support for their cards integrated in the Linux kernel as fast as they did. It’s really amazing. I’m crossing my fingers for them and I sincerely hope Zen ends up being a very nice CPU line that brings some needed competition to Intel. And the same goes for Vega cards and NVIDIA. I’ve owned several AMD Athlon processors in the past, and I look forward to building an all-AMD computer in the future with nice FOSS drivers for Linux and very decent drivers for Windows. They’re definitely on the right path as of now and it’s better for everybody if they stay on it.

From a Nexus 4 to a Moto G4

Posted on .

I recently switched mobile phones, getting rid of my old Nexus 4 and buying a brand new Moto G4. The reasons for the switch more or less follow from what I’ve been saying about mobile phones in a few other posts in the past.

Basically, and repeating myself, I’m disappointed with mobile. I really thought that, with the arrival of the Nexus 4 not too long ago (November 2012 according to Wikipedia), some things would start to be different about smartphones: they wouldn’t be disposable products like most other mobile phones were, and hardware would be supported and patched for many years to come as long as it made sense. I missed that prediction by a long shot. Android updates are a mess and we don’t get long-term support on any device I know of, if such a device even exists. Google, ever changing its policy, seems to have settled on 2 years of system updates after a device is introduced plus a third year of security updates. For me, system updates are irrelevant (and, in my opinion, a frequent source of bugs and instability), but knowing you could get security updates for the next, say, 5 years (sort of like in enterprise Linux distributions) would be a big advance, and nobody does that.

So when I grew tired of having problems with my Nexus 4, and security support for it had in theory expired, I looked for alternatives. I must say I was a bit disappointed with this year’s range of Moto devices, the first one after Lenovo took over. The higher prices, the reduced availability of the Moto G4 Play version (which will not be available in Spain until September 1st at the earliest) and the long delay in launching a new version of the Moto E, which is still not available, forced me to spend more money than expected. I actually bought the phone from Amazon.co.uk not long after the Brexit vote, when the pound sterling lost value, and, despite shipping costs, I still managed to save about 20 euros.

Anyway, this time I’m psychologically prepared to dump it after two years if needed, and I still plan to go even cheaper as time passes. The 200-euro Moto G4 is definitely much better than the 300-euro Nexus 4 was, and in two or three years I’m sure a 100- or 150-euro phone will cut it.

Since I refused to waste time doing a factory reset on my Nexus 4, these were the problems I was having. I know a factory reset may have fixed some of them, but not all.

  • Inability to recover mobile data connectivity after being on WiFi for a long time. Fixed by going into airplane mode and back.

  • The phone ringtone and notification ringtone resetting to weird, faulty values from time to time. I almost always noticed this when the phone rang with a strange tone.

  • A bug that reboots the phone when connecting to certain WiFi routers. It happened to me with one specific router.

  • Flashlight mode or airplane mode activating by themselves from time to time.

  • The phone rebooting itself or powering off for no apparent reason from time to time. This affected me more than any other problem when I had to use the phone as an alarm clock: it powered off overnight and I overslept on two occasions.

What I love about my new Moto G4, apart from getting rid of the previous problems obviously:

  • Improved battery life.

  • The ability to see pending notifications and bring up the unlock screen by tilting the phone, which minimizes the use of the physical buttons.

  • 4G support, like most modern phones have. It’s not really about download speeds: latency is much lower on 4G, and I notice it especially when browsing the web and tapping a link to open a new page.

  • It’s still a vanilla Android experience.

  • Much stronger vibration that can be felt and heard easily without disturbing anyone. The Nexus 4 was incredibly weak in this aspect.

  • The camera and flashlight gestures.

  • The phone in general being more powerful, which again can be noticed when using a web browser.

  • The improved screen resolution is nice too, but it wasn’t a decisive factor.

I’ll put my old Nexus 4 up on eBay soon with a starting price of 50 euros, but contact me privately if you’re interested and that price plus shipping works for you. I don’t expect anyone to bid much higher. Battery life is not great, but with light use I was still getting more than a day on a full charge.

I guess I am a GNU Make zealot now

Posted on .

We’ve been working on a new project at my day job for several months now, and we’ve spent part of that time writing a new build system for our software. Our old system was actually very old, based on imake (with sugar on top) and too complicated for our tastes. It also had several important technical limitations, the biggest one being that parallel builds didn’t work very well, or at all, and those limitations were hard to fix.

We evaluated different possibilities like SCons or CMake but finally went with good ol' GNU Make, and I’ve come to appreciate it more and more as I dig into it. I believe GNU Make sits at an intermediate level between plain POSIX Make and a more complex system like autotools or the previously mentioned SCons or CMake. It has several features that make it a bit more complex, powerful and useful than plain POSIX Make, and I think these features could be leveraged to create a higher level build system if needed.

Specifically, I perceive there are two major details in GNU Make, among many minor ones, that are game changers. The first one is multiple passes. In GNU Make, a Makefile can contain “include” directives that tell make to read the contents of another Makefile. But here comes the twist: if the included file does not exist, but the available Makefile rules explain how to create it, it will be created and the whole process will start again. It may not look important but, together with the “wildcard” function and the C and C++ compilers’ -M family of options, it’s very powerful. Let me give you an example. Imagine a simple C or C++ program in which every source file is contained in a single directory together with the Makefile. In GNU Make it’s perfectly possible to write a short, generic Makefile that builds the program and never needs to change. That Makefile can dynamically obtain the list of source files with the “wildcard” function, it can create a dependency file for each one of them using the compiler’s -M family of options (these dependency files are essentially mini-Makefiles) to properly calculate dependencies and the right build order, and GNU Make will compile the objects and link them into an executable for you. You can then add or remove source files and reorganize code as needed, and the Makefile would not change. Such a Makefile would look very similar to the one I rewrote for my darts miniproject.
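
As a rough idea, a minimal sketch of such a generic Makefile could look like the following, assuming GCC or Clang on Unix and an executable called prog (the program name and flags are placeholders; this is not the actual darts Makefile):

# Recipe lines below are indented with a tab, as usual in make.
CC     := cc
CFLAGS := -Wall -O2

# Discover the source files dynamically; derive objects and dependency files.
SRCS := $(wildcard *.c)
OBJS := $(SRCS:.c=.o)
DEPS := $(SRCS:.c=.d)

prog: $(OBJS)
	$(CC) $(CFLAGS) -o $@ $^

# Each .d file is a mini-Makefile listing the headers its object depends on,
# generated with the compiler's -M family of options.
%.d: %.c
	$(CC) $(CFLAGS) -MM -MT '$(@:.d=.o) $@' $< > $@

# If a .d file is missing or out of date, make builds it with the rule above
# and re-reads the whole Makefile: the multi-pass behaviour described above.
-include $(DEPS)

.PHONY: clean
clean:
	rm -f prog $(OBJS) $(DEPS)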

The second important detail is the “eval” function. GNU Make allows creating macros that receive arguments and, in their crudest form, simply generate text. The “eval” function allows that text to be evaluated as part of the Makefile, which essentially means rules can be generated on the fly depending on runtime conditions. This is very powerful, and it’s yet another trick in a bag that already contains the multi-pass inclusion explained above, a number of functions to manipulate text, word lists and strings, conditional evaluation of parts of the Makefile, and pattern rules.
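
To illustrate the idea, here is a minimal sketch using a hypothetical program-rule macro that generates one link rule per program (the macro name and targets are made up; this is not our actual build system):

# program-rule takes a program name ($(1)) and its objects ($(2)) and expands
# to the text of a complete link rule. The doubled $$ delays expansion until
# the generated rule is actually parsed.
define program-rule
$(1): $(2)
	$$(CC) $$(CFLAGS) -o $$@ $$^
endef

# "call" expands the macro with its arguments; "eval" then parses the result
# as if it had been written directly in the Makefile, creating rules on the fly.
$(eval $(call program-rule,server,server.o net.o))
$(eval $(call program-rule,client,client.o net.o))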

Our old build system could build the whole set of libraries and executables in about 15 minutes. After we switched to something much simpler based on GNU Make, with parallel builds always in mind, our build time was down to 4 minutes on the same 8-core machine.

But we didn’t stop there. The first version of the new build system used recursive Makefiles. I sincerely believe that, from a human point of view, recursive Makefiles are easier to reason about than non-recursive ones when you have a complex directory hierarchy in a non-toy, real-world project, and that’s why so many people and projects use them, despite most GNU Make users having at least heard of the Recursive Make Considered Harmful paper (from 1998!). Naturally, as naive new GNU Make users starting out on the new build system, we did it recursively. And while GNU Make gave us a lot of flexibility and power, and we were very satisfied with the move and the results, we could see how the recursive approach was giving us a couple of minor headaches with parallel builds and needlessly increasing build times by a small fraction.

At that moment, and for the first time, I read the paper properly instead of just skimming through it, and I felt like I was rediscovering an old treasure. Every problem we were having with our recursive Makefiles in our real-world project was reflected there, and the paper at least hinted at possible solutions and explained the basics of how to create a non-recursive build system using GNU Make.

Our non-recursive solution would have to be a bit more complex, but it could work, so we started a second journey writing another new build system based on GNU Make, this time non-recursive. The new new build system (not a typo) cut our build time down to less than 3 minutes on the same machine (the actual time is around 2:40, but it varies a bit up and down as we modify the codebase) and, while more complex in its logic, it’s actually shorter in total lines of build system code.

In fact, I believe the total build time could be brought down to around 2 minutes if “gnatmake” worked better. “gnatmake” is part of the GCC suite and it’s the only reasonable way to compile Ada code in Linux. Our code is a mix of C++ and Ada. I could write a whole post criticizing “gnatmake” but that would be a bit off-topic here and would make this post even longer. Let’s just say the compiler normally needs to collaborate with the build system, like “gcc” and “g++” do by being able to analyze dependencies, or be integrated with it, and “gnatmake” is not very good at that. With both being GNU projects, it’s a bit surprising to see they don’t work very well together. “gnatmake” cannot analyze source code dependencies for Ada without actually compiling code, which is a big drawback, and can’t effectively communicate with GNU Make about the desired level of build parallelism (i.e. the -j option). The first of those points is crucial. Calculating and generating the dependency files for C++ takes around 10 seconds and, after that, “top” reveals the build system fully using all 8 cores until the C++ part is done, which takes another 50 seconds. Overlapping at the end of that, the Ada part kicks in but “gnatmake”, by analyzing dependencies on the fly while it compiles code, rarely uses more than 3 or 4 processor cores.

Single voting district in the Spanish 2016-06 general election

Posted on .

For the previous Spanish general election I analyzed what would have happened if we had used a single voting district for the whole country instead of one district for each of the 50 provinces. I’m repeating the analysis for yesterday’s election out of curiosity.
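
For context, seats in the Spanish Congress are allocated with the D'Hondt method. Assuming the single-district recount simply applies that same rule to nationwide vote totals, a minimal C++ sketch of such an allocation could look like this (party names and vote counts are placeholders, not the real 2016 figures):

// D'Hondt allocation: each seat goes to the party with the highest quotient
// votes / (seats already won + 1).
#include <iostream>
#include <map>
#include <string>

std::map<std::string, int> dhondt(const std::map<std::string, long>& votes, int seats)
{
    std::map<std::string, int> won;
    for (const auto& party : votes)
        won[party.first] = 0;

    for (int s = 0; s < seats; ++s) {
        std::string best;
        double best_quotient = -1.0;
        for (const auto& party : votes) {
            double quotient = static_cast<double>(party.second) / (won[party.first] + 1);
            if (quotient > best_quotient) {
                best_quotient = quotient;
                best = party.first;
            }
        }
        ++won[best];
    }
    return won;
}

int main()
{
    // Placeholder nationwide vote totals, for illustration only.
    std::map<std::string, long> votes;
    votes["Party A"] = 7900000;
    votes["Party B"] = 5400000;
    votes["Party C"] = 5000000;
    votes["Party D"] = 3100000;

    for (const auto& result : dhondt(votes, 350))
        std::cout << result.first << ": " << result.second << " seats\n";
    return 0;
}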

I’ve uploaded my full results as I did the last time, but I’m also including inline results for the four major parties and PACMA, the only other party with a significant difference in the number of seats it got.

Party            Actual seats   SVD seats
PP               137            119
PSOE             85             81
UNIDOS PODEMOS   71             76
C’s              32             47
PACMA            0              4

You can see the current voting districts did more harm to C’s than to any other party. In that sense, it’s suffering what the left-wing party IU has suffered for years.

In this election IU joined forces with Podemos to form Unidos Podemos, and that alliance clearly didn’t work out. In the previous election, a single voting district would have given them 74+13 seats, while they would only have gotten 76 this time by running together. Clearly, it doesn’t add up, and we can see they lost a big chunk of votes.