From a Nexus 4 to a Moto G4

Posted on .

I recently switched mobile phones, getting rid of my old Nexus 4 and buying a brand new Moto G4. The reasons for the switch more or less line up with what I’ve been saying about mobile phones in a few other posts in the past.

Basically, and at the risk of repeating myself, I’m disappointed with mobile. I really thought that with the arrival of the Nexus 4 not too long ago (November 2012 according to Wikipedia) some things would start to change about smartphones: they wouldn’t be disposable products like most other mobile phones, and hardware would be supported and patched for many years to come, as long as it made sense. I missed that prediction by a long shot. Android updates are a mess and no device I know of gets long-term support. Google, ever changing its policy, seems to have settled on two years of system updates after a device is introduced plus a third year of security updates. For me, system updates are irrelevant (in my opinion they are a frequent source of bugs and instability), but knowing you could get security updates for the next, say, five years (much like in enterprise Linux distributions) would be a big advance, and nobody offers that.

So when I got tired of having problems with my Nexus 4, and security support for it had in theory expired, I looked for alternatives. I must say I was a bit disappointed with this year’s range of Moto devices, the first one after Lenovo took over. The higher prices and reduced availability of the Moto G4 Play version (which will not be available in Spain until September 1st at the earliest), plus the long delay in launching a new version of the Moto E, which is still not available, forced me to spend more money than expected. I actually bought the phone from Amazon.co.uk not long after the Brexit vote, when the pound sterling lost value, and despite shipping I still managed to save about 20 euros.

Anyway, this time I’m psychologically prepared to dump it after two years if needed, and I still plan to go even cheaper as time passes. The 200-euro Moto G4 is definitely much better than the 300-euro Nexus 4 was, and in two or three years I’m sure a 100 or 150-euro phone will cut it.

These were the problems I was having with my Nexus 4. I refused to waste time doing a factory reset, which might have fixed some of them, but not all.

  • Inability to recover mobile data connectivity after being on WiFi for a long time. Fixed by going into airplane mode and back.

  • The phone ringtone and notification ringtone resetting to weird and faulty values from time to time. I almost always noticed this when the phone rang with a strange tone.

  • A bug that reboots the phone when connecting to certain WiFi routers. This happened to me with one specific router.

  • Flashlight mode or airplane mode activating by themselves from time to time.

  • The phone rebooting itself or powering off for no apparent reason from time to time. This hurt me more than any other problem when I had to use the phone as an alarm clock: it powered off overnight and I overslept on two occasions.

What I love about my new Moto G4, apart from getting rid of the previous problems obviously:

  • Improved battery life.

  • The ability to see pending notifications and bring up the unlock screen by tilting the phone, which keeps the use of physical buttons to a minimum.

  • 4G support, like most modern phones. It’s not really about download speeds: latency is much lower on 4G, and I notice it especially when browsing the web and tapping a link to open a new page.

  • It’s still a vanilla Android experience.

  • Much stronger vibration that can be felt and heard easily without disturbing anyone. The Nexus 4 was incredibly weak in this aspect.

  • The camera and flashlight gestures.

  • The phone in general being more powerful, which again can be noticed when using a web browser.

  • The improved screen resolution is nice too, but it wasn’t a decisive factor.

I’ll put my old Nexus 4 up on eBay soon with a starting price of 50 euros, but contact me privately if you’re interested and that price plus shipping works for you. I don’t expect anyone to bid much higher. Battery life is not great, but with light use I was still getting more than a day out of a full charge.

I guess I am a GNU Make zealot now

Posted on .

We’ve been working on a new project at my day job for several months now, and we’ve spent part of that time writing a new build system for our software. Our previous system was very old, based on imake (with sugar on top) and too complicated for our tastes. It also had several important technical limitations that were hard to fix, the biggest one being that parallel builds didn’t work well, or at all.

We evaluated different possibilities like SCons or CMake but finally went with good ol' GNU Make, and I’ve come to appreciate it more and more as I dig into it. I believe GNU Make sits at an intermediate level between plain POSIX Make and a more complex system like autotools or the previously mentioned SCons or CMake. It has several features that make it a bit more complex, powerful and useful than plain POSIX Make, and I think these features could be leveraged to create a higher level build system if needed.

Specifically, I perceive there are two major details in GNU Make that are game changers, among many other minor ones. The first one is multiple passes. In GNU Make, a Makefile can contain “include” directives that tell make to read the contents of another Makefile. But here comes the twist: if the included file does not exist, but the available Makefile rules explain how to create it, it will be created and the whole process will start again. It may not look important but, together with the “wildcard” function and the C and C++ compilers’ -M family of options, it’s very powerful. Let me give you an example. Imagine a simple C or C++ program in which every source file is contained in a single directory together with the Makefile. In GNU Make it’s perfectly possible to write a short and generic Makefile that will build the program and never needs to change. That Makefile can dynamically obtain the list of source files with the “wildcard” function, it can create a dependency file for each one of them using the compiler’s -M family of options (these dependency files are essentially mini-Makefiles) to properly calculate dependencies and the right build order, and GNU Make will compile the objects and link them into an executable file for you. You can then add or remove source files and reorganize code as needed, and the Makefile would not change. Such a Makefile would look very similar to the one I rewrote for my darts miniproject, and to the sketch below.
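To make the idea concrete, here is a minimal sketch of such a Makefile for a plain C program. The program name, flags and exact compiler options are my own illustrative choices, not taken from the darts project; recipe lines start with a tab, as usual.

CC      := gcc
CFLAGS  := -O2 -Wall
SOURCES := $(wildcard *.c)
OBJECTS := $(SOURCES:.c=.o)
DEPS    := $(SOURCES:.c=.d)

program: $(OBJECTS)
	$(CC) -o $@ $(OBJECTS)

%.o: %.c
	$(CC) $(CFLAGS) -c -o $@ $<

# One mini-Makefile per source listing the headers it depends on.
%.d: %.c
	$(CC) $(CFLAGS) -MM -MT '$*.o $@' -MF $@ $<

# On a fresh checkout the .d files don't exist yet: GNU Make builds them
# with the rule above and then restarts itself to read them.
include $(DEPS)

clean:
	$(RM) program $(OBJECTS) $(DEPS)

.PHONY: clean

With something like this in place, adding a new source file requires no Makefile edits at all: the next run of make picks it up via “wildcard”, generates its dependency file and recompiles whatever is affected.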

The second important detail is the “eval” function. GNU Make lets you create macros that receive arguments and, in their crudest form, simply generate text. The “eval” function allows that text to be evaluated as part of the Makefile, which essentially means rules can be generated on the fly depending on runtime conditions, as in the sketch below. This is very powerful, and it’s yet another trick in a bag that already contained the multipass inclusion explained above, a number of functions to manipulate text, word lists and strings, conditional evaluation of parts of the Makefile, and pattern rules.
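A minimal sketch of the idea, close to the example in the GNU Make manual; the program names and source lists are made up for illustration.

# Default goal; its prerequisites are added below.
all:

# $(1) is a program name, $(2) its list of sources. Note the $$ escapes,
# needed so the recipe survives the extra expansion performed by eval.
define PROGRAM_template
$(1): $(patsubst %.c,%.o,$(2))
	$$(CC) -o $$@ $$^
endef

# Hypothetical programs and their sources.
PROGRAMS       := client server
client_SOURCES := client.c net.c
server_SOURCES := server.c net.c log.c

# Generate one link rule per program at parse time.
$(foreach prog,$(PROGRAMS),$(eval $(call PROGRAM_template,$(prog),$($(prog)_SOURCES))))

all: $(PROGRAMS)
.PHONY: all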

Our old build system could build the whole set of libraries and executable files in about 15 minutes. After we changed the build system to something much simpler using GNU Make, with parallel build support always in mind, our build time was down to 4 minutes on the same 8-core machine.

But we didn’t stop there. The first version of the new build system used recursive Makefiles. I sincerely believe that, from a human point of view, recursive Makefiles are easier to reason about than non-recursive ones when you have a complex directory hierarchy in a non-toy, real-world project. That’s why so many people and projects use them, even though most GNU Make users have at least heard of the Recursive Make Considered Harmful paper (from 1998!). Naturally, as naive new GNU Make users, we created the new build system recursively. And while GNU Make gave us a lot of flexibility and power, and we were very satisfied with the move and the results, we could see how the recursive approach was giving us a couple of minor headaches with parallel builds and needlessly increasing build times by a small fraction.

At that point I read the paper properly for the first time, instead of just skimming through it, and I felt like I was rediscovering an old treasure. Every problem we were having with our recursive Makefiles in our real-world project was reflected there, and the paper at least hinted at possible solutions and explained the basics of how to create a non-recursive build system using GNU Make.
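The core of that approach is a single make instance that includes a small makefile fragment from every directory, with each fragment contributing its files to global lists. A minimal sketch, with made-up directory and variable names rather than our actual layout:

SUBDIRS := libfoo libbar app
OBJECTS :=

# Pull in one fragment per directory; no recursive make invocations,
# so a single dependency graph drives the whole (parallel) build.
include $(addsuffix /module.mk,$(SUBDIRS))

app/app: $(OBJECTS)
	$(CC) -o $@ $(OBJECTS)

# A fragment (e.g. libfoo/module.mk) just appends its objects:
#   OBJECTS += libfoo/foo.o libfoo/util.o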

Our non-recursive solution would have to be a bit more complex, but it could work, so we started a second journey and wrote yet another build system based on GNU Make, this time non-recursive. The new new build system (not a typo) cut our build time down to less than 3 minutes on the same machine (the actual time is around 2:40, but it varies a bit up and down as we modify the codebase) and, while more complex in logic, it’s actually shorter in total lines of build system code.

In fact, I believe the total build time could be brought down to around 2 minutes if “gnatmake” worked better. “gnatmake” is part of the GCC suite and it’s the only reasonable way to compile Ada code on Linux; our code is a mix of C++ and Ada. I could write a whole post criticizing “gnatmake”, but that would be off-topic here and would make this post even longer. Let’s just say the compiler normally needs to collaborate with the build system, as “gcc” and “g++” do by being able to analyze dependencies, or be integrated with it, and “gnatmake” is not very good at either. Since both are GNU projects, it’s a bit surprising to see how poorly they work together. “gnatmake” cannot analyze source code dependencies for Ada without actually compiling the code, which is a big drawback, and it can’t effectively communicate with GNU Make about the desired level of build parallelism (i.e. the -j option). The first of those points is crucial. Calculating and generating the dependency files for C++ takes around 10 seconds and, after that, “top” reveals the build system fully using all 8 cores until the C++ part is done, which takes another 50 seconds. Overlapping with the end of that, the Ada part kicks in, but “gnatmake”, analyzing dependencies on the fly while it compiles code, rarely uses more than 3 or 4 processor cores.

Single voting district in the Spanish 2016-06 general election

Posted on .

In the previous Spanish general election I analyzed what would have happened if we were to use a single voting district for the whole country instead of one district for every one of the 50 provinces. I’m repeating the analysis for yesterday’s election out of curiosity.

I’ve uploaded my full results as I did the last time, but I’m also including inline results for the four major parties and PACMA, the only other party with a significant difference in the number of seats it got.

Party            Actual seats   SVD seats
PP               137            119
PSOE             85             81
UNIDOS PODEMOS   71             76
C’s              32             47
PACMA            0              4

You can see the current voting districts did more harm to C’s than to any other party. In that sense, it’s suffering what the left-wing party IU has suffered for years.

In this election IU joined forces with Podemos to form Unidos Podemos, and that alliance clearly didn’t work out. In the previous election, a single voting district would have given them 74+13 seats separately, while running together they would only get 76 this time. The numbers don’t add up: they clearly lost a big chunk of votes.

Revisiting 256-color terminal support in Fedora

Posted on .

I finally went ahead and upgraded my main box to Fedora 24. I only had one issue, related to 256-color terminal support.

In Fedora 24, GNU Screen is able to detect both that it’s running under a terminal with 256-color support and that it has itself been compiled with that support. It uses that information to choose an appropriate terminfo entry and value for the TERM environment variable. For example, instead of setting TERM to “screen” and letting “/etc/profile.d/256term.sh” kick in, Screen sets TERM to “screen.xterm-256color” when running under xterm.

This has several advantages. For example, you no longer need to launch a login shell within xterm so that 256term.sh kicks in: any regular interactive shell will work and have 256-color support. When launching Vim inside Screen, you don’t need to pass the -T option to preserve the TERM value, as I explained in a previous post. Any new Screen window will also have the right TERM value.

There is one regression: if you do use -T, somewhat confusingly, Screen will refuse to accept the very value it chose itself, complaining it’s too long.

$ echo $TERM
screen.xterm-256color
$ screen -T "$TERM"
-T: terminal name too long. (max. 20 char)

Finally, if you use SSH to connect to remote machines from a terminal with that TERM value, remote programs may be confused about the type of terminal. This happened to me and the best way to solve it, in my opinion, is to use a wrapper script for “ssh” that sets a more “classic” value for TERM before launching the real “ssh”. This is what I use as “$HOME/bin/ssh”:

#!/bin/sh
# Map Screen's composite TERM values back to classic names that remote
# hosts will recognize, then hand over to the real ssh binary.
case "$TERM" in
   *screen*) export TERM=screen ;;
   *xterm*)  export TERM=xterm  ;;
   *) ;;
esac
exec /usr/bin/ssh "$@"

Fedora 24 and ImageMagick

Posted on .

Fedora 24 was released yesterday. According to the latest version of my ever-changing policy on Fedora upgrades, I should not upgrade to it until the Fedora 25 beta is announced. Will I resist the urge to switch, or will I give in and just wait a couple of weeks until the mirrors are updated and the common problems have been reported? I’m not sure but, in any case, congratulations to the Fedora project and everyone involved!

Now, there’s something that’s been worrying me for a few weeks. I’ll say up front that the Fedora project’s security policy and response to vulnerabilities are normally outstanding. Most of the time, when you find out about a vulnerability, either patches are already out for Fedora or the issue is already being worked on. But not in this specific case. Some weeks ago a set of interesting vulnerabilities was found in ImageMagick and received wide publicity on the Internet. Most of them only affect people who use ImageMagick to process user-submitted images they cannot control, like in the backend of an image hosting service, for example. But a few days later we also got a vulnerability that could be triggered simply by processing an attacker-controlled image, like one downloaded from anywhere or received by email.

With the first set of vulnerabilities, the recommended action was to modify the ImageMagick policy file, but no package update has been published for Fedora 23 or 24 with an updated policy, as far as I know, so you need to modify the policy file yourself. The vulnerabilities discovered later have not had a corresponding fix in the form of new packages either. RHEL updates have been published, though, from what I can see in the bug tracker, but Fedora has so far been left out in the cold. I commented on the issue tracker a few weeks ago, but nobody has replied so far. Any suggestions on how to proceed?