Intel NUC as a home server

Posted on . Updated on .

By now you may have read my posts about finding and choosing an online backup service. Those cover off-site backups. I’m also making changes to the way I do on-site backups: recently, I bought an Intel NUC, model DN2820FYK, which I’m using as my home server for that purpose.

Maybe I’m out of touch with the low-power PC community, but I only recently found out about these devices, which are one of the few ways to get your hands on a low-power Intel processor in a “normal” PC, be it a Bay Trail-M processor or a low-power Haswell processor with a U or Y suffix. The most efficient ones always come in BGA packages and can’t be bought on their own.

So the solution Intel gives you is these NUCs. They’re aimed at the hobbyist market, since they require some work on your part, but at the same time they’re very easy to set up. You just need to buy the kit, a DDR3L memory stick (only one slot is available, and the L is very important) and a 2.5" SATA hard drive. Optionally, a wifi adapter or antenna, I think, if you want to use wifi instead of the wired Gigabit port. You can then install an OS and use it as a normal computer by connecting it to an HDMI screen, a USB mouse and a USB keyboard.

Regarding the hard drive, all models come with a SATA connector, I think, but only the ones with taller boxes have room enough to fit a 2.5" hard drive inside. For some uses, the smaller models save space and can be booted from a USB stick in any case.

The following image shows the back of my device, with sockets for power, HDMI, Ethernet, USB 2.0 and audio.

And this is the front of the device. As you can see, it’s a bit bigger than my hand. The socket on the front is a USB 3.0 port.

Compared to my Raspberry Pi, it has the following disadvantages:

  • It’s more power hungry. The CPU itself uses 7.5W but the whole box by itself has a 36W power supply, while a Raspberry Pi with a hard drive typically needs 2A from its 5V USB power socket, that is, 10W.

  • It’s noisier. Noise is one of the few slightly disappointing aspects of the device. I was genuinely expecting an almost silent PC. The noise is more of a whisper than anything, and the hard drive inside makes more noise when reading or writing data, but it’s not the silence we’ve been spoiled to expect from mobile phones, ultrabooks and some workstations. I believe the noise comes from the power supply. I would have zero problems sleeping in the same room it’s in.

  • It’s cheap, yet obviously more expensive than a Raspberry Pi. The kit plus a 4 GB memory stick plus a hard drive would cost you about $250.

But it has the following advantages:

  • First off, it’s an Intel x86_64 computer with an Intel graphics chip, bluetooth, wifi, gigabit network, etc. While the CPU itself is a single core 2.x GHz chip and may not run Crysis smoothly, this is insanely more powerful than a Raspberry Pi.

  • As a consequence of the previous point, you can install almost anything on it. For example, I installed Slackware Linux, like I have on my desktop computer, using the network installation option by booting from USB. Note this thing supports EFI booting with Secure Boot, but also has a legacy BIOS mode if you want to avoid any trouble.

  • It supports way more RAM. I bought a 4 GB stick but you could go with 8 GB too. More than enough for a home server of almost any kind.

  • The SATA connector makes it possible to monitor the hard drive using SMART and maybe get a warning when the disk is about to fail. As far as I know, you can’t monitor SMART over USB in a standard way, which is a minor gripe I had with my Raspberry Pi. Some Windows programs let you do it using nonstandard mechanisms, but under Linux you’re out of luck. If I’m wrong on this, please let me know in the comments. It’s a very interesting topic.

  • It comes with a proper box, a board to mount it on the back of a TV, a proper power supply and includes a DC adapter with interchangeable plugs. No additional purchases needed in this regard.

The PC market may be dying, or at least shrinking, but I’m very glad Intel is selling these kits and I hope they continue to do so with future CPU models and microarchitectures. When the desktop computer I’m typing this text on dies, I think I’d be delighted to buy one of these to serve as my workstation. Right now the CPUs powering them are a bit underpowered, but Intel should be able to create, in a few years, a 2.5 GHz low-power quad-core CPU suitable for one of these boxes.

New prices for Google Drive and thoughts on hubiC

Posted on .

Almost a month after my post about online backup services, Google decided to spice up competition in online storage by lowering their Google Drive prices. It’s probably a jab aimed at Dropbox.

The new prices are somewhat interesting. Compared to the solution I chose a month ago, hubiC, Google offers less storage for free and way less storage for 10 EUR/USD a month. However, the sweet spot of 100 GB has comparable prices, with hubiC being 1 EUR a month (1.39 USD as of now) and Google being 1.99 USD a month. I need to store about 60 GB of data for now.

In the time I’ve been using hubiC, I must admit the service has been a bit unreliable. It’s been unavailable for several hours a couple of times and, for some reason, moving files from one folder to another from the web interface (which is the one I use) seems to take ages and is prone to failure.

On the other hand, in my brief experience so far with Google Drive (I’m uploading data as we speak), the web interface works perfectly and the service is very reliable.

As I’m encrypting data locally before uploading and using a separate account for this specific purpose, privacy is not a concern. My intention is to keep uploading data to Google Drive until I reach the free limit of 15 GB. Then, if everything keeps working as now, I’ll probably switch from hubiC to Google Drive. The few cents of difference will probably be worth it.

Some people are concerned about Google shutting the service down, but I don’t think Google is getting rid of Google Drive in the short or medium run, like they did with other services. The storage is shared for your mail, documents and pictures, and it’s an integral part of the Google account experience. When used without encryption (like I’m sure most people do), it’s also a source of information for Google about you and your social life, which helps in advertising.

TextSecure released with data channel support

Posted on .

Yesterday Open WhisperSystems released the new and highly anticipated version of TextSecure with data channel support. For me, this is one of the most important events related to cryptography and privacy we’ve seen in several years.

TextSecure is an asynchronous text-messaging app. It’s totally free and open source software released under the terms of the GPLv3, both the client and the server. In previous versions, the communication backend was SMS. Yesterday’s release can use the data channel, that is, your mobile data plan, which in many cases means reduced costs for users. The user interface has also been improved. TextSecure is today one of the most prominent examples of cryptography done right. It could barely be easier to use. As Moxie Marlinspike mentions in yesterday’s blog entry, one of their goals was to bring crypto to the masses. No doubt this release helps normalize crypto usage worldwide, which is fantastic news. iOS and desktop clients are in the works.

Technically speaking, I can only praise the app. Messages are encrypted end to end. Servers don’t have access to the message contents. Furthermore, it has an optional feature to store messages encrypted on your device in an attempt to keep them safe should your phone be physically compromised. This can be changed on the fly at any moment. It can import your SMS conversations and history during the setup phase, and act as the default SMS app if you wish, providing opportunistic message encryption if your recipient is a TextSecure user too. Its crypto foundation is solid and praised, contrary to alternatives like Telegram.

The new release arrives at the perfect moment, after WhatsApp was acquired by Facebook and suffered a 4-hour outage that prompted many to start looking for alternatives. I started using it yesterday and will promote its usage among friends and family. I suggest you give it a try and do the same if you think it works well.

My most sincere public congratulations to the TextSecure team. Using it together with the VoIP application RedPhone, by the same authors, could mean your mobile communications are all private, secure and as convenient as always, starting today.

My search for an online backup service

Posted on .

I’ve been trying to find a good online backup service for several weeks now and I’m choosing hubiC for now. I found out about them recently because they were on the front page of Hacker News.

My needs are pretty simple. Most of my configuration, scripts, software and documents are backed up online in appropriate places where I don’t have to pay any extra, and all of that amounts to less than 1 GiB of data.

My personal images, videos and music are a totally different beast. They take around 60 GiB, which is not much considering what other people have, but more than what’s available for free in most services. They are backed up in a personal hard drive, but I’ve always felt the need to keep a copy off-site that would be safe from home disasters.

A homemade solution could be a hard drive at a friend’s house that could be accessed remotely with rsync or a similar tool, but I don’t like the idea of bothering anyone for this. Maybe I’ll look into it in the future. Ideally, I’m also looking for a solution that preserves my privacy.

For this simple situation, these are the solutions I considered and what I perceive are the pros and cons.

  • Flickr was the first option I considered. The main advantage is having 1 TiB of space for free. On the other hand, videos are limited to 30 seconds and images have to be uploaded unencrypted. The process of uploading the existing images from the web interface was a bit clumsy when I tried it some weeks ago. They could only be uploaded in groups of 20 or so, which needs constant interaction. Music cannot be uploaded.

  • Dropbox is nice because it’s popular and would allow for local encryption before uploading. It has some proprietary clients but allows access through a web interface. So far, so good. Price is $10 a month in the 100 GiB plan, which leaves the price at $0.10 per GiB, or $0.08 if you choose yearly payments. It’s not a bad option and they have additional plans that scale linearly for 200 GiB and 500 GiB.

  • Mega, the service by Kim Dotcom, is also similar and has nice prices. Forgetting about their native client-side encryption for a moment (because you can encrypt the files yourself before uploading), their plan of 500 GiB for €8.33 a month is very competitive (around $0.02 per GiB). It has bandwidth limits but, as far as I know, they’re more than enough for backup purposes. The bad side is Kim Dotcom himself: you never know if they’re going to take the service down like they did with Megaupload, and you risk losing your online backup. I doubt that’s going to happen this time, however.

  • Amazon Glacier. At barely $0.01 per GiB a month it’s one of the cheapest options when talking about pure storage. Note these storage costs scale with the real usage instead of providing a fixed-price plan with a given amount of available space. However, it’s more technically oriented as all they give you is an API, even if there are free and open source tools that ease using it. For example, I’ve read good things about git-annex, which lets you combine Glacier with other personal storage solutions and uses git as the storage backend. Also, boto is a Python library for AWS that has a Glacier module and can be used to build custom applications. None of that would free you from Glacier’s weird retrieval fees and mechanisms, which are its main disadvantage.

  • is technically one of the best solutions, in my humble opinion. Your storage space can be accessed with standard Unix tools like rsync, SSH, SFTP, etc. Support is excellent, or so I’ve heard, and you speak directly with engineers. From my perspective it’s what I’d like to have in an ideal world. However, their prices are not in the lowest ranges. With available discounts you can get as low as $0.10 per GiB a month, which is on par with Dropbox. Prices are for real usage, more or less. You request the specific volume size you want and they make the space available for you. It can be increased or decreased with some granularity.

  • Tarsnap is different but excellent too considering technical aspects. Encryption is integrated in the service and performed client-side. The remote end has no chance of decrypting your personal data. This is not just a promise. The client tools are CLI and open source, and the service is managed by Colin Percival, the FreeBSD security officer. The problem is price. Based on Amazon S3, you are charged for real usage and bandwidth, at $0.30 per GiB each.

  • Finally we arrive at hubiC. I chose it because the prices are ridiculously low: at €1 a month for 100 GiB, it’s about as cheap as Amazon Glacier, without its disadvantages. Bandwidth is unlimited as far as I know, and access is through a web interface and proprietary clients (Dropbox style) for the main platforms. The service is run by OVH, a big French hosting company, and seems trustworthy.

My modus operandi with hubiC is tarballing and encrypting picture sets locally and then uploading those to their service. So far I’ve uploaded around 7 GiB of data and will jump to one of their paid plans, if there are no surprises, as soon as I reach my free limit of 25 GiB.
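That workflow can be sketched in shell. This is an illustration, not my exact commands: the file names and passphrase are placeholders, and OpenSSL stands in for whatever local encryption tool you prefer.

```shell
# Bundle one picture set into a tarball (sample data for illustration).
mkdir -p pictures-2014-03 && echo "sample" > pictures-2014-03/img1.jpg
tar czf pictures-2014-03.tar.gz pictures-2014-03

# Encrypt it locally; only the encrypted file ever leaves the machine.
openssl enc -aes-256-cbc -pbkdf2 -salt \
    -pass pass:example-passphrase \
    -in pictures-2014-03.tar.gz -out pictures-2014-03.tar.gz.enc

# To restore later, decrypt and unpack:
# openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:example-passphrase \
#     -in pictures-2014-03.tar.gz.enc | tar xz
```

The encrypted tarball is then uploaded through the web interface, so the service never sees plaintext data.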

Revisiting C/C++ basics: assert

Posted on . Updated on .

I work with C/C++ code five days a week but I don’t post much on the topic, so I decided to write a piece about assert() today. This is partly motivated by a mistake I made this week: I got sloppy while programming something close to a deadline and forgot one of assert()’s features I normally don’t worry about. It wasn’t an issue, as testing caught the bug, but it made me relearn some basic concepts I wanted to share.

The assert basics

As you know, assert() is a standard macro declared in assert.h (cassert in C++). It’s normally used to make sure certain conditions are met at specific points in the code. Failing to meet such a condition is considered a bug in the program logic, and assert() prints an error message helpful to developers before aborting the program.

Standard assert() macro has an often forgotten feature. If the NDEBUG macro is defined prior to including assert.h or cassert, assert() is turned into a no-op and its expression is not evaluated.

This behavior can cause bugs that only appear in release builds when the expression to evaluate has side effects. This is all mentioned clearly in most manpages documenting the macro. Some frameworks define NDEBUG for release builds; notably, CMake defines NDEBUG for its Release, RelWithDebInfo and MinSizeRel build types.

The framework we normally use to build our software does not define NDEBUG under any circumstances. However, what I was writing this week was built in a different environment that did define NDEBUG, and it caught me by surprise.

In addition, sometimes we want to assert an expression even in release builds, especially when we can’t prove that evaluating the assertion slows the program down significantly, which is unlikely in most cases; leaving asserts enabled in release builds is then a good practice. Furthermore, and especially when asserts are evaluated in release builds, if an assertion fails we probably want the program to crash in a controlled manner, perhaps printing additional information so the user can report the problem, in both GUI and CLI applications.


To avoid release-build-only bugs, we should make sure the asserted expression does not have any side effects. For example:

class Foo {
public:
        // ...
        size_t size() const;
        bool test_and_set(int value);
        // ...
};

// Later in the code...

Foo foo;
assert(foo.size() > 0);      // Good. size() is const and has no side effects.
assert(foo.test_and_set(1)); // BAD! test_and_set probably has side effects.
The test_and_set method will not be called if NDEBUG is defined in a release build prior to including the assert.h or cassert standard headers.

Apart from that, we may want to assert expressions even in release builds with NDEBUG and provide a way to crash in a controlled manner. This can be achieved by using a set of custom assertion macros, with separate cases for assertions that should be removed from release builds and assertions that should be present in release builds too.

The following code contains an attempt at creating such a set of macros. It is partly copied without shame from an excellent post on the topic at Stack Overflow.

The macros below are designed to be usable as expressions, with a value that can take part in a boolean test, which rules out common macro compositions like “do … while(0)”. ASSERT is a macro whose expression will always be evaluated, while DASSERT mimics the standard assert() and becomes a no-op when NDEBUG is defined, so its expression is only evaluated in debug builds and not in release builds.

Users can provide their own assertion handlers to perform additional actions on assertion failure as well as controlling whether the program crashes or not via abort().

Hopefully the header comments are clear enough.

/*
 * This header defines two macros and a function.
 *
 * ASSERT is a macro that will always check the expression passed to it. If the
 * expression evaluates to false, it will run the assertion handler. If the
 * assertion handler returns true, it will halt the program by calling
 * abort(3).
 *
 * DASSERT is a macro that does the same except that it will be a no-op if
 * NDEBUG is defined, like standard assert(3). The assert expression will only
 * be evaluated in debug environments.
 *
 * set_assert_handler() is a function that allows users to pick their own
 * assertion handler different from the default provided one. Assertion
 * handlers should conform to the following prototype:
 *
 *     int (*function)(const char *expression, const char *file, int line);
 *
 * The default assertion handler will print the expression, line and file to
 * standard error and return true so the program is aborted. The value returned
 * from set_assert_handler() is the old assertion handler function in case it
 * needs to be restored. set_assert_handler() is NOT reentrant.
 */

#ifndef XASSERT_H_
#define XASSERT_H_

#ifdef __cplusplus
#include <cstdlib>
#else
#include <stdlib.h>
#endif

typedef int (*assert_handler_ptr_t)(const char *, const char *, int);

#ifdef __cplusplus
extern "C" {
#endif

assert_handler_ptr_t set_assert_handler(assert_handler_ptr_t);
extern assert_handler_ptr_t assert_handler_ptr;

#ifdef __cplusplus
} /* extern "C" */
#endif

#ifdef __cplusplus
#define HALT() (std::abort())
#else
#define HALT() (abort())
#endif

#define ASSERT_HANDLER(x, y, z) ((*assert_handler_ptr)(x, y, z))

#define ASSERT(x) (!(x) && ASSERT_HANDLER(#x, __FILE__, __LINE__) && (HALT(), 1))

#ifndef NDEBUG
#define DASSERT(x) ASSERT(x)
#else
#define DASSERT(x) (1)
#endif

#endif /* XASSERT_H_ */

#include <stdio.h>

#include "xassert.h"

#ifdef __cplusplus
extern "C" {
#endif

static int
assert_handler_default(const char *expr, const char *file, int line)
{
    fprintf(stderr,
            "Assertion failed on %s line %d: %s\n"
            "Please report this problem. Aborting program.\n",
            file, line, expr);
    return 1;
}

assert_handler_ptr_t assert_handler_ptr = assert_handler_default;

assert_handler_ptr_t
set_assert_handler(assert_handler_ptr_t handler)
{
    assert_handler_ptr_t old = assert_handler_ptr;
    assert_handler_ptr = handler;
    return old;
}

#ifdef __cplusplus
} /* extern "C" */
#endif

Comments about possible improvements or problems with the above code are welcome.

Further thoughts

Messages printed by assertion macros are usually not user-friendly. However, I like to use assert() sometimes to check for conditions that should not be present when running the program normally and cannot be attributed to simple user configuration or runtime errors.

For example, suppose a given program uses an image file as a resource and looks for it in a standard location known at installation time. Under normal circumstances, the image should not be missing or corrupted at all. If the image is missing, it’s because either something went wrong with the installation process or someone deleted it by accident, or there’s a disk problem, etc. When opening the image, you probably want to assert() it could be opened without any problems, but a failure to assert that condition is not a program bug.

If that assertion fails, the user may not be at fault and the error message may not be helpful at all. For these cases, a third assert macro variant, not included above, could be used: one which takes a string argument in addition to the asserted expression and uses it to notify the user of the unusual error condition in a friendlier way.

Update: I created a repository on GitHub to hold this code, with more corrections and the code from the “Further thoughts” section.