Bespoke solution to monitor power outages at home

Posted on .

When I came home from a five-day family trip this summer I immediately realized the power was off in our flat. The main switch in the electricity panel was down, together with one other switch. Everything appeared to have happened a few days before we arrived, so a few things in the fridge were ruined and most of the freezer contents had to be discarded. We do have relatives living close by with an emergency set of keys but, as we were completely unaware of the situation, we couldn’t ask them to go check.

I thought about what happened and decided I wanted to set something up so I would get warned if power fails while I’m away. My first thought was to use something available off the shelf, but I failed to find anything cheap and easy. Fortunately, I already had a couple of things that could help me here: a small cloud server (the one that hosts this blog) and a permanently-connected RPi4 that I use as a Pi-Hole at home. To be warned of a power failure, I wanted the RPi to ping (somehow) the cloud server from time to time and, on the cloud server, to periodically check if a recent ping had been received. If too much time goes by without a ping from home, we can assume something’s wrong: either a power outage or an Internet service outage.

The implementation would need the following things:

  1. The cloud server had to be able to send me an email.

  2. The cloud server could have a CGI script that, when accessed, would write a timestamp somewhere.

  3. The RPi would access that CGI script once every minute, for example.

  4. The cloud server would have something to check timestamps periodically, then email me if it’s been too long without a ping.

The difficulty is that I’m not a web developer and, in addition, I’m using nginx on the cloud server, which doesn’t support CGI scripts, so things get a bit more complicated. However, I made all of this work and wanted to share my scripts in case someone finds them useful.

Sending emails from the server

This one is easy because I was already doing something similar to monitor disks on a few computers using smartd. When smartd detects a disk may be about to fail, it can be told to email root, and $HOME/.forward can be used to redirect that email to a script. The script, as in this case, can use msmtp, a nice program that lets you send emails from the command line using an SMTP server. As a Fastmail user, I generated a new set of credentials for SMTP access, installed msmtp on the cloud server and created a config file for it at /etc/msmtprc. Note that running msmtp --version reports the right system configuration file path. The configuration file looks like this:

account default
host SERVER
port PORT
auth on
user USERNAME
password PASSWORD
tls on
tls_certcheck on
tls_starttls off
tls_trust_file /etc/ssl/certs/ca-bundle.crt
syslog on
timeout 30

In my case, SERVER is smtp.fastmail.com, PORT is 465, and USERNAME and PASSWORD are the ones I created. The TLS trust file has that path on Fedora, but it may be different on other distributions.
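
Before writing any script, it’s easy to verify this setup by piping a minimal message to msmtp from the command line; the addresses below are placeholders for your own:

printf 'Subject: msmtp test\n\nIt works.\n' | msmtp -f you@example.com you@example.com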

With that configuration all set, I created the following script as /usr/local/bin/pingmonitor-mail:

#!/usr/bin/env bash
FROM=YOUR_EMAIL_ADDRESS
TO=YOUR_EMAIL_ADDRESS
DATE="$( TZ=Z date -R )"
SUBJECT="$1"
BODY="$2"

msmtp -f "$FROM" "$TO" <<EOF
From: $FROM
To: $TO
Date: $DATE
Subject: $SUBJECT

$BODY
EOF

It expects the subject of the email as the first argument and the body, typically a single sentence, as the second. I ran it a few times from the command line and verified it worked perfectly.
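
For reference, after making the script executable, a test run looks something like this (subject first, body second):

chmod +x /usr/local/bin/pingmonitor-mail
pingmonitor-mail '[pingmonitor] Test' 'This is a test message.'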

CGI script to record ping timestamps

As mentioned before, nginx does not support CGI. It only supports FastCGI, so this is slightly more complicated than expected. After a few tries, I settled on using /var/run/pingmonitor as the main directory containing the FastCGI socket (more on that later) and /var/run/pingmonitor/pings for the actual pings.

I thought a bit about how to record the ping timestamps. My initial idea was to save them to a file, but then I started overthinking it. If I used a single file to store the timestamps (either appending to it or overwriting its contents) I wanted to make sure the checker would always read a full timestamp and would never get partial file contents. If the CGI script wrote the timestamp to the file, it would need to lock it somehow for the improbable case in which the checker was reading the file at the same time. To avoid that complication, I decided to take advantage of the file system to handle it for me: /var/run/pingmonitor/pings would be a directory instead. When the CGI script runs, it creates a new empty file in that directory, with the timestamp as the name of the file. The checker lists the files in the directory, converts their names to timestamps and checks the most recent one. I think that works because, when you list the directory contents, a given file either exists or it does not, so it’s atomic. If you know it’s not atomic, please leave a comment or email me with a reference.

For the FastCGI script itself, I installed the fastcgi Python module using pip. It allowed me to easily create a FastCGI process that launches before nginx, runs as the nginx user and creates the timestamp files when called. Take a look below:

#!/usr/bin/env python
import os
import fastcgi
import sys
import pwd
import grp
import time
import pathlib

RUN_DIR = '/var/run/pingmonitor'
PINGS_DIR = os.path.join(RUN_DIR, 'pings')
USER = 'nginx'
GROUP = 'nginx'
ONE_SECOND_NS = 1000000000

# Create run and pings directory. Not a problem if they exist.
os.makedirs(RUN_DIR, mode=0o755, exist_ok=True)
os.makedirs(PINGS_DIR, mode=0o755, exist_ok=True)

# Get the UID and GID for the nginx user and group.
uid = pwd.getpwnam(USER).pw_uid
gid = grp.getgrnam(GROUP).gr_gid

# Make the directories be owned by the nginx user, so it can create the socket
# and ping files.
os.chown(RUN_DIR, uid, gid)
os.chown(PINGS_DIR, uid, gid)

# Switch to the run (base) directory to create the socket there.
os.chdir(RUN_DIR)

# Become the nginx user.
os.setgid(gid)
os.setuid(uid)

@fastcgi.fastcgi()
def pingmonitor():
    timestamp = time.time_ns() // ONE_SECOND_NS
    filename = '%016d' % (timestamp,)
    path = os.path.join(PINGS_DIR, filename)
    pathlib.Path(path).touch()
    sys.stdout.write('Content-type: text/plain\n\n')
    sys.stdout.write('OK\n')

Apart from the directory creation and user-switching logic at the beginning, the interesting part is the pingmonitor function. It obtains the Unix epoch in nanoseconds and converts it to seconds. The file name is a zero-padded version of that number, the file is then “touched”, and a reply is served to the HTTP client.

Not pictured: by decorating the function with @fastcgi.fastcgi(), a socket named fcgi.sock is created in the current directory (/var/run/pingmonitor). That is the FastCGI socket nginx will use to redirect requests to the FastCGI process. Also, if you run the file as a script, the decorator will create a main loop for you.

I saved the script to /usr/local/bin/pingmonitor.cgi and set up a systemd service to start it. The unit file is called /etc/systemd/system/pingmonitor.service:

[Unit]
Description=FastCGI Ping Monitor Service
After=network.target

[Service]
Type=simple
Restart=always
RestartSec=1
ExecStart=/usr/local/bin/pingmonitor.cgi

[Install]
WantedBy=nginx.service
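
For completeness, this is roughly how the script becomes executable and the service gets enabled to start now and on every boot (standard systemd commands, run as root):

chmod +x /usr/local/bin/pingmonitor.cgi
systemctl daemon-reload
systemctl enable --now pingmonitor.service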

To hook it up with nginx, I created a block in its configuration file:

        location /cgi-bin/RANDOM_STRING-pingmonitor.cgi {
            # Document root
            root DOCUMENT_ROOT;
            # Fastcgi socket
            fastcgi_pass unix:/var/run/pingmonitor/fcgi.sock;
            # Fastcgi parameters, include the standard ones
            include /etc/nginx/fastcgi_params;
            # Adjust non standard parameters (SCRIPT_FILENAME)
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }

I used a StackOverflow question as a reference for this.

In the nginx configuration block you can see I’m using RANDOM_STRING as part of the CGI script URL: a long random string, because I didn’t want the URL to be easily discoverable. Its location is basically a secret between my server and my RPi.
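
In case it’s useful, one simple way to generate such a string is asking openssl for a few random bytes in hex form:

openssl rand -hex 32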

After setting everything up I accessed the URL with my browser multiple times and confirmed the timestamp files were being created.
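
The same check can be done from a terminal; the URL below is, of course, a placeholder for the real secret one:

curl -s https://YOUR_SERVER/cgi-bin/RANDOM_STRING-pingmonitor.cgi
ls /var/run/pingmonitor/pings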

Accessing the CGI script periodically

This is the easy part, and it runs on the RPi. I could’ve used a systemd timer but went with a service instead (like the guy pushing all shapes through the same hole), so the main piece is a script that pings the URL once a minute, saved as /usr/local/bin/pingmonitor-pinger.sh.

#!/usr/bin/env bash
while true; do
    sleep 60
    # URL is the secret endpoint configured in nginx on the cloud server.
    curl --silent --max-time 30 -o /dev/null URL
done

And the corresponding systemd service file called /etc/systemd/system/pingmonitor-pinger.service:

[Unit]
Description=Ping Monitor Pinger Service
After=network.target

[Service]
Type=simple
Restart=always
RestartSec=1
ExecStart=/usr/local/bin/pingmonitor-pinger.sh

[Install]
WantedBy=multi-user.target
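
As with the server-side units, making the script executable and enabling the service should leave the pinger running on every boot of the RPi:

chmod +x /usr/local/bin/pingmonitor-pinger.sh
systemctl daemon-reload
systemctl enable --now pingmonitor-pinger.service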

Checking timestamps periodically

This part goes on the cloud server again. The script sends a single email when it detects pings are too old (more than 1000 seconds, a more or less reasonable limit chosen arbitrarily), and another one if the pings come back. It’s also in charge of removing old ping files. I could have removed all existing files with each check, but I decided to arbitrarily keep the last 10 in case they turn out to be useful for something. To send emails, it uses /usr/local/bin/pingmonitor-mail as described above. I saved it under /usr/local/bin/pingmonitor-checker.py.

#!/usr/bin/env python
import glob
import os
import time
import subprocess
import sys

PINGS_DIR = '/var/run/pingmonitor/pings'
MAIL_PROGRAM = '/usr/local/bin/pingmonitor-mail'
MAX_DIFF = 1000 # Seconds.
SLEEP_TIME = 60 # Seconds.
MAX_FILES = 10
ONE_SECOND_NS = 1000000000

def get_epoch():
    return time.time_ns() // ONE_SECOND_NS

def print_msg(msg):
    print(msg, file=sys.stderr)

os.makedirs(PINGS_DIR, mode=0o755, exist_ok=True)
os.chdir(PINGS_DIR)

start_time = get_epoch()
ping_missing = False

while True:
    now = get_epoch()

    # List of files with a numeric name.
    filenames = glob.glob('0*')

    # Check the last timestamp. If no files exist yet, wait at least from the start
    # of the script.
    if len(filenames) == 0:
        last_timestamp = start_time
    else:
        filenames.sort()
        most_recent = filenames[-1]
        last_timestamp = int(most_recent, base=10)

    current_diff = now - last_timestamp

    # Remove old files.
    if len(filenames) > MAX_FILES:
        kept_files = filenames[-MAX_FILES:]
        for fn in filenames:
            if fn not in kept_files:
                os.remove(fn)

    if current_diff > MAX_DIFF and (not ping_missing):
        ping_missing = True
        subject = '[pingmonitor] No pings for %d seconds' % (MAX_DIFF,)
        body = 'Last timestamp: %s' % (time.ctime(last_timestamp),)
        print_msg('%s; %s' % (subject, body))
        subprocess.run([MAIL_PROGRAM, subject, body])

    elif current_diff < MAX_DIFF and ping_missing:
        ping_missing = False
        subject = '[pingmonitor] Ping recovered'
        body = 'Last timestamp: %s' % (time.ctime(last_timestamp),)
        print_msg('%s; %s' % (subject, body))
        subprocess.run([MAIL_PROGRAM, subject, body])

    time.sleep(SLEEP_TIME)

Again, such a script could be run as a systemd timer, but I decided to write it as a loop and use a service instead, called /etc/systemd/system/pingmonitor-checker.service.

[Unit]
Description=Ping Monitor Checker Service
After=pingmonitor.service
Wants=pingmonitor.service

[Service]
Type=simple
Restart=always
RestartSec=1
ExecStart=/usr/local/bin/pingmonitor-checker.py

[Install]
WantedBy=multi-user.target
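
After enabling it like the other services, the checker’s messages (printed to stderr) end up in the journal, which is handy while testing:

chmod +x /usr/local/bin/pingmonitor-checker.py
systemctl enable --now pingmonitor-checker.service
journalctl -u pingmonitor-checker.service -f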

Final thoughts

After setting that up, I checked it works by experimenting with a few timeouts and by stopping and starting the pinger service on the RPi. I’m pretty happy with how things turned out, given that this sits outside my usual domain. Note the setup cannot tell a power outage from an Internet service outage, so it may not be suitable for you if your home Internet connection is unreliable and all you’re interested in are the actual power outages. In my case, Internet outages are very infrequent, so I’m willing to live with a few false positives if that means I won’t waste the contents of my fridge and freezer again.

Waiter, there's an IES in my DGC!

Posted on .

Filed under: igalia

Finally! Yesterday Khronos published Vulkan 1.3.296 including VK_EXT_device_generated_commands. Thousands of engineering hours seeing the light of day, and awesome news for Linux gaming.

Device-Generated Commands, or DGC for short, are Vulkan’s equivalent to ExecuteIndirect in Direct3D 12. Thanks to this extension, originally based on a couple of NVIDIA vendor extensions, it will be possible to prepare sequences of commands to run directly from the GPU, and to execute those sequences without any data going through the CPU. Also, Proton now has a much more official leg to stand on when it has to translate ExecuteIndirect from D3D12 to Vulkan while you run games such as Starfield.

The extension not only provides functionality equivalent to ExecuteIndirect: it goes beyond that and offers more fine-grained control, like explicit preprocessing of command sequences, or switching shaders and pipelines with each sequence thanks to something called Indirect Execution Sets, or IES for short. All of this potentially works with ray tracing, compute and graphics (both regular and mesh shading).

As part of my job at Igalia, I’ve implemented CTS tests for this extension and I had the chance to work very closely with an awesome group of developers discussing specification, APIs and test needs. I hope I don’t forget anybody and apologize in advance if so.

  • Mike Blumenkrantz, of course. Valve contractor, Super Good Coder and current OpenGL Working Group chair who took the initial specification work from Patrick Doane and carried it across the finish line. Be sure to read his blog post about DGC. Also incredibly important for me: he developed, and kept up-to-date, an implementation of the extension for lavapipe, the software Vulkan driver from Mesa. This was invaluable in allowing me to create tests for the extension much faster and making sure tests were in good shape when GPU driver authors started running them.

  • Spencer Fricke from LunarG. Spencer did something fantastic here. For the first time, the needed changes in the Vulkan Validation Layers for such a large extension were developed in parallel while tests and the spec were evolving. His work will be incredibly useful for app developers using the extension in their games. It also allowed me to detect test bugs and issues much earlier and fix them faster.

  • Samuel Pitoiset (Valve contractor), Connor Abbott (Valve contractor), Lionel Landwerlin (Intel) and Vikram Kushwaha (NVIDIA), for providing early implementations of the extension, discussing APIs, reporting test bugs and needs, and making sure the extension works as well as possible for a variety of hardware vendors out there.

  • To a lesser degree, most others mentioned as spec contributors for the extension, such as Hans-Kristian Arntzen (Valve contractor), Baldur Karlsson (Valve contractor), Faith Ekstrand (Collabora), etc., making sure the spec works for them too and makes sense for Proton, RenderDoc, and drivers such as NVK and others.

As you may have noticed, a significant part of the people driving this effort work for Valve and, on my side, the work has also been carried out as part of Igalia’s collaboration with them. So my explicit thanks to Valve for sponsoring all this work.

If you want to know a bit more about DGC, stay tuned for future talks about this topic. In about a couple of weeks, I’ll present a lightning talk (5 mins) with an overview at XDC 2024 in Montreal. Don’t miss it!

Signing PDFs without embedded forms under Linux

Posted on . Updated on .

Edit: I’ve added a couple more methods for modifying PDF files with suggestions from readers. Thanks everyone!

Picture the following situation: someone sends you a PDF document and asks you to send it back signed. Some problems, though:

  • The PDF doesn’t have an embedded form, it’s just something they exported from their word processor.

  • They’re not using any signing service like DocuSign, Dropbox Sign or any other.

Sounds implausible? I’ve faced this situation multiple times. Off the top of my head:

  • When I joined Igalia some years ago I had to do that with a few documents.

  • Multiple times, one of them very recent, when interacting with some electronic administration websites, where the definition of electronic administration is:

    • We make the form available to you as a PDF document (be thankful we don’t give you a .docx file, at least).

    • You can send the filled form back to us through the Internet, by attaching a file somehow.

    • No, we don’t have an HTML version of the form.

    • No, we don’t have anything set up so you can sign the document with your official electronic certificate or anything similar.

  • With documents from the homeowner association management company.

If you’re like me, what do you do? Print the document, fill it, scan it, attach it.

Side quest: scanning pages

No, I’m not taking pictures of the document with my phone to send it back. For me, that’s a solved problem now. The scanning system on Linux is called SANE (standing for Scanner Access Now Easy) and it has an official, non-HTTPS site. If you want to properly scan an A4 or US-Letter sized document, you can get a desktop scanner. The SANE website has a list of supported devices, indicating how well supported each one is. Long story short? I’m partial to Canon scanners because I know they work reasonably well. On their website, as of the time I’m writing this, they list a couple of them: the CanoScan LiDE 300 and the CanoScan LiDE 400. Both are very well supported under SANE. I got the 300 model some time ago. Plug it in, open the Document Scanner Gnome app and start scanning. No setup needed.

Back to signing

Can we do better than the print+scan cycle? Can you insert your scanned signature into the document easily? For that, you have to solve two separate problems.

  • Getting a nice version of your signature in image form.

    • Ideally, as a PNG with a transparent background.

    • If not, a clear image with a light background could do it.

  • Inserting that image on the PDF somehow.

I don’t know if this sounds surprising to you, but the hardest problem is the first one. However, you only have to solve it once. Let’s start, then, with the second one.

Inserting the signature

Thanks to Emma Anholt for mentioning you can open PDFs with LibreOffice Draw and use it to insert images and text anywhere in the document! Thanks also to Alyssa Rosenzweig for mentioning Xournal (or Xournal++), which I’ve found works much better at that task than LibreOffice. Also, special thanks to everybody who contacted me mentioning different methods and tools! I really enjoy posting these articles just for the reactions, comments and suggestions.

Method 1: Xournal++

Xournal++ is an open source program you can typically install from your distribution repositories. I installed it from Gnome Software on Fedora without issues, and selected the official RPM as the source.

Gnome Software showing Xournal++ as installed

The application is designed to create notes in general, using images, text or typically a graphics tablet, but it turns out it has an option to annotate an existing PDF.

Xournal++ File menu showing an option to annotate a PDF file

It will open the PDF and let you put anything on it. You can use a wide variety of tools to insert images or text boxes, or even draw anything on top by hand if you have a graphics tablet or incredible mouse skills. You should take a look at the toolbar and/or the tools menu to see what it can do. The easiest option is clicking on the image insertion tool, clicking anywhere in the document and selecting the image you want to insert. If it turns out to be too big or small, you can drag a corner to increase or decrease its size while preserving the aspect ratio, and you can move it around freely. The text insertion tool is also handy to fill in complex forms that can’t be filled as a proper form. When done, use “File > Export as PDF” to generate a new document. End of the journey!

Method 2: Firefox

Amazingly, I wasn’t aware of this option until it was mentioned to me by a couple of people. Thanks to Sasi Péter and an anonymous user for the tip.

When you open a PDF file under Firefox, there’s a toolbar that lets you insert text, draw and insert images, much like Xournal++.

Screenshot of the Firefox PDF editing toolbar

If, as in my case, you don’t have the image insertion button, you need to take a look at the pdfjs.enableStampEditor setting in about:config. Once you’ve finished modifying the document, click on the Save icon (a folder with an arrow pointing down) to save the modified version.

Method 3: LibreOffice Draw

You can also open PDFs with LibreOffice Draw and use its incredibly wide variety of tools to modify the document. That includes inserting images, text or whatever you need. However, I’ve found LibreOffice Draw to be a bit lacking when it comes to preserving the original PDF contents. For example, one of the documents I had to sign had an appended LaTeX-generated subdocument in its last pages, and LibreOffice Draw was not able to preserve the font and style of those pages after inserting the signature. Xournal++, on the other hand, preserved them without issues.

Method 4: falsisign

Suggested by Anisse on Mastodon.

falsisign is a tool intended to be used for automation and it probably falls outside of what most people would consider a friendly tool. However, it looks incredibly practical if you need to sign a lot of pages in a document, and its default behavior makes the final document look as if it had been printed, signed and then scanned again, in case you want or need your document to look that way. There are options to disable that behavior, though.

Creating a nice version of your signature as a PNG file

This is getting long so I’ll cut it short: follow one of the available video tutorials on YouTube. I used the one I just linked, and I can boil it down to the following steps, but please follow the video in case of doubt:

  1. If possible, scan your signature on a white paper using a pen that leaves a clear mark, like a marker pen or Pilot V5 or something like that.

  2. If you don’t have a scanner, take a picture with your phone trying not to create a shadow from your body or the phone on the paper.

  3. Import the image into Gimp and crop it as you see fit.

  4. Decrease saturation to make it gray-scale.

  5. Play with the color value curves, moving up the highest part of the curve so the white (or almost white) background becomes fully white, and moving down the lowest part of the curve so the trace becomes even darker and clearer, until you’re satisfied with the results.

  6. Remove any specks from the white background using the drawing tools.

  7. Duplicate the main layer, creating a copy of it.

  8. Invert the colors of the layer copy so it becomes white-on-black.

  9. Add a transparency mask to the original layer, with black as transparent.

  10. Copy the inverted layer and paste it into the transparency mask of the original layer (this will make the white background transparent).

  11. Export the final result as a PNG.

Caveats

Take into account that it’s easy to extract images from PDF files, so with this method it may be easy for someone to extract your signature from the generated document. However, this can also be done easily, with minimal image manipulation skills, from a scan or picture of a physical document already signed, even if the signature crosses some lines in the document, so I don’t think it changes the threat model in a significant way compared to the most common solution.

The Dark Side of the Blog

Posted on .

If you’re browsing this blog from a device in which the preferred theme is dark, you may notice the blog now has a dark theme. This is an idea that had crossed my mind multiple times in the past months, but it’s not until very recently that I decided to take a serious look at how to do it. My decision to finally tackle the issue was motivated by “contradicting” reports I read online. Some people complain about how the lack of a light mode option is an accessibility or usability issue for them, making it impossible to read a long text. For other people, it’s the other way around. The only proper solution is to have both a dark mode and a light mode and use the one preferred by the user, which is what this blog does now. No JavaScript was harmed (or used) in the process, so let’s take a quick look at how to achieve this. In other words, follow me in this journey to the basic stuff from a non-web developer point of view.

The first thing I did was to replace all the colors in my CSS file with variables. This is not only helpful for creating a dark theme: it also keeps the colors defined in a single place and makes changing them much easier. The programmer in me is now glad the color scheme is no longer full of magic constants. Achieving this is easy. If you previously had…

body {
  background: white;
}

Now you can have…

:root {
  --background-color: white;
}
body {
  background: var(--background-color);
}

When you define a CSS variable (or custom CSS property, which I think is the proper term to refer to them), you can give it an arbitrary name starting with a double dash. Later, you can use those properties as values using var() with the name inside the parens. You’re not restricted to colors. You can define more things like linear gradients, filters, etc.

After that, defining an alternative dark or light theme is as easy as doing…

@media (prefers-color-scheme: dark) {
  :root {
    --background-color: black;
  }
}

And that’s it. You can now change the main colors to alternative versions for dark or light modes.

For my blog, the main theme is still going to be light. This shows when I sometimes insert images or other content: if I can’t easily make the background transparent, I’ll default to white. However, I’ll try to add white margins in the future so images don’t look as bad in the dark theme as the Igalia logo next to the Vulkan logo I used in a recent post.

One small trick I’m using that required a bit more digging is handling the Github icon in the “About Me” page. It’s an SVG icon drawn in an almost-black color and it’s referenced as an external resource, so we cannot change the fill property of the SVG to switch to a different color when using dark mode. Yet, if you look at the page in dark mode, you’ll see the icon is displayed in an almost-white color. The key, in this case, is a custom invert filter. In the CSS, I specify:

:root {
  --github-icon-filter: invert(0);
}
@media (prefers-color-scheme: dark) {
  :root {
    --github-icon-filter: invert(1);
  }
}
.githubicon {
  filter: var(--github-icon-filter);
}

This means the icon colors are inverted for dark mode and are kept as is for light mode.

And that’s it! Sorry if I bored you with basic stuff (web developer maybe?) and glad if you found this interesting! See you on the dark side of the Moon.

Year-end donations round, 2023 edition

Posted on .

As in previous years, I’ve made a small round of personal donations now that the year-end is approaching. This year I’ve changed my strategy a bit, donating a bit more and prioritizing software projects. My motivation to publish the list is to encourage others to donate and to give ideas to those looking for possible recipients.

This year, the list of projects and organizations is:

  • Signal, because I use it daily.

  • Notepad++ because my wife uses it daily for both work and personal tasks.

  • WinSCP for similar reasons.

  • Transmission, the BitTorrent client, because I also use it from time to time and it’s really nice.

  • Gnome because it’s my desktop environment these days.

  • LibreOffice because I also use it a lot and I think it’s an important piece of software in any desktop computer.

  • OpenStreetMap because it plays a major role in having freely available maps outside major corporation control.

  • Software Freedom Conservancy because it’s a nice organization and the X.Org Foundation, for example, will be under its umbrella soon.

  • Pi-Hole because I have one installed in my home network and it’s a really nice piece of software.

  • Aegis Authenticator because it’s another one of those apps I use daily. Edit: I cannot get this payment through. Aegis uses Buy Me a Coffee, the payment goes through Stripe and all my cards are rejected. 🤷