#!/usr/bin/env bash
set -e

function usage {
    cat <<EOF 1>&2
Usage: $(basename "$0") [-h] [-a AUDIO_STREAM] [-o OUTPUT_FILE] VIDEO_FILE [SUBTITLES_FILE]

Create MKV file containing the video and audio streams from the given
VIDEO_FILE without re-encoding.

If -a is not used, the program will use stream 1 from VIDEO_FILE.

If -o is not used, the default output file will be /tmp/watch.mkv.

If passed as an additional argument, SUBTITLES_FILE will be included as an
additional stream in the output MKV file and made active by default.

Option -h prints this help message.
EOF
}

function usage_and_abort {
    usage
    exit 1
}

while getopts "a:o:h" OPTNAME; do
    case "$OPTNAME" in
        a) OPTARG_AUDIO_STREAM="$OPTARG" ;;
        o) OPTARG_OUTPUT_FILE="$OPTARG" ;;
        h) usage; exit 0 ;;
        *) usage_and_abort ;;
    esac
done
shift $((OPTIND-1))

if [ $# -lt 1 ] || [ $# -gt 2 ]; then
    usage_and_abort
fi

VIDEO_FILE="$1"
SUBTITLES_FILE="$2" # May be empty.
AUDIO_STREAM="${OPTARG_AUDIO_STREAM:-1}"
OUTPUT_FILE="${OPTARG_OUTPUT_FILE:-/tmp/watch.mkv}"

START_WEBSERVER=""
WEBSERVER_PORT=64004
WEBSERVER_ROOT="$HOME/Public"

declare -a FFMPEG_INPUTS
declare -a FFMPEG_MAPS

# Video file as input.
FFMPEG_INPUTS+=("-i")
FFMPEG_INPUTS+=("$VIDEO_FILE")

# Pick first stream from it (video), and the selected audio stream (number 1
# by default).
FFMPEG_MAPS+=("-map")
FFMPEG_MAPS+=("0:v:0")
FFMPEG_MAPS+=("-map")
FFMPEG_MAPS+=("0:$AUDIO_STREAM")

if [ -n "$SUBTITLES_FILE" ]; then
    # Add subtitles file as second input.
    FFMPEG_INPUTS+=("-i")
    FFMPEG_INPUTS+=("$SUBTITLES_FILE")

    # Pick first subtitles stream from it and make them active by default.
    FFMPEG_MAPS+=("-map")
    FFMPEG_MAPS+=("1:s:0")
    FFMPEG_MAPS+=("-disposition:s:0")
    FFMPEG_MAPS+=("default")
fi

rm -f "$OUTPUT_FILE"
ffmpeg "${FFMPEG_INPUTS[@]}" "${FFMPEG_MAPS[@]}" -c copy "$OUTPUT_FILE"

read -n 1 -p "Start web server? [y/N] " REPLY
echo

case "$REPLY" in
    [yY]) START_WEBSERVER=1 ;;
    *) ;;
esac

if [ -n "$START_WEBSERVER" ]; then
    echo "Starting web server..."
    cd "$WEBSERVER_ROOT"
    # || true helps because the web server is shut down with Ctrl+C and does
    # not return 0.
    busybox httpd -f -vv -p "$WEBSERVER_PORT" || true
fi

exit 0
Year-end donations round, 2024 edition
Just in time before the year ends, I’ve gone ahead with my round of personal donations for 2024. I highly encourage you to do something similar and support the free and open source software and organizations that make a difference to you.
This year, my personal list consisted of:
- Signal, as every year, because I use it daily.
- Calibre eBook Management, which my wife uses a lot.
- The GNOME Project, which provides my desktop environment these days.
- LibreOffice, because I use it very frequently and I think it’s a critical piece of software for the desktop.
- Pi-Hole, because I have one set up in my home network to block ads via DNS, so in practice it’s used every day.
- The Internet Archive, which is critical in preserving content and culture around the globe.
- Software Freedom Conservancy, because it has several important projects under its umbrella, and it hits close to home now that the X.Org Foundation has chosen it as its fiscal sponsor.
Casting video files from Linux to AppleTV
A couple of weeks ago I received an AppleTV 4K as an early Christmas present. It’s a really nice device and it immediately replaced my Chromecast Ultra as the way to watch streaming content on my 15-year-old non-smart TV. Of course, the kids love it too!
Before I disconnected my Chromecast Ultra from the TV to put it back into its box, there was a small matter I needed to solve. Sometimes I watch content on my TV by casting it from my Linux PC. Most of that content consists of rips of my DVD collection, which is thankfully legal in Spain, as far as I know.
Using the Chromecast Ultra, I only had to configure catt once with the name given to the device and then launch catt cast VIDEO_FILE from the command line on my PC. Like magic, the video would start playing on the TV, provided the file used video and audio formats the Chromecast Ultra could understand.
I was looking for a similar experience on the AppleTV, which does not support the Chromecast protocol and uses Apple’s proprietary AirPlay protocol instead.
The Concept
A web search only provided some unconvincing results with old tools that (for the most part) didn’t work, or were not as straightforward, or involved media servers and more complex setups. Long story short, if you want something that works well and is relatively easy to set up and understand in concept for us Linux nerds, the most practical solution I’ve found is to install the VLC app on the AppleTV, which has a convenient feature to play content from a remote URL. This way, you only need to serve the video file from your Linux box using HTTP. Yes, technically this is not really “casting”.
Typing URLs on the VLC app is a pain in the neck unless you pair a Bluetooth mouse and keyboard to your AppleTV, but VLC remembers a history of recently played URLs, so my final setup involves serving the content I want to play from a fixed URL. This way, I can open the VLC app, scroll down to the last played URL and hit the remote button to start playing whatever I’ve chosen to watch at that moment in a matter of seconds.
The URL
My URL looks like http://192.168.68.202:64004/watch.mkv.
Let’s take that bit by bit.
- The protocol is HTTP, since I’m serving content from the local network and there’s no SSL involved at all.
- The host name is the local IP address of my Linux box. To make it fixed, I configured a static DHCP assignment in the router so my Linux PC always gets the same address. You only have to do that once. As an alternative, if you use a Pi-Hole as the DNS server and DHCP server, it usually makes your devices available under the name HOSTNAME.lan, where HOSTNAME is the Linux box host name (see hostnamectl --help if your distribution uses systemd). Using this method, you do not need a fixed local network IP address. Of course, a simple third alternative is using a static local IP address outside the DHCP address range, but that can be inconvenient if the Linux box is a laptop that you carry around to other networks and you prefer to use DHCP by default.
- For the port, I’ve chosen 64004 as one of the ephemeral ports that’s normally always available on Linux and is easy to remember. You normally want a port above 1024 so opening it for listening does not require root privileges. Another sensible classic choice is 8080, the typical alternative to HTTP port 80.
- Finally, the file name I’ve chosen is watch.mkv, and it should be available from the root of the served content. Directory indexing is not required. VLC will parse the file contents before starting to play it, so the file does not need to be an actual MKV file despite its name.
The HTTP Server
You probably want something simple for the HTTP server, and there are several options.
Most Linux distributions have Python installed, and the http.server module includes a simple built-in HTTP server that can be used from the command line. Running python3 -m http.server -d DIRECTORY PORT will serve the contents of DIRECTORY using the given PORT (e.g. 8080, as we mentioned earlier).
I’m also partial to Busybox’s httpd server because it has a few extra capabilities and it allows you to customize directory indexing with a CGI script. To use it, call busybox httpd -f -vv -p PORT -h DIRECTORY.
For most people, the Python server is a more direct and convenient option.
A simple setup is serving the contents of a fixed directory in the file system where you place a symlink named watch.mkv that points to the real file you want to serve, changing the target of the symlink each time. The directory could be a subdirectory of /tmp, or perhaps $HOME/public_html. In my case I serve $HOME/Public because I use that location for serving files to the local network with multiple tools.
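As a sketch of that routine (the video path is just an example), picking something new to watch boils down to repointing the symlink:

```shell
# One-time setup: make sure the served directory exists.
mkdir -p "$HOME/Public"

# Each time you pick something new to watch, repoint the symlink.
# ln -sf replaces the previous link, so the URL never changes.
ln -sf "$HOME/Videos/SomeMovie.mkv" "$HOME/Public/watch.mkv"

# Then serve the directory, e.g. with Python (this blocks until Ctrl+C):
#   python3 -m http.server -d "$HOME/Public" 64004
```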
Then, there’s the matter of making sure the AppleTV can connect to your Linux box on the chosen port.
If your distribution uses some kind of firewall, you may have to explicitly open the port.
For example, in Fedora Workstation you could use sudo firewall-cmd --permanent --add-port=PORT/tcp as a one-time setup command, with PORT being the chosen port.
Subtitles
If the file you serve contains subtitle streams, VLC lets you choose the subtitle track when playing content, so that’s good enough.
The app also has an option to look for subtitles following the same name pattern as the file being played.
For example, placing a watch.srt file next to the video file in the same directory should work.
I sometimes download SRT files from the web when the included DVD subtitles are subpar.
I’m looking at you, Paw Patrol 2-Movie Collection, and your subtitles that display too early, too late or too briefly!
Almost as disappointing as the quality of the toys.
In any case, I found it very convenient to create a proper MKV file on the fly whenever I want to watch something, embedding the subtitle stream in it if needed and making it active by default, for the most frictionless experience possible.
Note this is different from “burning” the subtitles into the video stream, which requires re-encoding.
When embedding subtitles this way, I don’t even have to bother activating them from VLC.
Creating a proper MKV file on /tmp is easy and pretty fast (a matter of seconds) if you don’t re-encode the video or audio streams. For that, I wrote the script shown above and named it ffmpeg-prepare-watch.sh. Note you can change WEBSERVER_PORT, WEBSERVER_ROOT and the way the HTTP server is launched, as mentioned above.
Conclusion
The user experience is now precisely as I like it: simple and understandable, despite using the command line.
When I want to “cast” something from my Linux PC, I call ffmpeg-prepare-watch.sh VIDEO_FILE SUBTITLES_FILE. This will create /tmp/watch.mkv very quickly, and $HOME/Public/watch.mkv is a symlink to it. Once ffmpeg finishes, I answer "y" to the prompt to start the web server, which will serve the contents of $HOME/Public over HTTP. Finally, I start playing the last URL from the VLC AppleTV app. The whole process takes a few seconds and is as convenient as using catt was before.
My XDC 2024 talk about VK_EXT_device_generated_commands
Some days ago I wrote about the new VK_EXT_device_generated_commands Vulkan extension that had just been made public. Soon after that, I presented a talk at XDC 2024 with a brief introduction to it. It’s a lightning talk that lasts just about 7 minutes and you can find the embedded video below, as well as the slides and the talk transcription if you prefer written formats.
Truth be told, the topic deserves a longer presentation, for sure. However, when I submitted my talk proposal for XDC I wasn’t sure if the extension was going to be public by the time XDC took place, which left me choosing between two options: if I submitted a half-slot talk and the extension was not public, I would need to talk for 15 minutes about some general concepts and a couple of NVIDIA vendor-specific extensions: VK_NV_device_generated_commands and VK_NV_device_generated_commands_compute. That would be awkward, so I went with a lightning talk where I could cover those general concepts and, maybe, some VK_EXT_device_generated_commands specifics if the extension was public by then, which is exactly what happened.
Fortunately, I will talk again about the extension at Vulkanised 2025. It will be a longer talk and I will cover the topic in more depth. See you in Cambridge in February and, for those not attending, stay tuned because Vulkanised talks are recorded and later uploaded to YouTube. I’ll post the link here and in social media once it’s available.
XDC 2024 recording
Talk slides and transcription
Hello, I’m Ricardo from Igalia and I’m going to talk about Device-Generated Commands in Vulkan. This is a new extension that was released a couple of weeks ago. I wrote CTS tests for it, I helped with the spec, and I worked with some actual heroes, some of them present in this room, who managed to get this implemented in a driver.
Device-Generated Commands is an extension that allows apps to go one step further in GPU-driven rendering because it makes it possible to write commands to a storage buffer from the GPU and later execute the contents of the buffer without needing to go through the CPU to record those commands, like you typically do by calling vkCmd functions working with regular command buffers.
It’s one step ahead of indirect draws and dispatches, and one step behind work graphs.
Getting away from Vulkan momentarily, if you want to store commands in a storage buffer there are many possible ways to do it. A naïve approach we can think of is creating the buffer as you see in the slide. We assign a number to each Vulkan command and store it in the buffer. Then, depending on the command, more or less data follows. For example, let’s take the sequence of commands in the slide: (1) push constants followed by (2) dispatch. We can store a token number, or command id, or whatever you want to call it, to indicate push constants, then we follow with metadata about the command (the section in green) containing the layout, stage flags, offset and size of the push constants. Finally, depending on the size, we store the push constant values, which is the first chunk of data in blue. For the dispatch it’s similar, except that it doesn’t need metadata because we only want the dispatch dimensions.
But this is not how GPUs work. A GPU would have a very hard time processing this. Also, Vulkan doesn’t work like this either. We want to make it possible to process things in parallel and provide as much information in advance as possible to the driver.
So in Vulkan things are different. The buffer will not contain an arbitrary sequence of commands where you don’t know which one comes next. What we do is to create an Indirect Commands Layout. This is the main concept. The layout is like a template for a short sequence of commands. We create this layout using the tokens and meta-data that we saw colored red and green in the previous slide.
We specify the layout we will use in advance and, in the buffer, we only store the actual data for each command. The result is that the buffer containing commands (let’s call it the DGC buffer) is divided into small chunks, called sequences in the spec, and the buffer can contain many such sequences, but all of them follow the layout we specified in advance.
In the example, we have push constant values of a known size followed by the dispatch dimensions. Push constant values, dispatch. Push constant values, dispatch. Etc.
The second thing Vulkan does is severely limit the selection of available commands. You can’t just start render passes or bind descriptor sets or do anything you can do in a regular command buffer. You can only do a few things, and they’re all in this slide. There’s general stuff like push constants, stuff related to graphics like draw commands and binding vertex and index buffers, and stuff to dispatch compute or ray tracing work. That’s it.
Moreover, each layout must have one token that dispatches work (draw, compute, trace rays) but you can only have one and it must be the last one in the layout.
Something that’s optional (not every implementation is going to support this) is being able to switch pipelines or shaders on the fly for each sequence.
Summing up, in implementations that allow you to do it, you have to create something new called Indirect Execution Sets, which are groups or arrays of pipelines that are more or less identical in state and, basically, only differ in the shaders they include.
Inside each set, each pipeline gets an index and you can change the pipeline used for each sequence by (1) specifying the Execution Set in advance (2) using an execution set token in the layout, and (3) storing a pipeline index in the DGC buffer as the token data.
The summary of how to use it would be:
First, create the commands layout and, optionally, create the indirect execution set if you’ll switch pipelines and the driver supports that.
Then, get a rough idea of the maximum number of sequences that you’ll run in a single batch.
With that, create the DGC buffer, query the required preprocess buffer size (an auxiliary buffer used by some implementations), and allocate both.
Then, you record the regular command buffer normally and specify the state you’ll use for DGC. This also includes some commands that dispatch work that fills the DGC buffer somehow.
Finally, you dispatch indirect work by calling vkCmdExecuteGeneratedCommandsEXT. Note you need a barrier to synchronize previous writes to the DGC buffer with reads from it.
You can also do explicit preprocessing but I won’t go into detail here.
That’s it. Thanks for watching, thanks to Valve for funding a big chunk of the work involved in shipping this, and thanks to everyone who contributed!
Bespoke solution to monitor power outages at home
When I came home from a 5-day family trip this summer, I immediately realized power was off in our flat. The main switch in the electricity panel was down, together with one other switch. Everything appeared to have happened a few days before we arrived, so a few things in the fridge were ruined and most of the freezer contents had to be discarded. This was despite the fact that we have relatives living close by with an emergency set of keys but, as we were completely unaware of the events, we couldn’t ask them to go check.
I thought about what happened and decided I wanted to set something up so I would get warned if power fails while I’m away. My first thoughts were using something that was available off-the-shelf, but I failed to find something cheap and easy. Fortunately, I already had a couple of things that could help me in this: a small cloud server (that hosts this blog) and a permanently-connected RPi4 that I use as a Pi-Hole at home. To be warned of a power failure, I wanted to use my RPi to ping (somehow) the cloud server from time to time and, on the cloud server, periodically check if we have received a recent ping. If too much time goes by without receiving a ping from home, we can assume something’s wrong and either we have a power outage or an Internet service outage.
The implementation would need the following things:
- The cloud server had to be able to send me an email.
- The cloud server could have a CGI script that, when accessed, would write a timestamp somewhere.
- The RPi would access that CGI script once every minute, for example.
- The cloud server would have something to check timestamps periodically, then email me if it’s been too long without a ping.
The difficulty is that I’m not a web developer, plus I’m using nginx on the cloud server and nginx doesn’t support CGI scripts, which complicates things a bit. However, I made all of this work and wanted to share my scripts in case someone finds them useful.
Sending emails from the server
This one is easy because I was already using something similar to monitor disks on a few computers using smartd.
When smartd detects a disk may be about to fail, it can be told to email root, and we can use $HOME/.forward to redirect the email with a script. The script, as in this case, can use msmtp, a nice program that lets you send emails from the command line using an SMTP server. Thanks to Fastmail, I generated a new set of credentials for SMTP access, installed msmtp on the cloud server and created a config file for it in /etc/msmtprc. Note that running msmtp --version will report the right system configuration file name.
The configuration file looks like this:
account default
host SERVER
port PORT
auth on
user USERNAME
password PASSWORD
tls on
tls_certcheck on
tls_starttls off
tls_trust_file /etc/ssl/certs/ca-bundle.crt
syslog on
timeout 30
In my case, SERVER is smtp.fastmail.com, PORT is 465, and USERNAME and PASSWORD are the ones I created. The TLS trust file has that path in Fedora, but it may be different on other distributions. With that configuration all set, I created the following script as /usr/local/bin/pingmonitor-mail:
#!/usr/bin/env bash
FROM=YOUR_EMAIL_ADDRESS
TO=YOUR_EMAIL_ADDRESS
DATE="$( TZ=Z date -R )"
SUBJECT="$1"
BODY="$2"

msmtp -f "$FROM" "$TO" <<EOF
From: $FROM
To: $TO
Date: $DATE
Subject: $SUBJECT

$BODY
EOF
It expects the subject of the email as the first argument and typically a sentence for the body as the second argument. I ran it a few times from the command line and verified it worked perfectly.
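To see the message the script assembles without actually sending anything, you can substitute cat for msmtp; this is just an illustration of the heredoc technique with example values, not part of the real script:

```shell
FROM=user@example.com      # example address
TO=user@example.com        # example address
DATE="$( TZ=Z date -R )"
SUBJECT='[pingmonitor] Test'
BODY='Just checking the mail pipeline.'

# cat stands in for msmtp, so the fully assembled message is printed
# instead of being handed to the SMTP server.
cat <<EOF
From: $FROM
To: $TO
Date: $DATE
Subject: $SUBJECT

$BODY
EOF
```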
CGI script to record ping timestamps
As mentioned before, nginx does not support CGI.
It only supports FastCGI, so this is slightly more complicated than expected.
After a few tries, I settled on using /var/run/pingmonitor as the main directory containing the FastCGI socket (more on that later) and /var/run/pingmonitor/pings for the actual pings.
I thought a bit about how to record the ping timestamps.
My initial idea was to save it to a file but then I started overthinking it.
If I used a file to store the timestamps (either appending to it or overwriting the file contents) I wanted to make sure the checker would always read a full timestamp and wouldn’t get partial file contents.
If the CGI script wrote the timestamp to the file it would need to block it somehow in the improbable case that the checker was attempting to read the file at the same time.
To avoid that complication, I decided to take advantage of the file system to handle that for me.
/var/run/pingmonitor/pings
would be a directory instead.
When the CGI script runs, it would create a new empty file in that directory with the timestamp being the name of the file.
The checker would list the files in the directory, convert their names to timestamps and check the most recent one.
I think that works because either the file exists or it does not when you list the directory contents, so it’s atomic.
If you know it’s not atomic, please leave a comment or email me with a reference.
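A minimal sketch of the scheme, using a temporary directory as a stand-in for /var/run/pingmonitor/pings:

```shell
pings_dir="$(mktemp -d)"  # stand-in for /var/run/pingmonitor/pings

# Writer side (the CGI script): "touch" an empty file named after the
# zero-padded epoch in seconds. The file contents don't matter at all.
now="$(date +%s)"
printf -v name '%016d' "$now"
touch "$pings_dir/$name"

# Reader side (the checker): the most recent ping is simply the highest
# file name, so no partial reads are possible.
last_ping="$(ls "$pings_dir" | sort | tail -n 1)"
echo $(( now - 10#$last_ping ))   # seconds since the last recorded ping
```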
For the FastCGI script itself, I installed the fastcgi Python module using pip. This allowed me to create a script that easily provides a FastCGI process that launches before nginx, runs as the nginx user and creates the timestamp files when called. Take a look below:
#!/usr/bin/env python
import os
import fastcgi
import sys
import pwd
import grp
import time
import pathlib

RUN_DIR = '/var/run/pingmonitor'
PINGS_DIR = os.path.join(RUN_DIR, 'pings')
USER = 'nginx'
GROUP = 'nginx'
ONE_SECOND_NS = 1000000000

# Create run and pings directory. Not a problem if they exist.
os.makedirs(RUN_DIR, mode=0o755, exist_ok=True)
os.makedirs(PINGS_DIR, mode=0o755, exist_ok=True)

# Get UID and GID for nginx.
uid = pwd.getpwnam(USER).pw_uid
gid = grp.getgrnam(GROUP).gr_gid

# Make the directories be owned by the nginx user, so it can create the socket
# and ping files.
os.chown(RUN_DIR, uid, gid)
os.chown(PINGS_DIR, uid, gid)

# Switch to the run (base) directory to create the socket there.
os.chdir(RUN_DIR)

# Become the nginx user.
os.setgid(gid)
os.setuid(uid)

@fastcgi.fastcgi()
def pingmonitor():
    timestamp = time.time_ns() // ONE_SECOND_NS
    filename = '%016d' % (timestamp,)
    path = os.path.join(PINGS_DIR, filename)
    pathlib.Path(path).touch()

    sys.stdout.write('Content-type: text/plain\n\n')
    sys.stdout.write('OK\n')
Apart from the directory creation and user switching logic at the beginning, the interesting part is the pingmonitor function. It obtains the epoch in nanoseconds and converts it to seconds. The file name is a zero-padded version of that number, which is then “touched”, and a reply is served to the HTTP client. Not pictured is that, by decorating the function with @fastcgi.fastcgi(), a socket named fcgi.sock is created in the current directory (/var/run/pingmonitor). That socket is the FastCGI socket that nginx will use to redirect requests to the FastCGI process. Also, if you run that file as a script, the decorator will create a main loop for you. I saved the script to /usr/local/bin/pingmonitor.cgi and set up a systemd service file to start it. The systemd unit file is called /etc/systemd/system/pingmonitor.service:
[Unit]
Description=FastCGI Ping Monitor Service
After=network.target

[Service]
Type=simple
Restart=always
RestartSec=1
ExecStart=/usr/local/bin/pingmonitor.cgi

[Install]
WantedBy=nginx.service
To hook it up with nginx, I created a block in its configuration file:
location /cgi-bin/RANDOM_STRING-pingmonitor.cgi {
    # Document root
    root DOCUMENT_ROOT;

    # Fastcgi socket
    fastcgi_pass unix:/var/run/pingmonitor/fcgi.sock;

    # Fastcgi parameters, include the standard ones
    include /etc/nginx/fastcgi_params;

    # Adjust non standard parameters (SCRIPT_FILENAME)
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
I used a StackOverflow question as a reference for this.
In the nginx configuration block you can see I’m using RANDOM_STRING as part of the CGI script URL, standing in for a long random string. This is because I didn’t want that URL to be easily discoverable. Its location is basically a secret between my server and my RPi.
After setting everything up I accessed the URL with my browser multiple times, confirmed the timestamp files were being created, etc.
Accessing the CGI script periodically
This is the easy part that goes in the RPi.
I could’ve used a systemd timer but went with a service instead (like the guy pushing all shapes through the same hole), so the main part is a script that pings the URL once a minute, saved as /usr/local/bin/pingmonitor-pinger.sh.
#!/usr/bin/env bash

while true; do
    sleep 60
    curl --silent --max-time 30 -o /dev/null URL
done
And the corresponding systemd service file, called /etc/systemd/system/pingmonitor-pinger.service:
[Unit]
Description=Ping Monitor Pinger Service
After=network.target

[Service]
Type=simple
Restart=always
RestartSec=1
ExecStart=/usr/local/bin/pingmonitor-pinger.sh

[Install]
WantedBy=multi-user.target
Checking timestamps periodically
This part goes into the cloud server again.
The script tries to send a single email when it detects pings are too old (older than 1000 seconds, a more or less reasonable limit chosen arbitrarily), and another one if the pings come back.
It’s also in charge of removing old ping files.
I could have removed all existing files with each check, but I decided to arbitrarily keep the last 10 in case it was useful for something.
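That “keep the newest 10” pruning can be sketched in shell too; here with fake ping files in a temporary directory (file names and counts are just for the demonstration):

```shell
pings_dir="$(mktemp -d)"

# Create 15 fake ping files, named like zero-padded timestamps.
for i in $(seq 1 15); do
    printf -v name '%016d' "$i"
    touch "$pings_dir/$name"
done

# Keep the 10 newest entries (the highest names), delete the rest.
# head -n -10 (GNU coreutils) prints all but the last 10 sorted names.
cd "$pings_dir"
ls | sort | head -n -10 | xargs -r rm --
ls | wc -l   # prints 10: only the newest files remain
```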
To send emails, it uses /usr/local/bin/pingmonitor-mail as described above. I saved it under /usr/local/bin/pingmonitor-checker.py.
#!/usr/bin/env python
import glob
import os
import time
import subprocess
import sys

PINGS_DIR = '/var/run/pingmonitor/pings'
MAIL_PROGRAM = '/usr/local/bin/pingmonitor-mail'
MAX_DIFF = 1000 # Seconds.
SLEEP_TIME = 60 # Seconds.
MAX_FILES = 10
ONE_SECOND_NS = 1000000000

def get_epoch():
    return time.time_ns() // ONE_SECOND_NS

def print_msg(msg):
    print('%s' % (msg,), file=sys.stderr)

os.makedirs(PINGS_DIR, mode=0o755, exist_ok=True)
os.chdir(PINGS_DIR)

start_time = get_epoch()
ping_missing = False

while True:
    now = get_epoch()

    # List of files with a numeric name.
    filenames = glob.glob('0*')

    # Check the last timestamp. If no files exist yet, wait at least from the
    # start of the script.
    if len(filenames) == 0:
        last_timestamp = start_time
    else:
        filenames.sort()
        most_recent = filenames[-1]
        last_timestamp = int(most_recent, base=10)

    current_diff = now - last_timestamp

    # Remove old files.
    if len(filenames) > MAX_FILES:
        kept_files = filenames[-MAX_FILES:]
        for fn in filenames:
            if fn not in kept_files:
                os.remove(fn)

    if current_diff > MAX_DIFF and (not ping_missing):
        ping_missing = True
        subject = '[pingmonitor] No pings for %d seconds' % (MAX_DIFF,)
        body = 'Last timestamp: %s' % (time.ctime(last_timestamp),)
        print_msg('%s; %s' % (subject, body))
        subprocess.run([MAIL_PROGRAM, subject, body])
    elif current_diff < MAX_DIFF and ping_missing:
        ping_missing = False
        subject = '[pingmonitor] Ping recovered'
        body = 'Last timestamp: %s' % (time.ctime(last_timestamp),)
        print_msg('%s; %s' % (subject, body))
        subprocess.run([MAIL_PROGRAM, subject, body])

    time.sleep(SLEEP_TIME)
Again, such a script could be run as a systemd timer, but I decided to write it as a loop and use a service instead, called /etc/systemd/system/pingmonitor-checker.service.
[Unit]
Description=Ping Monitor Checker Service
After=pingmonitor.service
Wants=pingmonitor.service

[Service]
Type=simple
Restart=always
RestartSec=1
ExecStart=/usr/local/bin/pingmonitor-checker.py

[Install]
WantedBy=multi-user.target
Final thoughts
After setting that up, I checked it works by experimenting with a few timeouts and stopping and starting the pinger service on the RPi. I’m pretty happy with how things turned out, given that this sits outside my usual domain. If your Internet connection at home is unreliable, what I did may not be suitable for you if all you’re interested in is the actual power outages. In my case, Internet outages are very infrequent, so I’m willing to live with a few false positives if that means I won’t waste the contents of my fridge and freezer again.
Waiter, there's an IES in my DGC!
Finally! Yesterday Khronos published Vulkan 1.3.296 including VK_EXT_device_generated_commands. Thousands of engineering hours seeing the light of day, and awesome news for Linux gaming.
Device-Generated Commands, or DGC for short, are Vulkan’s equivalent to ExecuteIndirect in Direct3D 12. Thanks to this extension, originally based on a couple of NVIDIA vendor extensions, it will be possible to prepare sequences of commands to run directly from the GPU and execute those sequences directly, without any of that data going through the CPU. Also, Proton now has a much more official leg to stand on when it has to translate ExecuteIndirect from D3D12 to Vulkan while you run games such as Starfield.
The extension not only provides functionality equivalent to ExecuteIndirect. It goes beyond that and offers more fine-grained control like explicit preprocessing of command sequences, or switching shaders and pipelines with each sequence thanks to something called Indirect Execution Sets, or IES for short, that potentially work with ray tracing, compute and graphics (both regular and mesh shading).
As part of my job at Igalia, I’ve implemented CTS tests for this extension and I had the chance to work very closely with an awesome group of developers discussing specification, APIs and test needs. I hope I don’t forget anybody and apologize in advance if so.
- Mike Blumenkrantz, of course. Valve contractor, Super Good Coder and current OpenGL Working Group chair, who took the initial specification work from Patrick Doane and carried it across the finish line. Be sure to read his blog post about DGC. Also incredibly important for me: he developed, and kept up to date, an implementation of the extension for lavapipe, the software Vulkan driver from Mesa. This was invaluable in allowing me to create tests for the extension much faster and making sure tests were in good shape when GPU driver authors started running them.
- Spencer Fricke from LunarG. Spencer did something fantastic here. For the first time, the needed changes in the Vulkan Validation Layers for such a large extension were developed in parallel while tests and the spec were evolving. His work will be incredibly useful for app developers using the extension in their games. It also allowed me to detect test bugs and issues much earlier and fix them faster.
- Samuel Pitoiset (Valve contractor), Connor Abbott (Valve contractor), Lionel Landwerlin (Intel) and Vikram Kushwaha (NVIDIA), who provided early implementations of the extension, discussed APIs, reported test bugs and needs, and made sure the extension works as well as possible for a variety of hardware vendors out there.
- To a lesser degree, most others mentioned as spec contributors for the extension, such as Hans-Kristian Arntzen (Valve contractor), Baldur Karlsson (Valve contractor), Faith Ekstrand (Collabora), etc., who made sure the spec works for them too and makes sense for Proton, RenderDoc, and drivers such as NVK and others.
As you may have noticed, a significant part of the people driving this effort work for Valve and, from my side, the work has also been carried out as part of Igalia’s collaboration with them. So my explicit thanks to Valve for sponsoring all this work.
If you want to know a bit more about DGC, stay tuned for future talks about this topic. In about a couple of weeks, I’ll present a lightning talk (5 mins) with an overview at XDC 2024 in Montreal. Don’t miss it!