My XDC 2024 talk about VK_EXT_device_generated_commands
Some days ago I wrote about the new VK_EXT_device_generated_commands Vulkan extension that had just been made public. Soon after that, I presented a talk at XDC 2024 with a brief introduction to it. It’s a lightning talk that lasts just about 7 minutes and you can find the embedded video below, as well as the slides and the talk transcription if you prefer written formats.
Truth be told, the topic deserves a longer presentation, for sure. However, when I submitted my talk proposal for XDC I wasn’t sure if the extension was going to be public by the time XDC took place. This meant I had two options: if I submitted a half-slot talk and the extension was not public, I would need to talk for 15 minutes about some general concepts and a couple of NVIDIA vendor-specific extensions: VK_NV_device_generated_commands and VK_NV_device_generated_commands_compute. That would be awkward, so I went with a lightning talk where I could cover those general concepts and, maybe, some VK_EXT_device_generated_commands specifics if the extension was public by then, which is exactly what happened.
Fortunately, I will talk again about the extension at Vulkanised 2025. It will be a longer talk and I will cover the topic in more depth. See you in Cambridge in February and, for those not attending, stay tuned because Vulkanised talks are recorded and later uploaded to YouTube. I’ll post the link here and on social media once it’s available.
XDC 2024 recording
Talk slides and transcription
Hello, I’m Ricardo from Igalia and I’m going to talk about Device-Generated Commands in Vulkan. This is a new extension that was released a couple of weeks ago. I wrote CTS tests for it, I helped with the spec and I worked with some actual heroes, some of them present in this room, who managed to get this implemented in a driver.
Device-Generated Commands is an extension that allows apps to go one step further in GPU-driven rendering because it makes it possible to write commands to a storage buffer from the GPU and later execute the contents of the buffer without needing to go through the CPU to record those commands, like you typically do by calling vkCmd functions working with regular command buffers.
It’s one step ahead of indirect draws and dispatches, and one step behind work graphs.
Getting away from Vulkan momentarily, if you want to store commands in a storage buffer there are many possible ways to do it. A naïve approach we can think of is creating the buffer as you see in the slide. We assign a number to each Vulkan command and store it in the buffer. Then, depending on the command, more or less data follows. For example, let’s take the sequence of commands in the slide: (1) push constants followed by (2) dispatch. We can store a token number, command id, or whatever you want to call it, to indicate push constants, then we follow with metadata about the command (the section in green) containing the layout, stage flags, offset and size of the push constants. Finally, depending on the size, we store the push constant values, which is the first chunk of data in blue. For the dispatch it’s similar, only it doesn’t need metadata because we only want the dispatch dimensions.
But this is not how GPUs work. A GPU would have a very hard time processing this. Also, Vulkan doesn’t work like this either. We want to make it possible to process things in parallel and provide as much information in advance as possible to the driver.
So in Vulkan things are different. The buffer will not contain an arbitrary sequence of commands where you don’t know which one comes next. What we do is create an Indirect Commands Layout. This is the main concept. The layout is like a template for a short sequence of commands. We create this layout using the tokens and metadata that we saw colored red and green in the previous slide.
We specify the layout we will use in advance and, in the buffer, we only store the actual data for each command. The result is that the buffer containing commands (let’s call it the DGC buffer) is divided into small chunks, called sequences in the spec, and the buffer can contain many such sequences, but all of them follow the layout we specified in advance.
In the example, we have push constant values of a known size followed by the dispatch dimensions. Push constant values, dispatch. Push constant values, dispatch. Etc.
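To make the fixed-stride idea concrete, here’s a small host-side sketch in plain Python. This is illustrative only: the 16-byte push constant size and the packing format are assumptions of mine, not part of the Vulkan API.

```python
import struct

# Hypothetical sequence layout: 16 bytes of push constant values
# (four 32-bit uints) followed by the three dispatch dimensions.
SEQUENCE_FMT = '<4I3I'  # little-endian, seven uint32 values per sequence
SEQUENCE_STRIDE = struct.calcsize(SEQUENCE_FMT)  # 28 bytes

def pack_sequences(sequences):
    # Each element is (push_constant_values, (x, y, z)).
    buf = bytearray()
    for push_values, (x, y, z) in sequences:
        buf += struct.pack(SEQUENCE_FMT, *push_values, x, y, z)
    return bytes(buf)

# Push constant values, dispatch. Push constant values, dispatch.
buffer_data = pack_sequences([
    ((1, 2, 3, 4), (64, 64, 1)),
    ((5, 6, 7, 8), (32, 32, 1)),
])
```

Because every sequence follows the same layout, sequence i always starts at byte i * SEQUENCE_STRIDE, which is what lets the driver process sequences in parallel.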
The second thing Vulkan does is to severely limit the selection of available commands. You can’t just start render passes or bind descriptor sets or do anything you can do in a regular command buffer. You can only do a few things, and they’re all in this slide. There’s general stuff like push constants, stuff related to graphics like draw commands and binding vertex and index buffers, and stuff to dispatch compute or ray tracing work. That’s it.
Moreover, each layout must have one token that dispatches work (draw, compute, trace rays) but you can only have one and it must be the last one in the layout.
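That restriction can be expressed as a tiny validity check. The token names below are made up for illustration and do not match the real Vulkan enums:

```python
# Hypothetical token names for the work-dispatching commands.
DISPATCH_TOKENS = {'draw', 'dispatch', 'trace_rays'}

def layout_is_valid(tokens):
    # Exactly one work-dispatching token, and it must come last.
    dispatchers = [t for t in tokens if t in DISPATCH_TOKENS]
    return len(dispatchers) == 1 and tokens[-1] in DISPATCH_TOKENS

# ['push_constants', 'dispatch'] is valid; a dispatch anywhere else is not.
```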
Something that’s optional (not every implementation is going to support this) is being able to switch pipelines or shaders on the fly for each sequence.
In implementations that allow you to do it, you have to create something new called Indirect Execution Sets, which are groups or arrays of pipelines that are more or less identical in state and, basically, only differ in the shaders they include.
Inside each set, each pipeline gets an index and you can change the pipeline used for each sequence by (1) specifying the Execution Set in advance (2) using an execution set token in the layout, and (3) storing a pipeline index in the DGC buffer as the token data.
The summary of how to use it would be:
First, create the commands layout and, optionally, create the indirect execution set if you’ll switch pipelines and the driver supports that.
Then, get a rough idea of the maximum number of sequences that you’ll run in a single batch.
With that, create the DGC buffer, query the required preprocess buffer size, which is an auxiliary buffer used by some implementations, and allocate both.
Then, you record the regular command buffer normally and specify the state you’ll use for DGC. This also includes some commands that dispatch work that fills the DGC buffer somehow.
Finally, you dispatch indirect work by calling vkCmdExecuteGeneratedCommandsEXT. Note you need a barrier to synchronize previous writes to the DGC buffer with reads from it.
You can also do explicit preprocessing but I won’t go into detail here.
That’s it. Thanks for watching, thanks Valve for funding a big chunk of the work involved in shipping this, and thanks to everyone who contributed!
Bespoke solution to monitor power outages at home
When I came home from a 5-day family trip this summer, I immediately realized power was off in our flat. The main switch in the electricity panel was down, together with one additional switch. Everything appeared to have happened a few days before we arrived, so a few things in the fridge were ruined and most of the freezer contents had to be discarded. This was despite the fact that we have relatives living close by with an emergency set of keys but, as we were completely unaware of the events, we couldn’t ask them to go check.
I thought about what happened and decided I wanted to set something up so I would get warned if power fails while I’m away. My first thought was to use something available off-the-shelf, but I failed to find something cheap and easy. Fortunately, I already had a couple of things that could help me with this: a small cloud server (that hosts this blog) and a permanently-connected RPi4 that I use as a Pi-Hole at home. To be warned of a power failure, I wanted to use my RPi to ping (somehow) the cloud server from time to time and, on the cloud server, periodically check if we have received a recent ping. If too much time goes by without receiving a ping from home, we can assume something’s wrong and either we have a power outage or an Internet service outage.
The implementation would need the following things:
-
The cloud server had to be able to send me an email.
-
The cloud server could have a CGI script that, when accessed, would write a timestamp somewhere.
-
The RPi would access that CGI script once every minute, for example.
-
The cloud server would have something to check timestamps periodically, then email me if it’s been too long without a ping.
The difficulty is that I’m not a web developer, plus I’m using nginx on the cloud server and nginx doesn’t support CGI scripts, which complicates things a bit. However, I made all of this work and wanted to share my scripts in case someone finds them useful.
Sending emails from the server
This one is easy because I was already using something similar to monitor disks on a few computers using smartd. When smartd detects a disk may be about to fail, it can be told to email root, and we can use $HOME/.forward to redirect the email with a script. The script, as in this case, can use msmtp, which is a nice program that lets you send emails from the command line using an SMTP server.
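For illustration, such a $HOME/.forward file contains a single line piping the message into the script (the script path here is hypothetical):

```
"|/usr/local/bin/mail-forward-script"
```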
Thanks to Fastmail, I generated a new set of credentials for SMTP access, installed msmtp on the cloud server and created a config file for it in /etc/msmtprc. Note that running msmtp --version will report the right system configuration file name.
The configuration file looks like this:

account default
host SERVER
port PORT
auth on
user USERNAME
password PASSWORD
tls on
tls_certcheck on
tls_starttls off
tls_trust_file /etc/ssl/certs/ca-bundle.crt
syslog on
timeout 30
In my case, SERVER is smtp.fastmail.com, PORT is 465, and USERNAME and PASSWORD are the ones I created.
The TLS trust file has that path in Fedora, but it may be different on other distributions.
With that configuration all set, I created the following script as /usr/local/bin/pingmonitor-mail:
#!/usr/bin/env bash

FROM=YOUR_EMAIL_ADDRESS
TO=YOUR_EMAIL_ADDRESS
DATE="$( TZ=Z date -R )"
SUBJECT="$1"
BODY="$2"

msmtp -f "$FROM" "$TO" <<EOF
From: $FROM
To: $TO
Date: $DATE
Subject: $SUBJECT

$BODY
EOF
It expects the subject of the email as the first argument and typically a sentence for the body as the second argument. I ran it a few times from the command line and verified it worked perfectly.
CGI script to record ping timestamps
As mentioned before, nginx does not support CGI. It only supports FastCGI, so this is slightly more complicated than expected.
After a few tries, I settled on using /var/run/pingmonitor as the main directory containing the FastCGI socket (more on that later) and /var/run/pingmonitor/pings for the actual pings.
I thought a bit about how to record the ping timestamps. My initial idea was to save them to a file, but then I started overthinking it. If I used a file to store the timestamps (either appending to it or overwriting its contents), I wanted to make sure the checker would always read a full timestamp and would never get partial file contents. If the CGI script wrote the timestamp to the file, it would need to lock it somehow in the improbable case that the checker was attempting to read the file at the same time. To avoid that complication, I decided to take advantage of the file system to handle that for me.

/var/run/pingmonitor/pings would be a directory instead. When the CGI script runs, it creates a new empty file in that directory, with the timestamp being the name of the file. The checker lists the files in the directory, converts their names to timestamps and checks the most recent one. I think that works because, when you list the directory contents, either a file exists or it does not, so it’s atomic. If you know it’s not atomic, please leave a comment or email me with a reference.
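Condensed to its essence, the scheme looks like this; a self-contained sketch of mine using a temporary directory instead of /var/run/pingmonitor/pings:

```python
import os
import pathlib
import tempfile
import time

ONE_SECOND_NS = 1000000000

def record_ping(pings_dir):
    # Create an empty file whose zero-padded name is the current epoch
    # in seconds; the file name itself is the timestamp.
    timestamp = time.time_ns() // ONE_SECOND_NS
    pathlib.Path(pings_dir, '%016d' % (timestamp,)).touch()

def last_ping(pings_dir):
    # Zero-padding makes the lexicographic sort match numeric order.
    names = sorted(os.listdir(pings_dir))
    return int(names[-1], base=10) if names else None

with tempfile.TemporaryDirectory() as pings_dir:
    record_ping(pings_dir)
    age = time.time_ns() // ONE_SECOND_NS - last_ping(pings_dir)
    # age should be 0 (or 1 at most) right after recording a ping
```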
For the FastCGI script itself, I installed the fastcgi Python module using pip. This allowed me to create a script that easily provides a FastCGI process that launches before nginx, runs as the nginx user and creates the timestamp files when called. Take a look below:
#!/usr/bin/env python
import os
import fastcgi
import sys
import pwd
import grp
import time
import pathlib

RUN_DIR = '/var/run/pingmonitor'
PINGS_DIR = os.path.join(RUN_DIR, 'pings')
USER = 'nginx'
GROUP = 'nginx'
ONE_SECOND_NS = 1000000000

# Create run and pings directory. Not a problem if they exist.
os.makedirs(RUN_DIR, mode=0o755, exist_ok=True)
os.makedirs(PINGS_DIR, mode=0o755, exist_ok=True)

# Get UID and GID for nginx.
uid = pwd.getpwnam('nginx').pw_uid
gid = grp.getgrnam('nginx').gr_gid

# Make the directories be owned by the nginx user, so it can create the socket
# and ping files.
os.chown(RUN_DIR, uid, gid)
os.chown(PINGS_DIR, uid, gid)

# Switch to the run (base) directory to create the socket there.
os.chdir(RUN_DIR)

# Become the nginx user.
os.setgid(gid)
os.setuid(uid)

@fastcgi.fastcgi()
def pingmonitor():
    timestamp = time.time_ns() // ONE_SECOND_NS
    filename = '%016d' % (timestamp,)
    path = os.path.join(PINGS_DIR, filename)
    pathlib.Path(path).touch()

    sys.stdout.write('Content-type: text/plain\n\n')
    sys.stdout.write('OK\n')
Apart from the directory creation and user switching logic at the beginning, the interesting part is the pingmonitor function. It obtains the epoch in nanoseconds and converts it to seconds. The file name is a zero-padded version of that number, which is then “touched”, and a reply is served to the HTTP client.
Not pictured is that, by decorating the function with @fastcgi.fastcgi(), a socket named fcgi.sock is created in the current directory (/var/run/pingmonitor). That socket is the FastCGI socket nginx will use to redirect requests to the FastCGI process. Also, if you run that file as a script, the decorator will create a main loop for you.
I saved the script to /usr/local/bin/pingmonitor.cgi and set up a systemd service file to start it. The systemd unit file is called /etc/systemd/system/pingmonitor.service:
[Unit]
Description=FastCGI Ping Monitor Service
After=network.target

[Service]
Type=simple
Restart=always
RestartSec=1
ExecStart=/usr/local/bin/pingmonitor.cgi

[Install]
WantedBy=nginx.service
To hook it up with nginx, I created a block in its configuration file:
location /cgi-bin/RANDOM_STRING-pingmonitor.cgi {
    # Document root
    root DOCUMENT_ROOT;

    # Fastcgi socket
    fastcgi_pass unix:/var/run/pingmonitor/fcgi.sock;

    # Fastcgi parameters, include the standard ones
    include /etc/nginx/fastcgi_params;

    # Adjust non standard parameters (SCRIPT_FILENAME)
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
I used a StackOverflow question as a reference for this.
In the nginx configuration block you can see I’m using RANDOM_STRING as part of the CGI script URL, which is a long random string. This is because I didn’t want that URL to be easily discoverable. Its location is basically a secret between my server and my RPi.
After setting everything up I accessed the URL with my browser multiple times, confirmed the timestamp files were being created, etc.
Accessing the CGI script periodically
This is the easy part that goes on the RPi. I could’ve used a systemd timer but went with a service instead (like the guy pushing all shapes through the same hole), so the main part is a script that pings the URL once a minute, saved as /usr/local/bin/pingmonitor-pinger.sh.
#!/usr/bin/env bash

while true; do
    sleep 60
    curl --silent --max-time 30 -o /dev/null URL
done
And the corresponding systemd service file, called /etc/systemd/system/pingmonitor-pinger.service:
[Unit]
Description=Ping Monitor Pinger Service
After=network.target

[Service]
Type=simple
Restart=always
RestartSec=1
ExecStart=/usr/local/bin/pingmonitor-pinger.sh

[Install]
WantedBy=multi-user.target
Checking timestamps periodically
This part goes on the cloud server again. The script tries to send a single email when it detects pings are too old (older than 1000 seconds, a more or less reasonable limit chosen arbitrarily), and another one if the pings come back. It’s also in charge of removing old ping files. I could have removed all existing files with each check, but I decided to arbitrarily keep the last 10 in case they were useful for something. To send emails, it uses /usr/local/bin/pingmonitor-mail as described above. I saved it under /usr/local/bin/pingmonitor-checker.py.
#!/usr/bin/env python
import glob
import os
import time
import subprocess
import sys

PINGS_DIR = '/var/run/pingmonitor/pings'
MAIL_PROGRAM = '/usr/local/bin/pingmonitor-mail'
MAX_DIFF = 1000 # Seconds.
SLEEP_TIME = 60 # Seconds.
MAX_FILES = 10
ONE_SECOND_NS = 1000000000

def get_epoch():
    return time.time_ns() // ONE_SECOND_NS

def print_msg(msg):
    print('%s' % (msg,), file=sys.stderr)

os.makedirs(PINGS_DIR, mode=0o755, exist_ok=True)
os.chdir(PINGS_DIR)

start_time = get_epoch()
ping_missing = False

while True:
    now = get_epoch()

    # List of files with a numeric name.
    filenames = glob.glob('0*')

    # Check the last timestamp. If no files exist yet, wait at least from the
    # start of the script.
    if len(filenames) == 0:
        last_timestamp = start_time
    else:
        filenames.sort()
        most_recent = filenames[-1]
        last_timestamp = int(most_recent, base=10)

    current_diff = now - last_timestamp

    # Remove old files.
    if len(filenames) > MAX_FILES:
        kept_files = filenames[-MAX_FILES:]
        for fn in filenames:
            if fn not in kept_files:
                os.remove(fn)

    if current_diff > MAX_DIFF and (not ping_missing):
        ping_missing = True
        subject = '[pingmonitor] No pings for %d seconds' % (MAX_DIFF,)
        body = 'Last timestamp: %s' % (time.ctime(last_timestamp),)
        print_msg('%s; %s' % (subject, body))
        subprocess.run([MAIL_PROGRAM, subject, body])
    elif current_diff < MAX_DIFF and ping_missing:
        ping_missing = False
        subject = '[pingmonitor] Ping recovered'
        body = 'Last timestamp: %s' % (time.ctime(last_timestamp),)
        print_msg('%s; %s' % (subject, body))
        subprocess.run([MAIL_PROGRAM, subject, body])

    time.sleep(SLEEP_TIME)
Again, such a script could be run as a systemd timer, but I decided to write it as a loop and use a service instead, called /etc/systemd/system/pingmonitor-checker.service:
[Unit]
Description=Ping Monitor Checker Service
After=pingmonitor.service
Wants=pingmonitor.service

[Service]
Type=simple
Restart=always
RestartSec=1
ExecStart=/usr/local/bin/pingmonitor-checker.py

[Install]
WantedBy=multi-user.target
Final thoughts
After setting that up, I checked it works by experimenting with a few timeouts and stopping and starting the pinger service on the RPi. I’m pretty happy with how things turned out, given that this sits outside my usual domain. If your Internet connection at home is unreliable, this setup may not suit you if all you’re interested in are actual power outages. In my case, Internet outages are very infrequent, so I’m willing to live with a few false positives if that means I won’t waste the contents of my fridge and freezer again.
Waiter, there's an IES in my DGC!
Finally! Yesterday Khronos published Vulkan 1.3.296 including VK_EXT_device_generated_commands. Thousands of engineering hours seeing the light of day, and awesome news for Linux gaming.
Device-Generated Commands, or DGC for short, are Vulkan’s equivalent to ExecuteIndirect in Direct3D 12. Thanks to this extension, originally based on a couple of NVIDIA vendor extensions, it will be possible to prepare sequences of commands to run directly from the GPU, and execute those sequences directly without any data going through the CPU. Also, Proton now has a much more official leg to stand on when it has to translate ExecuteIndirect from D3D12 to Vulkan while you run games such as Starfield.
The extension not only provides functionality equivalent to ExecuteIndirect. It goes beyond that and offers more fine-grained control like explicit preprocessing of command sequences, or switching shaders and pipelines with each sequence thanks to something called Indirect Execution Sets, or IES for short, that potentially work with ray tracing, compute and graphics (both regular and mesh shading).
As part of my job at Igalia, I’ve implemented CTS tests for this extension and I had the chance to work very closely with an awesome group of developers discussing specification, APIs and test needs. I hope I don’t forget anybody and apologize in advance if so.
-
Mike Blumenkrantz, of course. Valve contractor, Super Good Coder and current OpenGL Working Group chair who took the initial specification work from Patrick Doane and carried it across the finish line. Be sure to read his blog post about DGC. Also incredibly important for me: he developed, and kept up-to-date, an implementation of the extension for lavapipe, the software Vulkan driver from Mesa. This was invaluable in allowing me to create tests for the extension much faster and making sure tests were in good shape when GPU driver authors started running them.
-
Spencer Fricke from LunarG. Spencer did something fantastic here. For the first time, the needed changes in the Vulkan Validation Layers for such a large extension were developed in parallel while tests and the spec were evolving. His work will be incredibly useful for app developers using the extension in their games. It also allowed me to detect test bugs and issues much earlier and fix them faster.
-
Samuel Pitoiset (Valve contractor), Connor Abbott (Valve contractor), Lionel Landwerlin (Intel) and Vikram Kushwaha (NVIDIA) providing early implementations of the extension, discussing APIs, reporting test bugs and needs, and making sure the extension works as well as possible for a variety of hardware vendors out there.
-
To a lesser degree, most others mentioned as spec contributors for the extension, such as Hans-Kristian Arntzen (Valve contractor), Baldur Karlsson (Valve contractor), Faith Ekstrand (Collabora), etc., making sure the spec works for them too and makes sense for Proton, RenderDoc, and drivers such as NVK and others.
If you’ve noticed, a significant part of the people driving this effort work for Valve and, from my side, the work has also been carried out as part of Igalia’s collaboration with them. So my explicit thanks to Valve for sponsoring all this work.
If you want to know a bit more about DGC, stay tuned for future talks about this topic. In about a couple of weeks, I’ll present a lightning talk (5 mins) with an overview at XDC 2024 in Montreal. Don’t miss it!
Signing PDFs without embedded forms under Linux
Edit: I’ve added a couple more methods for modifying PDF files with suggestions from readers. Thanks everyone!
Picture the following situation: someone sends you a PDF document and asks you to send it back signed. Some problems, though:
-
The PDF doesn’t have an embedded form, it’s just something they exported from their word processor.
-
They’re not using any signing service like DocuSign, Dropbox Sign or any other.
Sounds implausible? I’ve faced the situation multiple times. Off the top of my head:
-
When I joined Igalia some years ago I had to do that with a few documents.
-
Multiple times, one of them very recent, when interacting with some electronic administration websites, where the definition of electronic administration is:
-
We make the form available to you as a PDF document (be thankful we at least don’t give you a .docx file).
-
You can send the form filled back to us through the internet, by attaching a file somehow.
-
No, we don’t have an HTML version of the form.
-
No, we don’t have anything set up so you can sign the document with your official electronic certificate or anything similar.
-
-
With documents from the homeowner association management company.
If you’re like me, what do you do? Print the document, fill it, scan it, attach it.
Side quest: scanning pages
No, I’m not taking pictures of the document with my phone to send it back. For me, that’s a solved problem now. The scanning system on Linux is called SANE (standing for Scanner Access Now Easy) and it has an official non-HTTPS site. If you want to properly scan an A4 or US-Letter sized document, you can get a desktop scanner. The SANE website has a list of supported devices, indicating how well supported each one is. Long story short? I’m partial to Canon scanners because I know they work reasonably well. On their website, as of the time I’m writing this, they list a couple of them: the CanoScan LIDE 300 and the CanoScan LIDE 400. Both of them are very well supported under SANE. I got the 300 model some time ago. Plug it in, open the Document Scanner GNOME app and start scanning. No setup needed.
Back to signing
Can we do better than the print+scan cycle? Can you insert your scanned signature into the document easily? For that, you have to solve two separate problems.
-
Getting a nice version of your signature in image form.
-
Ideally, as a PNG with a transparent background.
-
If not, a clear image with a light background could do it.
-
-
Inserting that image on the PDF somehow.
I don’t know if this sounds surprising to you, but the hardest problem is the first one. However, you only have to solve it once. Let’s start, then, with the second one.
Inserting the signature
Thanks Emma Anholt for mentioning you can open PDFs with LibreOffice Draw and use it to insert images and text anywhere in the document! Thanks also to Alyssa Rosenzweig for mentioning Xournal (or Xournal++), which I’ve found to work much better at that task compared to LibreOffice. Also special thanks to everybody who contacted me mentioning different methods and tools! I really enjoy posting these articles just for the reactions, comments and suggestions.
Method 1: Xournal++
Xournal++ is an open source program you can typically install from your distribution repositories. I installed it from Gnome Software on Fedora without issues, and selected the official RPM as the source.
The application is designed to create notes in general, using images, text or typically a graphics tablet, but it turns out it has an option to annotate an existing PDF.
It will open the PDF and let you put anything on it. You can use a wide variety of tools to insert images and text boxes, or even draw anything on top by hand if you have a graphics tablet or incredible mouse skills. You should take a look at the toolbar and/or the Tools menu to see what it can do. The easiest option is clicking on the image insertion tool, clicking anywhere in the document and selecting the image you want to insert. If it turns out to be too big or small, you can drag a corner to resize it while preserving the aspect ratio, and you can move it around freely. The text insertion tool is also handy to fill complex forms that can’t be filled as a proper form. When done, use “File > Export as PDF” to generate a new document. End of the journey!
Method 2: Firefox
Amazingly, I wasn’t aware of this option until it was mentioned to me by a couple of people. Thanks to Sasi Péter and an anonymous user for the tip.
When you open a PDF file under Firefox, there’s a toolbar that lets you insert text, draw and insert images, much like Xournal++.
If, as was my case, you don’t have the image insertion button, take a look at the pdfjs.enableStampEditor preference in about:config.
Once you’ve finished modifying the document click on the Save icon (folder with an arrow pointing down) to save the modified version.
Method 3: LibreOffice Draw
You can also open PDFs with LibreOffice Draw and you can use its incredibly wide variety of tools to modify the document. That includes inserting images, text or whatever you need. However, I’ve found LibreOffice Draw to be a bit lacking when it comes to preserving the original PDF contents. For example, one of the documents I had to sign had an appended LaTeX-generated subdocument in the last pages, and LibreOffice Draw was not able to preserve the font and style of those last pages after inserting the signature. Xournal++, on the other hand, preserved it without issues.
Method 4: falsisign
Suggested by Anisse on Mastodon.
falsisign is a tool intended to be used for automation and it probably falls outside of what most people would consider a friendly tool. However, it looks incredibly practical if you need to sign a lot of pages in a document, and its default behavior makes the final document look as if it had been printed, signed and then scanned again, if you want or need your document to look that way. There are options to disable that behavior, though.
Creating a nice version of your signature as a PNG file
This is getting long so I’ll cut it short: follow one of the available video tutorials on YouTube. I used the one I just linked, and I can boil it down to the following steps, but please follow the video in case of doubt:
-
If possible, scan your signature on a white paper using a pen that leaves a clear mark, like a marker pen or Pilot V5 or something like that.
-
If you don’t have a scanner, take a picture with your phone trying not to create a shadow from your body or the phone on the paper.
-
Import the image into Gimp and crop it as you see fit.
-
Decrease saturation to make it gray-scale.
-
Play with the color value curves, moving up the highest part of the curve so the white (or almost white) background becomes fully white, and moving down the lowest part of the curve so the trace becomes even more dark and clear, until you’re satisfied with the results.
-
Remove any specks from the white background using the drawing tools.
-
Duplicate the main layer, creating a copy of it.
-
Invert the colors of the layer copy so it becomes white-on-black.
-
Add a transparency mask to the original layer, with black as transparent.
-
Copy the inverted layer and paste it into the transparency mask of the original layer (this will make the white background transparent).
-
Export the final result as a PNG.
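The inversion-plus-mask steps above boil down to simple per-pixel math: a pixel’s opacity becomes the inverse of its brightness, so white paper turns fully transparent and dark ink stays opaque. A tiny pure-Python sketch of that mapping (my own illustration, not anything GIMP-specific):

```python
def gray_to_rgba(gray_pixels):
    # Black ink with alpha equal to the inverted gray value:
    # white background (255) -> alpha 0, black stroke (0) -> alpha 255.
    return [(0, 0, 0, 255 - g) for g in gray_pixels]

pixels = gray_to_rgba([255, 128, 0])  # background, mid gray, ink
```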
Caveats
Take into account that it’s easy to extract images from PDF files, so with this method it may be easy for someone to extract your signature from the generated document. However, this can also be done easily if you scan or take a picture of a physical document already signed, even if the signature crosses some lines in the document, with minimal image manipulation skills, so I don’t think it changes the threat model in a significant way compared to the most common solution.
The Dark Side of the Blog
If you’re browsing this blog from a device on which the preferred theme is dark, you may notice the blog now has a dark theme. This is an idea that had crossed my mind multiple times in the past months, but it’s not until very recently that I decided to take a serious look at how to do it. My decision to finally tackle the issue was motivated by “contradicting” reports I read online. Some people complain about how the lack of a light mode option is an accessibility or usability issue for them, making it impossible for them to read long texts. For other people, it’s the other way around. The only proper solution to this is to have both a dark mode and a light mode and use the one preferred by the user, which is what this blog does now. No JavaScript was harmed (or used) in the process, so let’s take a quick look at how to achieve this. In other words, follow me in this journey to the basic stuff from a non-web developer point of view.
The first thing I did was to replace all the colors in my CSS file with variables. This is not only helpful to create a dark theme, but it also helps keep the colors defined in a single place and makes changing them much easier. The programmer in me is now glad the color scheme is no longer full of magic constants. Achieving this is easy. If you previously had…
body {
    background: white;
}
Now you can have…
:root {
    --background-color: white;
}

body {
    background: var(--background-color);
}
When you define a CSS variable (or CSS custom property, which I think is the proper term to refer to them), you can give it an arbitrary name starting with a double dash. Later, you can use those properties as values with var(), passing the property name inside the parens. You’re not restricted to colors: you can define more things like linear gradients, filters, etc.
After that, defining an alternative dark or light theme is as easy as doing…
@media (prefers-color-scheme: dark) {
    :root {
        --background-color: black;
    }
}
And that’s it. You can now change the main colors to alternative versions for dark or light modes.
For my blog, the main theme is still going to be light. This is reflected in the fact that, when I insert images or other content and can’t easily make the background transparent, I’m going to default to white. However, I’ll try to add white margins in the future so images don’t look as bad in the dark theme as the Igalia logo next to the Vulkan logo I used in a recent post.
One small trick I’m using that required a bit more digging is handling the GitHub icon in the “About Me” page. It’s an SVG icon drawn in an almost-black color and it’s referenced as an external resource, so we cannot change the fill property of the SVG to switch to a different color when using dark mode. Yet, if you look at the page in dark mode, you’ll see the icon is displayed in an almost-white color. The key, in this case, is a custom invert filter. In the CSS, I specify:
:root {
    --github-icon-filter: invert(0);
}

@media (prefers-color-scheme: dark) {
    :root {
        --github-icon-filter: invert(1);
    }
}

.githubicon {
    filter: var(--github-icon-filter);
}
This means the icon colors are inverted for dark mode and are kept as is for light mode.
And that’s it! Sorry if I bored you with basic stuff (web developer maybe?) and glad if you found this interesting! See you on the dark side of the Moon.