What is the X.Org Foundation, anyway?
A few weeks ago the annual X.Org Foundation Board of Directors election took place. The Board of Directors has 8 members at any given moment, and members are elected for 2-year terms. Instead of renewing the whole board every 2 years, half the board is renewed every year. Foundation members, who must apply for or renew membership every year, form the electorate in the process. Their main duty is voting in board elections and occasionally voting on other changes proposed by the board.
As you may know, thanks to the work I do at Igalia and the trust of other Foundation members, I’m part of the board and currently serving the second year of my term, which will end in Q1 2024. Although my merits come from my professional life, I do not represent Igalia as a board member. However, to prevent companies from taking over the board, I must disclose my professional affiliation, and we must abide by the rule that prohibits more than two people with the same affiliation from serving on the board at the same time.
Because of the name of the Foundation, and for historical reasons, some people are confused about its purpose and sometimes think it acts as a governance body for certain projects, particularly the X server, but this is not the case. The X.Org Foundation wiki page at freedesktop.org has some bits of information, but I wanted to clarify a few points, like mentioning the Foundation has no paid employees, and explain what we do at the Foundation and what the Board of Directors does in practical terms.
Cue the music.
(“The Who - Who Are You?” starts playing)
The main points would be:
- The Foundation acts as an umbrella for multiple projects, including the X server, Wayland and others.
- The board of directors has no power to decide who has to work on what.
- The largest task is probably organizing XDC.
- Being a director is not a paid position.
- The Foundation pays for project infrastructure.
- The Foundation, or its financial liaison, acts as an intermediary with other orgs.
Umbrella for multiple projects
Some directors have argued in the past that we need to change the Foundation name to something different, like the Freedesktop.org Foundation. With some healthy sense of humor, others have advocated for names like Freedesktop Software Foundation, or FSF for short, which should be totally not confusing. Humor or not, the truth is the X.Org Foundation is essentially the Freedesktop Foundation, so the name change would be nice in my own personal opinion.
If you take a look at the Freedesktop Gitlab instance, you can navigate to a list of projects and sort them by stars. Notable mentions you’ll find in the list: Mesa, PipeWire, GStreamer, Wayland, the X server, Weston, PulseAudio, NetworkManager, libinput, etc. Most of them closely related to a free and open source graphics stack, or free and open source desktop systems in general.
X.Org server unmaintained? I feel you
As I mentioned above, the Foundation has no paid employees and the board has no power to direct engineering resources to a particular project under its umbrella. It’s not a legal question, but a practical one. Is the X.Org server dying and nobody wants to touch it anymore? Certainly. Many people who worked on the X server are now working on Wayland, creating and improving something that works better on a modern computer, with a GPU capable of doing things that were not available 25 years ago. It’s their decision, and the board can do nothing about it.
On a tangent, I’m feeling a bit old now, so let me say that when I started using Linux more than 20 years ago, people were already mentioning that most toolkits were drawing stuff to pixmaps and putting those pixmaps on the screen, ignoring most of the drawing capabilities of the X server. I’ve seen tearing when playing movies on Linux many times, and choppy animations everywhere. Attempting to use the X11 protocol over a slow network resulted in broken elements and generally unusable screens, problems which would not be present when falling back to a good VNC server and client (they do only one specialized thing, and they do it better).
For the last 3 or 4 years I’ve been using Wayland (first on my work laptop, nowadays also on my personal desktop) and I’ve seen it improve all the time. In my experience, animations are never choppy under Wayland, tearing is unheard of and things work more smoothly. Thanks to using the hardware better, Wayland may also give you improved battery life. I’ve posted in the past that you can even use NVIDIA with GNOME on Wayland these days, and things are even simpler if you use an Intel or AMD GPU.
Naturally, there may be a few things which are not ready for you yet. For example, maybe you use a DE which only works on X11. Or perhaps you use an app or DE which works on Wayland, but its support is not great and has problems there. If it’s an app, power users or people working on distributions can likely tune it to use XWayland by default, instead of Wayland, while bugs are ironed out.
X.Org Developers Conference
Ouch, there we have the “X.Org” moniker again…
Back on track, if the Foundation can do nothing about the lack of people maintaining the X server and does not set any technical direction for projects, what does it do? (I hear you shouting “nothing!” while waving your fist at me.) One of the most time-consuming tasks is organizing XDC every year, which is arguably one of the most important conferences, if not the most important one, for open source graphics right now.
Specifically, the board of directors sets up a committee composed of several board members and other Foundation members to review talk proposals, select which ones will have a place at the conference, talk to speakers about shortening or lengthening their talks, and arrange them into a schedule for the conference, which typically lasts 3 days. I chaired the paper committee for XDC 2022 and spent quite a lot of time on this.
The conference is free to attend for anyone and usually alternates location between Europe and the Americas. Some people may want to travel to the conference to present talks, but lack the budget to do so: maybe they’re students, don’t have enough money, or their company will not sponsor travel to the conference. For those cases, we have travel grants. The board of directors also reviews requests for travel grants and approves them when they make sense.
But that is only the final part. The board of directors selects the conference contents and prepares the schedule, but the job of running the conference itself (finding an appropriate venue, paying for it, maybe providing some free lunches or breakfasts for attendees, handling audio and video, streaming, etc.) falls into the hands of the organizer. Kid you not, it’s not easy to find someone willing to spend the needed amount of time and money organizing such a conference, so the work of the board starts a bit earlier: we have to contact people and request proposals to organize the conference. If we get more than one proposal, we have to evaluate them and select one.
As the conference nears, we have to send some more emails and convince companies to sponsor XDC. This is also really important and takes time as well. Money gathered from sponsors is not only used for the conference itself and travel grants, but also to pay for infrastructure and project hosting throughout the whole year. Which takes us to…
Spending millions in director salaries
No, that’s not happening.
Being a director of the Foundation is not a paid position. Every year we struggle a bit to get enough candidates for the 4 positions up for election. Many times we have to extend the nomination period.
If you read news about the Foundation having trouble finding candidates for the board, that barely qualifies as news because it’s almost the same every year. Which doesn’t mean we’re not happy when people spread the word and we receive some more nominations. Thank you!
Just like being an open source maintainer is sometimes a thankless task, not everybody wants to volunteer and do time-consuming tasks for free. Running the board elections themselves, approving membership renewals and requests every year, and sending voting reminders also take time. Believe me, I just did that a few weeks ago with help from Mark Filion from Collabora and technical assistance from Martin Roukala.
Project infrastructure
The Foundation spends a lot of money on project hosting costs, including Gitlab and CI systems, for projects under the Freedesktop.org umbrella. These systems are used every day and are fundamental for some projects and software you may be using if you run Linux. Running our own Gitlab instance and associated services helps keep the web decentralized and healthy, and provides more technical flexibility. Many people seem to appreciate those details, judging by the number of projects we host.
Speaking on behalf of the community
The Foundation also approaches other organizations on behalf of the community to achieve things that would be difficult otherwise.
To pick one example, we’ve worked with VESA to provide members with access to various specifications that are needed to properly implement some features. Our financial liaison, formerly SPI and soon SFC, signs agreements with the Khronos Group so that certification fees are waived for open source implementations of their standards.
For example, RADV is certified to comply with the Vulkan 1.3 spec, and the submission was made on behalf of Software in the Public Interest, Inc. The same applies to lavapipe, and similarly to Turnip, which is Vulkan 1.1 conformant.
Conclusions
The song is probably over by now and you have a better idea of what the Foundation does, and what the board members do to keep the lights on. If you have any questions, please let me know.
From Team Blue and Green to Team Red
It’s finally happened. I bought a brand new desktop computer in August 2014, almost 9 years ago. It had an Intel Haswell processor (i5-4690S), 8 GiB of RAM and a GeForce GTX 760. I later doubled the amount of RAM to 16 GiB (precise date unknown), replaced the GPU with a GTX 1070 in November 2016 and upgraded the CPU to an i7-4770K in October 2017. Since then, no more upgrades. It’s been my main personal (non-work) computer for the last few years.
But now I’m typing this from a different box. Yet the physical case and the OS installation are actually the same.
rg3@deckard
-----------
OS: Fedora Linux 37 (Thirty Seven) x86_64
Host: B650M DS3H
Kernel: 6.1.18-200.fc37.x86_64
Uptime: 15 mins
Packages: 3136 (rpm)
Shell: bash 5.2.15
Resolution: 2560x1440
DE: GNOME 43.3
WM: Mutter
WM Theme: Clearlooks-Phenix
Theme: Adwaita-dark [GTK2/3]
Icons: Adwaita [GTK2/3]
Terminal: tmux
CPU: AMD Ryzen 5 7600X (12) @ 4.700GHz
GPU: AMD ATI Radeon RX 6700/6700 XT/6750 XT / 6800M/6850M XT
Memory: 2574MiB / 15717MiB
A couple of weeks ago I grabbed an AMD Ryzen 5 7600X that was on sale together with a basic AM5 motherboard and a hard-to-find 2x8 GiB DDR5 6000 MHz CL36 kit.
I decided to save some money this time and kept the case, power supply and drives.
Surprisingly for me, the process was actually almost plug-and-play.
The pessimistic side of me was expecting boot problems due to missing chipset drivers or something like that, but no.
I replaced the components in the case for the new ones, plugged my drives in and Fedora booted without issues.
The only small detail I needed to fix was firing up nm-connection-editor and replacing the old interface name with the new one in the default DHCP connection.
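For reference, that change boils down to a single key in the connection profile. A hypothetical keyfile is shown below; the connection and interface names are made up, and nm-connection-editor edits the equivalent field graphically:

```ini
# /etc/NetworkManager/system-connections/Wired connection 1.nmconnection
[connection]
id=Wired connection 1
type=ethernet
# The old value pointed at the previous board's NIC (e.g. enp3s0).
interface-name=enp5s0
```

The same edit can be done with nmcli connection modify, or you can clear interface-name entirely so the profile matches any wired interface.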
Windows had no issues either, but it did require reactivating the license.
The one I had from 9 years ago was retail, so no problems with that.
My choice of a Ryzen 5 7600X was actually simple: these days, compared to Intel, Ryzen has a slight advantage in performance per watt even in mid-range CPUs, with Intel slowly catching up. The equivalent Intel competitor, the i5-13400F, while a very good CPU, features a mix of efficiency and performance cores. Its design is more complex than AMD’s and probably harder to handle in software, maybe more prone to scheduling mistakes by the OS. I run the 7600X in “Eco” mode which, for the record, means setting the PBO limits to manual mode and using the following values: PPT limit 88000, TDC limit 75000 and EDC limit 150000. These values are documented in several sources. Other motherboards have an easier way to toggle this, with a simple switch for Eco mode, but in mine the values need to be entered manually. Why did I get a 7600X only to run it in Eco mode instead of grabbing a plain 7600? Because the 7600X was on sale and significantly cheaper (240 vs 270 euros, final price).
A few days later I decided to replace the GPU too. I chose a Radeon RX 6700 (non-XT). Two reasons for the choice: Linux support with open-source drivers (including RADV, which is being worked on by an amazing group of developers hired by Valve and with whom I have the pleasure of interacting frequently while working on CTS) and the stellar price/performance ratio of that particular model. It’s frequently on sale for a bit over 300 euros where I live (I grabbed it for 330).
I’ve said in the past I’m not a fan of any brand, and I still say so. It’s a coincidence, favored by the market situation, that my CPU/GPU combo is now all made by AMD. I’m pretty sure in the future things may change again.
Replacing the GPU required more attention to detail, despite the replacement being conceptually and physically much easier than replacing the other components. On Windows, I ran DDU and removed all GPU drivers, leaving the computer ready for a GPU replacement. On Linux, I followed these steps:
- Uninstalled the NVIDIA drivers from RPM Fusion following their super-clear instructions.
- Edited /etc/default/grub to remove legacy kernel parameters used by NVIDIA, making sure nouveau was not blacklisted either on the command line or from /etc/modprobe.d.
- Regenerated /boot/grub2/grub.cfg using grub2-mkconfig to apply the new boot parameters.
- Rebooted and verified everything continued to work and that I was running GNOME on Wayland on Nouveau.
- Also ran dracut -f for good measure (probably not needed, but better safe than sorry).
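As a reference for the grub step, this is the kind of change involved. The exact parameters depend on how the NVIDIA drivers were installed; the ones below are those typically added when following the RPM Fusion instructions:

```shell
# /etc/default/grub, before (nouveau blacklisted, NVIDIA modesetting forced):
#   GRUB_CMDLINE_LINUX="rhgb quiet rd.driver.blacklist=nouveau modprobe.blacklist=nouveau nvidia-drm.modeset=1"
# After removing the NVIDIA-related parameters:
GRUB_CMDLINE_LINUX="rhgb quiet"

# Then regenerate the GRUB configuration so the change takes effect:
# sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```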
Then I turned the computer off, replaced the GPU, turned it back on and, voilà, plug-and-play on Linux. On Windows I had to download and install the official AMD drivers, and that was it.
All in all, I was surprised by how simple the whole process was, and glad that I didn’t have to reinstall or boot from installation media to fix stuff. There is, however, a stark contrast in terms of what it meant, performance-wise, to upgrade the CPU compared to the GPU. That deserves a rant I will leave for another blog post in the coming days.
Quick note about Hetzner cloud pricing
I use a Hetzner VPS “cloud” server for hosting this blog and recently discovered a small detail in its pricing that can save you a few euros under some circumstances. I want to clarify that this information is not exactly hidden. It’s clearly stated in their billing FAQ but, still, absent-minded people like myself may not be aware of it until they see the bill.
Price per hour and monthly cap
Cloud servers in Hetzner are prominently announced with a very visible price per month and also a price per hour displayed in a smaller font next to it. For example, take a look at the current price for a CX11 instance, the type that hosts this blog, without an IPv4 address applied. It’s the cheapest one they have (click on the image for the full size).
Keen eyes (not mine) will notice that the price per hour is not merely a clarification of the monthly price to help you calculate the cost of a server you use for less than a month. The price per hour is higher than the price per month in a typical 30-day month:
>>> 0.0063*24*30
4.536
This means that the price per month is actually a cap on the total price that applies if, and only if, you use the server for the whole month. It works like a loyalty discount.
When does this matter?
In many circumstances. For example, my blog server runs Fedora, because I use it on all my systems and I’m too lazy to use or learn anything else to host a blog. Anyway, that means roughly every 6 months there’s a new Fedora release and I have to upgrade the server. I could upgrade it in place, but I like reproducibility, so I have a semi-automated script/procedure that installs what I need on a brand new server and copies data from the old one. So, normally, I upgrade the OS by creating a new server, going through that process, verifying everything works and shutting down the old instance. This takes around 15 minutes.
What happens if I switch servers in the middle of a given month? That month, being optimistic and supposing I can switch instantly with no overlap in hours, the old server will not be used for the whole month: it will be used for half of it, and the new one will be used for the other half, but not a full month either. Neither of them gets the discounted monthly price and I have to pay, in total, a full month at the per-hour rate. So instead of paying €3.98 I pay €4.54. It’s just a few cents, but a 14% increase over the normal price. For the most expensive cloud instance, the difference in price is over €10.
Applying a different strategy
The best way to proceed in these cases, I think, is to switch servers on the last day of the month, this way:
- Spin up the new server that day, as late as possible in the evening.
- Migrate data from the old server to the new one.
- Wait until the next morning and shut down the old server.
The old server will be used for a whole month plus a few hours of the following one. The new server will be used for the whole following month, plus a few hours of the previous one.
Typically, this means you would pay the normal monthly price cap for both months, plus at most 12 extra hours at the per-hour rate for the overlap period.
>>> 0.0063*12
0.0756
Only 8 cents above the normal monthly price, or a 2% increase (at most) over a normal month, if you want to put it that way. This also applies to IPv4 addresses, which have a per-hour rate and a monthly cap just like servers. For the curious, adding both costs, I pay €4.59 in a normal month for the server as of the time of writing.
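If it helps, the whole comparison can be condensed into a few lines of Python. The rates are the CX11 figures quoted above, and the cap logic is my reading of their billing FAQ:

```python
# Hetzner bills per hour, but caps the bill at the monthly price for
# servers that run the whole month. Rates below are for a CX11 instance.
HOURLY_RATE = 0.0063    # EUR per hour
MONTHLY_CAP = 3.98      # EUR per full month

def server_cost(hours, hours_in_month=30 * 24):
    """Cost of one server for a month in which it ran `hours` hours."""
    if hours >= hours_in_month:
        return MONTHLY_CAP  # ran the whole month: the cap applies
    return hours * HOURLY_RATE

# Mid-month switch: each server runs half the month, neither gets the cap.
mid_month = server_cost(15 * 24) + server_cost(15 * 24)

# End-of-month switch: the old server runs the full month,
# the new one only overlaps for about 12 hours.
end_of_month = server_cost(30 * 24) + server_cost(12)

print(f"mid-month switch:    {mid_month:.2f} EUR")      # 4.54 EUR
print(f"end-of-month switch: {end_of_month:.2f} EUR")   # 4.06 EUR
```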
Using Firejail to minimize risk when running web browsers
I’ve blogged in the past about how I like to run my normal web browser as a different user. In other words, I think web browsers are the weakest link in the security chain of every desktop and workstation computer. Browsers fix security issues with every release and are used to access, download and execute programs and other documents from untrusted sources, in a wide variety of formats. When I run a web browser, sometimes I don’t know what I’m going to be opening. It may be a malicious web page that will try to exploit a vulnerability in the browser I’m using. Using the method I described in the previous link, I could run the browser process as another user, so it could not easily access my personal files, documents, cryptographic keys, etc. That method relied on running X11 and letting local users, or at least the user running the browser, connect to the X server owned by my own user.
There’s a small risk involved in that but, more importantly, since moving to Wayland, the method to allow other users to access your display server is not as straightforward.
In general, a solution involving Wayland means the web browser user needs access to some files in the XDG_RUNTIME_DIR directory, including the Wayland socket.
I used filesystem ACLs for that and, in my experience, the process is error-prone and unreliable.
Sometimes I’ve had to adjust the set of files, or the permissions I needed to grant to those files, and things have broken out of the blue after system upgrades.
The second source of risk comes from the fact that, if you want that web browser to be able to play sounds, for example when watching a video, you also need to give the web browser user access to your sound daemon.
I mentioned a method to share your user’s PulseAudio instance with other users and an update on that when I switched from PulseAudio to PipeWire.
Today I wanted to share a simpler approach to all of this, which is running your web browser, typically Firefox, under a very restricted environment using Firejail.
Firejail is an open source project, probably available from your package manager, that uses Linux namespaces, seccomp-bpf and capabilities to restrict what your web browser can do and access.
Notably, it ships profiles for multiple applications either based on blocklists or, in the case of Firefox (the main use case), allowlists.
When you run Firefox through Firejail, for example by running firejail firefox, the resulting Firefox process will be restricted in several ways and will not be able to access most of your home directory, except for the ~/Downloads directory and its own configuration and data directories.
If, on top of that, it’s running under Wayland, it will not be able to spy on your screen and other windows unless there’s a second vulnerability available in the Wayland compositor.
The following screenshot shows the file manager and Firefox displaying the contents of the home directory.
Firefox is running under Firejail and, as you can see, it does not display the whole directory contents.
In fact, it’s not only hiding some files: it’s also presenting custom versions of others inside its jail.
For example, I don’t have a .bashrc file in my home directory, and Firefox is seeing a “fake” one.
The src directory you can see from Firefox is also completely restricted in contents, and Firefox only sees one file in the whole hierarchy: my .gtkrc-2.0 configuration file, because I have it stored in a “dotfiles” repository under src and symlinked to its final location.
Since I discovered Firejail, I’ve switched to using it by default when running Firefox, ditching my ad-hoc mechanisms described in previous posts.
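By the way, the shipped profiles can be extended per user without touching system files: Firejail merges a local drop-in file with the stock profile. As a sketch (the extra directory below is hypothetical), something like this in ~/.config/firejail/firefox.local would let the jailed Firefox see one more directory, read-only:

```
# ~/.config/firejail/firefox.local
# Directives here are merged with the system firefox profile.
whitelist ${HOME}/shared-with-firefox
read-only ${HOME}/shared-with-firefox
```

This keeps your customizations separate from the profiles shipped by the package, so they survive upgrades.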
Feeling comfortable with Cascadia Code
A few years ago I blogged about switching my terminal and programming font from Terminus to Ubuntu Mono. It’s only fair, then, that I mention I’ve switched from Ubuntu Mono to Cascadia Code. I’ve been using Cascadia for many months now, probably over a year, and the experience has been great so far. The font was commissioned by Microsoft and released under the SIL Open Font License, which makes it available in the repositories of many Linux distributions. For example, it’s easily available in the official Fedora or Debian repositories.
When I decided to give it a try I was initially turned off by some inconsistencies in the shapes of some characters. In particular, the shape of the lowercase F glyph is a bit odd, due to the horizontal crossing line being quite low compared to similar features in other characters. In other words, Ubuntu Mono apparently was easier on the eyes due to its simpler and more consistent shapes. However, after using Cascadia Code for months, I can really vouch for it. It can be used comfortably for long programming sessions, the characters are quite distinct from one another, it’s elegant and I haven’t gotten tired of it at all. Summing up its advantages:
- The font has thick strokes, which is important to make it look good when you increase the font size, for those like me who don’t see as well as they did in their youth or simply prefer to configure fonts with a larger size.
- It’s very easy to read and doesn’t get tiring.
- It’s released under an actual open font license, making it widely available (contrary to Ubuntu Mono).
- The character size is more consistent with other fonts in the system, so it can be easily combined with them.
Regarding the last point, I mention it because fonts from the Ubuntu family tend to be smaller than other fonts in the system. Text at 16pt containing a 16pt Ubuntu Mono word will likely look a bit weird, with the Ubuntu Mono word being smaller than the surrounding text. Of course, the Ubuntu font family is internally consistent in this regard: if the surrounding text is also in an Ubuntu font, you won’t have this problem.
Anyway, if you haven’t had the chance, give the font a try. I’m using it now for my IDEs and terminals. Note: if you don’t like the programming ligatures (I don’t), you have several options. The easiest one is using the Cascadia Mono variant, which removes them completely.