If you’ve been paying attention to the evolution of the Linux gaming ecosystem in recent years, including the release of the Steam Deck and the new Steam Deck OLED, it’s likely your initial reaction to the blog post title is a simple “OK”. However, I’m coming from a very particular place so I wanted to explain my point of view and the significance of this, and hopefully you’ll find the story interesting.
As a background, let me say I’ve always gamed on Windows when using my PC. If you think I’m an idiot for doing so lately, especially because my work at Igalia involves frequently interacting with Valve contractors like Samuel Pitoiset, Timur Kristóf, Mike Blumenkrantz or Hans-Kristian Arntzen, you’d be more than right. But hear me out. I’ve always gamed on Windows because it’s the safe bet. With a couple of small kids at home and very limited free time, when I game everything has to just work. No fiddling around with software, config files, or wasting time setting up the software stack. I’m supposed to boot Windows when I want to play, play, and then turn my computer off. The experience needs to be as close to a console as possible. And, for anything non-gaming, which is most of it, I’d be using my Linux system.
In recent years, thanks to the work done by Valve, the Linux gaming stack has improved a lot. Despite this, I’ve kept gaming on Windows for a variety of reasons:
For a long time, my Linux disk only had a capacity of 128GB, so installing games was not a real possibility due to the amount of disk space they need.
Also, I was running Slackware and installing Steam and getting the whole thing running implied a fair amount of fiddling I didn’t even want to think about.
Then, by the time I was running Fedora on a larger disk, I had kids and didn’t want to take any risks or waste time on it.
So, what changed?
Earlier this year I upgraded my PC and replaced an old Intel Haswell i7-4770k with a Ryzen R5 7600X, and my GPU changed from an NVIDIA GTX 1070 to a Radeon RX 6700. The jump in CPU power was much bigger and more impressive than the more modest jump in GPU power. But talking about that and the sorry state of the GPU market is a story for another blog post. In any case, I had put up with the NVIDIA proprietary driver for many years and I think, on Windows and for gaming, NVIDIA is the obvious first choice for many people, including me. Dealing with the proprietary blob under Linux was not particularly problematic, especially with the excellent way it’s handled by RPMFusion on Fedora, where essentially you only have to install a few packages and you can mostly forget about it.
However, given my recent professional background I decided to go with an AMD card for the first time. I wanted to use a fully open source graphics stack and I didn’t want to think about making compromises in Wayland support or other fronts whatsoever. Plus, at the time I upgraded my PC, the timing was almost perfect for me to switch to an AMD card, because:
AMD cards were, in general, performing better than NVIDIA cards at the same price, except for ray tracing.
The RX 6700 non-XT was on sale.
Its performance was roughly on par with a PS5.
It didn’t draw a ton of power like many recent high-end GPUs (175W, similar to the 1070 and its 150W TDP).
After the system upgrade, I did notice a few more stability problems when gaming under Windows, compared to what I was used to with an NVIDIA card. You can find thousands of opinions, comments and anecdotes on the Internet about the quality of AMD drivers, and a lot of people say they’re a couple of steps below NVIDIA’s. It’s not my intention at all to pile on, but it’s true that, in my own personal experience, I’ve had more crashes in games and faced more weird situations since I switched to AMD. Normally it doesn’t get to the point of being annoying, but sometimes it’s a bit surprising, and I could definitely notice that increase in instability without, I believe, any bias on my side. Which takes us to Far Cry 6.
A few days ago I finished playing Doom Eternal and its expansions (really nice game, by the way!) and decided to go with Far Cry 6 next. I’m slowly working my way up to some more graphically demanding games that I didn’t feel comfortable playing on the 1070. I went ahead and installed the game on Windows. Being a big 70GB download (100GB on disk), that took a bit of time. Then I launched it, adjusted the keyboard and mouse settings to my liking and went to the video options menu. The game had chosen the high preset for me and everything looked good, so I attempted to run the in-game benchmark to see if the game performed well with that preset (I love it when games have built-in benchmarks!). After a few seconds in a loading screen, the game crashed and I was back to the desktop. “Oh, what a bad way to start!”, I thought, without knowing what lay ahead. I launched the game again; same thing.
Over the course of the 2 hours that followed, I tried everything:
Launching the main game instead of the benchmark, just in case the bug only happened in the benchmark. Nope.
Lowering quality and resolution.
Disabling any advanced setting.
Trying windowed mode, or borderless full screen.
Vsync off or on.
Disabling the overlays for Ubisoft, Steam, AMD.
Rebooting multiple times.
Uninstalling the drivers normally as well as using DDU and installing them again.
Same result every time. I also searched on the web for people having similar problems, but got no relevant search results anywhere. Yes, a lot of people both using AMD and NVIDIA had gotten crashes somewhere in the game under different circumstances, but nobody mentioned specifically being unable to reach any gameplay at all. That day I went to bed tired and a bit annoyed. I was also close to having run the game for 2 hours according to Steam, which is the limit for refunds if I recall correctly. I didn’t want to refund the game, though, I wanted to play it.
The next day I was ready to uninstall it and move on to another title in my list but, out of pure curiosity, given that I had already spent a good amount of time trying to make it run, I searched for it on the Proton compatibility database to see if it could be run on Linux, and it seemed to be possible. The game appeared to be well supported and it was verified to run on the Deck, which was good because both the Deck and my system have an RDNA2 GPU. In my head I wasn’t fully convinced this could work, because I didn’t know if the problem was in the game (maybe a bug with recent updates) or the drivers or anywhere else (like a hardware problem).
And this was, for me, when the fun started. I installed Steam on Linux from the Gnome Software app. For those who don’t know it, it’s like an app store for Gnome that acts as a frontend to the package manager.
Steam showed up there with 3 possible sources: Flathub, an “rpmfusion-nonfree-steam” repo and the more typical “rpmfusion-nonfree” repo. I went with the last option and soon I had Steam in my list of apps. I launched that and authenticated using the Steam mobile app QR code scanning function for logging in (which is a really cool way to log in, by the way, without needing to recall your username and password).
My list of installed games was empty and I couldn’t find a way to install Far Cry 6 because it was not available for Linux. However, I thought there should be an easy way to install it and launch it using the famous Proton compatibility layer, and a quick web search revealed I only had to right-click on the game title, select Properties and choose to “Force the use of a specific Steam Play compatibility tool” under the Compatibility section. Click-click-click and, sure, the game was ready to install. I let it download again and launched it.
Some stuff pops up about processing or downloading Vulkan shaders and I see it doing some work. In that first launch, the game takes more time to start compared to what I had seen under Windows, but it ends up launching (and subsequent launches were noticeably faster). That includes some Ubisoft Connect stuff showing up before the game starts and so on. Intro videos play normally and I reach the game menu in full screen. No indication that I was running it on Linux whatsoever. I go directly to the video options menu, see that the game again selected the high preset, I turn off VSync and launch the benchmark. Sincerely, honestly, completely and totally expecting it to crash one more time and that would’ve been OK, pointing to a game bug. But no, for the first time in two days this is what I get:
The benchmark runs perfectly, no graphical glitches, no stuttering, frame rates above 100FPS normally, and I had a genuinely happy and surprised grin on my face. I laughed out loud and my wife asked what was so funny. Effortless. No command lines, no config files, nothing.
As of today, I’ve played the game for over 30 hours and the game has crashed exactly once out of the blue. And I think it was an unfortunate game bug. The rest of the time it’s been running as smooth and as perfect as the first time I ran the benchmark. Framerate is completely fine and way over the 0 frames per second I got on Windows because it wouldn’t run. The only problem seems to be that when I finish playing and exit to the desktop, Steam is unable to stop the game completely for some reason (I don’t know the cause) and it shows up as still running. I usually click on the Stop button in the Steam interface after a few seconds, it stops the game and that’s it. No problem synchronizing game saves to the cloud or anything. Just that small bug that, again, only requires a single extra click.
Then I remembered something that had happened a few months before, prior to starting to play Doom Eternal under Windows. I had tried to play Deathloop first, another game in my backlog. However, the game crashed every few minutes and an error window popped up. The amount and timing of the crashes didn’t look constant, and lowering the graphics settings would sometimes let me play a bit longer, but in any case I wasn’t able to finish the game’s intro level without crashes and being very annoyed. Searching for the error message on the web, I saw it looked like a game problem apparently affecting not only AMD users but also NVIDIA ones, so I had mentally classified it as a game bug and, as in the Far Cry 6 case, had given up on running the game without refunding it, hoping to be able to play it in the future.
Now I was wondering if it was really a game bug and, even if it was, if maybe Proton could have a workaround for it and maybe it could be played on Linux. Again, ProtonDB showed the game to be verified on the Deck with encouraging recent reports. So I installed Deathloop on Linux, launched it just once and played for 20 minutes or so. No crashes and I got as far as I had gotten on Windows in the intro level. Again, no graphical glitches that I could see, smooth framerates, etc. Maybe it was a coincidence and I was lucky, but I think I will be able to play the game without issues when I’m done with Far Cry 6.
In conclusion, this story is another data point that tells us the quality of Proton as a product and software compatibility layer is outstanding. In combination with high-quality open source Mesa drivers like RADV, I’m amazed the experience can actually be better than gaming natively on Windows. Think about that: the Windows game binary running natively on an official DX12 or Vulkan driver crashes more and doesn’t work as well as the same game running on top of a Windows compatibility layer, with a graphics API translation layer, on top of a different OS kernel and a different Vulkan driver. Definitely amazing to me, and it speaks volumes about the work Valve has been doing on Linux. Or it could also speak badly of AMD’s Windows drivers, or both.
Sure, some new games on launch have more compatibility issues, bugs that need fixing, maybe workarounds applied in Proton, etc. But even in those cases, if you have a bit of patience, play the game some months down the line and check ProtonDB first (ideally before buying the game), you may be in for a great experience. You don’t need to be an expert either. Not to mention that some of these details are even better and smoother if you use a Steam Deck as compared to an (officially) unsupported Linux distribution like I do.
A few weeks ago the annual X.Org Foundation Board of Directors election took place. The Board of Directors has 8 members at any given moment, and members are elected for 2-year terms. Instead of renewing the whole board every 2 years, half the board is renewed every year. Foundation members, which must apply for or renew membership every year, are the electorate in the process. Their main duty is voting in board elections and occasionally voting in other changes proposed by the board.
As you may know, thanks to the work I do at Igalia, and the trust of other Foundation members, I’m part of the board and currently serving the second year of my term, which will end in Q1 2024. Despite my merits coming from my professional life, I do not represent Igalia as a board member. However, to prevent companies from taking over the board, I must disclose my professional affiliation, and we must abide by the rule that prohibits more than two people with the same affiliation from being on the board at the same time.
Because of the name of the Foundation and for historical reasons, some people are confused about its purpose and sometimes they tend to think it acts as a governance body for some projects, particularly the X server, but this is not the case. The X.Org Foundation wiki page at freedesktop.org has some bits of information but I wanted to clarify a few points, like mentioning the Foundation has no paid employees, and explain what we do at the Foundation and the tasks of the Board of Directors in practical terms.
Cue the music.
(“The Who - Who Are You?” starts playing)
The main points would be:
The Foundation acts as an umbrella for multiple projects, including the X server, Wayland and others.
The board of directors has no power to decide who has to work on what.
The largest task is probably organizing XDC.
Being a director is not a paid position.
The Foundation pays for project infrastructure.
The Foundation, or its financial liaison, acts as an intermediary with other orgs.
Umbrella for multiple projects
Some directors have argued in the past that we need to change the Foundation name to something different, like the Freedesktop.org Foundation. With some healthy sense of humor, others have advocated for names like Freedesktop Software Foundation, or FSF for short, which should be totally not confusing. Humor or not, the truth is the X.Org Foundation is essentially the Freedesktop Foundation, so the name change would be nice in my own personal opinion.
If you take a look at the Freedesktop Gitlab instance, you can navigate to a list of projects and sort them by stars. Notable mentions you’ll find in the list: Mesa, PipeWire, GStreamer, Wayland, the X server, Weston, PulseAudio, NetworkManager, libinput, etc. Most of them closely related to a free and open source graphics stack, or free and open source desktop systems in general.
X.Org server unmaintained? I feel you
As I mentioned above, the Foundation has no paid employees and the board has no power to direct engineering resources to a particular project under its umbrella. It’s not a legal question, but a practical one. Is the X.Org server dying and nobody wants to touch it anymore? Certainly. Many people who worked on the X server are now working on Wayland and creating and improving something that works better in a modern computer, with a GPU that’s capable of doing things which were not available 25 years ago. It’s their decision and the board can do nothing.
On a tangent, I’m feeling a bit old now, so let me say when I started using Linux more than 20 years ago people were already mentioning most toolkits were drawing stuff to pixmaps and putting those pixmaps on the screen, ignoring most of the drawing capabilities of the X server. I’ve seen tearing when playing movies on Linux many times, and choppy animations everywhere. Attempting to use the X11 protocol over a slow network resulted in broken elements and generally unusable screens, problems which would not be present when falling back to a good VNC server and client (they do only one specialized thing and do it better).
For the last 3 or 4 years I’ve been using Wayland (first on my work laptop, nowadays also on my personal desktop) and I’ve seen it improve all the time. When using Wayland, animations are never choppy in my own experience, tearing is unheard of and things work more smoothly, as far as my experience goes. Thanks to using the hardware better, Wayland may also give you improved battery life. I’ve posted in the past that you can even use NVIDIA with Gnome on Wayland these days, and things are even simpler if you use an Intel or AMD GPU.
Naturally, there may be a few things which may not be ready for you yet. For example, maybe you use a DE which only works on X11. Or perhaps you use an app or DE which works on Wayland, but its support is not great and has problems there. If it’s an app, likely power users or people working on distributions can tune it to make it use XWayland by default, instead of Wayland, while bugs are ironed out.
X.Org Developers Conference
Ouch, there we have the “X.Org” moniker again…
Back on track, if the Foundation can do nothing about the lack of people maintaining the X server and does not set any technical direction for projects, what does it do? (I hear you shouting “nothing!” while waving your fist at me.) One of the most time-consuming tasks is organizing XDC every year, which is arguably one of the most important conferences, if not the most important one, for open source graphics right now.
Specifically, the board of directors will set up a commission composed of several board members and other Foundation members to review talk proposals, select which ones will have a place at the conference, talk to speakers about shortening or lengthening their talks, and put them on a schedule to be used at the conference, which typically lasts 3 days. I chaired the paper committee for XDC 2022 and spent quite a lot of time on this.
The conference is free to attend for anyone and usually alternates location between Europe and the Americas. Some people may want to travel to the conference to present talks there but they may lack the budget to do so. Maybe they’re a student or they don’t have enough money, or their company will not sponsor travel to the conference. For that, we have travel grants. The board of directors also reviews requests for travel grants and approves them when they make sense.
But that is only the final part. The board of directors selects the conference contents and prepares the schedule, but the job of running the conference itself (finding an appropriate venue, paying for it, maybe providing some free lunches or breakfasts for attendees, handling audio and video, streaming, etc) falls on the organizer. I kid you not, it’s not easy to find someone willing to spend the needed amount of time and money organizing such a conference, so the work of the board starts a bit earlier. We have to contact people and request proposals to organize the conference. If we get more than one proposal, we have to evaluate them and select one.
As the conference nears, we have to fire off some more emails and convince companies to sponsor XDC. This is also really important and takes time as well. Money gathered from sponsors is not only used for the conference itself and travel grants, but also to pay for infrastructure and project hosting throughout the whole year. Which takes us to…
Spending millions in director salaries
No, that’s not happening.
Being a director of the Foundation is not a paid position. Every year we suffer a bit to be able to get enough candidates for the 4 positions that will be elected. Many times we have to extend the nomination period.
If you read news about the Foundation having trouble finding candidates for the board, that barely qualifies as news because it’s almost the same every year. Which doesn’t mean we’re not happy when people spread the news and we receive some more nominations, thank you!
Just like being an open source maintainer can sometimes be a thankless task, not everybody wants to volunteer and do time-consuming work for free. Running the board elections themselves, approving membership renewals and requests every year, and sending voting reminders also takes time. Believe me, I just did that a few weeks ago with help from Mark Filion from Collabora and technical assistance from Martin Roukala.
The Foundation spends a lot of money on project hosting costs, including Gitlab and CI systems, for projects under the Freedesktop.org umbrella. These systems are used every day and are fundamental for some projects and software you may be using if you run Linux. Running our own Gitlab instance and associated services helps keep the web decentralized and healthy, and provides more technical flexibility. Many people seem to appreciate those details, judging by the number of projects we host.
Speaking on behalf of the community
The Foundation also approaches other organizations on behalf of the community to achieve some stuff that would be difficult otherwise.
To pick one example, we’ve worked with VESA to provide members with access to various specifications that are needed to properly implement some features. Our financial liaison, formerly SPI and soon SFC, signs agreements with the Khronos Group that let them waive fees for certifying open source implementations of their standards.
For example, RADV is certified to comply with the Vulkan 1.3 spec, and the submission was made on behalf of Software in the Public Interest, Inc. The same goes for lavapipe, and similarly for Turnip, which is Vulkan 1.1 conformant.
The song is probably over by now and you have a better idea of what the Foundation does, and what the board members do to keep the lights on. If you have any questions, please let me know.
It’s finally happened. I bought a brand new desktop computer in August 2014, almost 9 years ago. It had an Intel Haswell processor (i5-4690s), 8 GiB of RAM and a GeForce GTX 760. I later doubled the amount of RAM to 16 GiB (precise date unknown), replaced the GPU with a GTX 1070 in November 2016 and upgraded the CPU to an i7-4770K in October 2017. Since then, no more upgrades. It’s been my main personal (non-work) computer for the last few years.
But now I’m typing this from a different box. Yet the physical box and the OS installation are actually the same.
rg3@deckard
-----------
OS: Fedora Linux 37 (Thirty Seven) x86_64
Host: B650M DS3H
Kernel: 6.1.18-200.fc37.x86_64
Uptime: 15 mins
Packages: 3136 (rpm)
Shell: bash 5.2.15
Resolution: 2560x1440
DE: GNOME 43.3
WM: Mutter
WM Theme: Clearlooks-Phenix
Theme: Adwaita-dark [GTK2/3]
Icons: Adwaita [GTK2/3]
Terminal: tmux
CPU: AMD Ryzen 5 7600X (12) @ 4.700GHz
GPU: AMD ATI Radeon RX 6700/6700 XT/6750 XT / 6800M/6850M XT
Memory: 2574MiB / 15717MiB
A couple of weeks ago I grabbed an AMD Ryzen 5 7600X that was on sale together with a basic AM5 motherboard and a hard-to-find 2x8 GiB DDR5 6000 MHz CL36 kit.
I decided to save some money this time and kept the case, power supply and drives.
Surprisingly for me, the process was actually almost plug-and-play.
The pessimistic side of me was expecting boot problems due to missing chipset drivers or something like that, but no.
I replaced the components in the case for the new ones, plugged my drives in and Fedora booted without issues.
The only small detail I needed to fix was firing up
nm-connection-editor and replacing the old interface name with the new one in the default DHCP connection.
Windows had no issues either, but it did require reactivating the license.
The one I had from 9 years ago was retail, so no problems with that.
My choice of a Ryzen 5 7600X was actually simple: these days, compared to Intel, Ryzen has a slight advantage in performance-per-watt even in mid-range CPUs, with Intel now slowly catching up. The equivalent Intel competitor, the i5-13400F, while a very good CPU, features a mix of efficiency and performance cores. Its design is more complex than AMD’s and probably harder to handle in software, maybe more prone to scheduling mistakes by the OS. I run the 7600X in “Eco” mode which, for the record, means setting the PBO limits to manual mode and using the following values: PPT limit 88000, TDC limit 75000 and EDC limit 150000. These values are documented in several sources. Other motherboards have an easier way to toggle this with a simple switch for Eco mode but, in the one I have, the values need to be entered manually. Why did I get a 7600X only to run it in Eco mode instead of grabbing a plain 7600? Because the 7600X was on sale and significantly cheaper (240 vs 270 euros, final price).
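For the record, those three BIOS values appear to be in milliwatts and milliamps (my reading of the firmware fields; treat the units as an assumption), so a quick sanity check shows they line up with what’s usually described as the 65 W Eco preset:

```python
# Eco-mode PBO limits as entered in the BIOS; units assumed to be mW and mA.
ppt_mw, tdc_ma, edc_ma = 88000, 75000, 150000

ppt_w = ppt_mw / 1000   # package power limit in watts
tdc_a = tdc_ma / 1000   # sustained current limit in amps
edc_a = edc_ma / 1000   # peak current limit in amps

# An 88 W PPT is what's commonly reported for the AM5 65 W "Eco" preset.
print(f"PPT {ppt_w:.0f} W, TDC {tdc_a:.0f} A, EDC {edc_a:.0f} A")
```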
A few days later I decided to replace the GPU too. I chose a Radeon RX 6700 (non-XT). Two reasons for the choice: Linux support with open-source drivers (including RADV, which is being worked on by an amazing group of developers hired by Valve and with whom I have the pleasure of interacting frequently while working on CTS) and the stellar price/performance ratio of that particular model. It’s frequently on sale for a bit over 300 euros where I live (I grabbed it for 330).
I’ve said in the past I’m not a fan of any brand, and I still say so. It’s a coincidence, favored by the market situation, that my CPU/GPU combo is now all made by AMD. I’m pretty sure in the future things may change again.
Replacing the GPU required more attention to detail, despite the replacement being conceptually and physically much easier than replacing the other components. On Windows, I ran DDU and removed all GPU drivers, leaving the computer ready for a GPU replacement. On Linux, I followed these steps:
Uninstalled the NVIDIA drivers from RPM Fusion following their super-clear instructions.
Edited /etc/default/grub to remove legacy kernel parameters used by NVIDIA, making sure nouveau was not blacklisted either on the kernel command line or from the modprobe configuration.
Ran grub2-mkconfig to apply the new boot parameters.
Rebooted and verified everything continued to work and I was running GNOME on Wayland on Nouveau.
Ran dracut -f for good measure (probably not needed but better safe than sorry).
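Condensed into commands, the Linux side of the swap looked roughly like this (a sketch for Fedora with RPM Fusion; the exact package glob and paths may differ on your setup):

```shell
# Remove the RPM Fusion NVIDIA driver packages.
sudo dnf remove 'xorg-x11-drv-nvidia*'

# Edit /etc/default/grub and drop NVIDIA-related kernel parameters
# (e.g. anything blacklisting nouveau), then regenerate the GRUB config:
sudo grub2-mkconfig -o /boot/grub2/grub.cfg

# Reboot, confirm GNOME runs on Wayland with nouveau, then rebuild the initramfs.
sudo dracut -f
```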
Then I turned the computer off, replaced the GPU, turned it back on and, voilà, plug-and-play on Linux. On Windows I had to download and install the official AMD drivers, and that was it.
All in all, I was surprised by how simple the whole process was, and glad that I didn’t have to reinstall or boot from installation media to fix stuff. There is, however, a stark contrast in terms of what it meant, performance-wise, to upgrade the CPU compared to the GPU. That deserves a rant I will leave for another blog post in the coming days.
I use a Hetzner VPS “cloud” server for hosting this blog and recently discovered a small detail in its pricing that can save you a few euros under some circumstances. I want to clarify this information is not exactly hidden. It’s clearly stated in their billing FAQ but, still, some absent-minded people like myself may not be aware of it until they see the bill.
Price per hour and monthly cap
Cloud servers in Hetzner are prominently announced with a very visible price per month and also a price per hour displayed in a smaller font next to it. For example, take a look at the current price for a CX11 instance, the type that hosts this blog, without an IPv4 address applied. It’s the cheapest one they have (click on the image for the full size).
Keen eyes (not mine) will notice the price per hour is not merely a clarification of the monthly price to help you calculate the cost of a server you use for less than a month. The price per hour is higher than the price per month in a typical 30-day month:
>>> 0.0063*24*30
4.536
This means that the price per month is actually a limit in the total price that applies if, and only if, you use the server for the whole month. It works like a loyalty discount.
When does this matter?
In many circumstances. For example, my blog server runs Fedora. Because I use it in all my systems and I’m lazy and I don’t want to use or learn anything else to host a blog. Anyway, that means roughly every 6 months there’s a new Fedora release and I have to upgrade the server. I could upgrade it in place but I like reproducibility, so I have a semi-automated script/procedure that installs what I need on a brand new server and copies data from the old one. So, normally, I upgrade the OS by creating a new server, going through that process, verifying everything works and shutting down the old instance. This takes around 15 minutes.
What happens if I switch servers in the middle of a given month? That month, being optimistic and supposing I can switch instantly with no overlap in hours, the old server will not be used for the whole month. It will be used for half of it, and the new one will be used for the other half, but not a full month either. Neither of them gets the discounted monthly price and I have to pay, in total, a full month at the per-hour rate. So instead of paying €3.98 I pay €4.54. It’s just a few cents, but a 14% increase over the normal price. In the most expensive cloud instance, the difference in price is over €10.
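Using the CX11 numbers from above (€0.0063/hour, €3.98/month cap), the penalty for a mid-month switch is easy to reproduce:

```python
# CX11 prices taken from the post: hourly rate and monthly cap.
hourly = 0.0063
monthly_cap = 3.98

# Mid-month switch: the old and new server each run a partial month,
# so neither reaches the cap and every hour is billed at the hourly rate.
hours_in_month = 24 * 30
mid_month_total = hourly * hours_in_month

increase = mid_month_total / monthly_cap - 1
print(f"€{mid_month_total:.2f} instead of €{monthly_cap:.2f} (+{increase:.0%})")
```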
Applying a different strategy
The best way to proceed in these cases, I think, is to switch servers on the last day of the month, this way:
Spin up the new server that day as late as possible in the evening.
Migrate data from the old server to the new one.
Wait for the next day in the morning and shut down the old server.
The old server will be used for a whole month plus a few hours of the following one. The new server will be used for the whole following month, plus a few hours of the previous one.
Typically, this means you would pay the normal monthly price cap for both months, plus no more than 12 hours at the per-hour rate in excess during the overlapping period.
>>> 0.0063*12
0.0756
Only 8 cents above the normal monthly price, or a 2% increase (at most) over a normal month if you want to put it that way. This also applies to IPv4 addresses, which have a per-hour rate and a monthly cap just like servers. For the curious, adding both costs I pay €4.59 on a normal month for the server as of the time I’m writing this.
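Putting both strategies side by side for the month of the switch (same assumed CX11 prices as before):

```python
hourly, cap = 0.0063, 3.98  # CX11 prices from the post

# Mid-month switch: two partial months, everything billed hourly.
mid_month = hourly * 24 * 30

# End-of-month switch: the old server reaches the monthly cap,
# plus at most 12 overlap hours for the new one at the hourly rate.
end_of_month = cap + hourly * 12

print(f"mid-month: €{mid_month:.2f}, end-of-month: €{end_of_month:.2f}")
```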
I’ve blogged in the past about how I liked to run my normal web browser under a different user. In other words, I think web browsers are the weakest link in the security chain of every desktop and workstation computer. Browsers fix security issues with every release and are used to access, download and execute programs and other documents from untrusted sources, in a wide variety of formats. When I run a web browser, sometimes I don’t know what I’m going to be opening. It may be a malicious web page that will try to exploit a vulnerability in the browser I’m using. Using the method I described in the previous link, I could run the browser process as another user, so it cannot easily access my personal files, documents, cryptographic keys, etc. That method relied on running X11 and letting local users, or at least the user running the browser, connect to the server owned by my own user.
There’s a small risk involved in that but, more importantly, since moving to Wayland, the method to allow other users to access your display server is not as straightforward.
In general, a solution involving Wayland means the web browser user needs access to some files in the
XDG_RUNTIME_DIR directory, including the Wayland socket.
I used filesystem ACLs for that and, in my experience, the process is error-prone and unreliable.
Sometimes I’ve had to adjust the set of files, or the permissions I needed to grant to those files, and things have broken out of the blue after system upgrades.
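For reference, the ACL-based sharing I’m moving away from looked something like this (a sketch only; the “browser” username and the “wayland-0” socket name are assumptions, and the exact file set kept changing on me):

```shell
# Let the dedicated "browser" user traverse my runtime directory
# and read/write the Wayland compositor socket.
setfacl -m u:browser:x  "$XDG_RUNTIME_DIR"
setfacl -m u:browser:rw "$XDG_RUNTIME_DIR/wayland-0"
```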
The second source of risk comes from the fact that, if you want the web browser to be able to play sounds, for example when watching a video, you also need to give the web browser user access to your sound daemon.
I mentioned a method to share your user’s PulseAudio instance with other users and an update on that when I switched from PulseAudio to PipeWire.
Today I wanted to share a simpler approach to all of this, which is running your web browser, typically Firefox, under a very restricted environment using Firejail.
Firejail is an open source project, probably available from your package manager, that uses Linux namespaces, seccomp-bpf and capabilities to restrict what your web browser can do and access.
Notably, it ships profiles for multiple applications either based on blocklists or, in the case of Firefox (the main use case), allowlists.
When you run Firefox through Firejail, for example by running
firejail firefox, the resulting Firefox process will be restricted in several ways and will not be able to access most of your home directory, except for the
~/Downloads directory and its own configuration and data directories.
If, on top of that, it’s running under Wayland, it will not be able to spy on your screen and other windows unless there’s a second vulnerability available in the Wayland compositor.
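Trying it out is as simple as it sounds; a couple of commands to experiment with (assuming Firejail is installed from your package manager):

```shell
# Launch Firefox inside Firejail's bundled allowlist profile.
firejail firefox

# From another terminal, list the running sandboxes and their PIDs...
firejail --list

# ...and peek at what the jailed process sees in $HOME
# (replace PID with a real sandbox PID from the list above).
firejail --join=PID ls ~
```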
The following screenshot shows the file manager and Firefox displaying the contents of the home directory.
Firefox is running under Firejail and, as you can see, it does not display the whole directory contents.
In fact, it’s not only not displaying every file, but also using custom versions of some of them inside its jail.
For example, I don’t have a
.bashrc file in my home directory and Firefox is seeing a “fake” one.
The src directory you can see from Firefox is also completely restricted in contents and Firefox only sees one file in the whole hierarchy: my
.gtkrc-2.0 configuration file because I have it stored in a “dotfiles” repository under
src and symlinked to the final location.
Since I discovered Firejail, I’ve switched to using it by default when running Firefox, ditching my ad-hoc mechanisms described in previous posts.