My participation in XDC 2020

The 2020 X.Org Developers Conference took place from September 16th to September 18th. For the first time, due to the ongoing COVID-19 pandemic, it was a fully virtual event. While this meant that some interesting bits of the conference, like the hallway track, catching up in person with some people and doing some networking, were not entirely possible this time, I have to thank the organizers for their work in making the conference an almost flawless event. The conference was livestreamed directly to YouTube, which was the main way for attendees to watch the many different talks. freenode was used for the hallway track, with most discussions happening in the ##xdc2020 IRC channel. In addition, ##xdc2020-QA was used by attendees wanting to ask questions or add comments at the end of each talk.

Igalia was a silver sponsor of the event and we also participated with 5 different talks, including one by yours truly.

My talk about VK_EXT_extended_dynamic_state was based on my previous blog post, but it included a more detailed explanation of the extension as well as some additional commentary on how the extension was created. I took advantage of the possibility of using pre-recorded videos for the conference, as I didn’t fully trust my kids not to interrupt me in the middle of the talk. In the end I think it was a good idea and, from the presenter’s point of view, I also found that using a script and following it strictly (to some degree) prevented distractions and made the talk a bit shorter and more to the point, because I tend to beat around the bush when talking live. You can watch my talk in the embedded video below.

Slides for the talk are also available and below you can find a transcript of the talk.

<Title slide>

Hello, my name is Ricardo García, I work at Igalia as part of its Graphics team and today I will be talking about the extended dynamic state Vulkan extension. At Igalia I was involved in creating CTS tests for this extension and also, in a very minor capacity, in reviewing the spec while writing those tests. This extension is pretty simple and very useful, and the talk is divided into two parts. First I will talk about the extension itself and then I’ll reflect on a few bits about how this extension was created that I consider quite interesting.

<Part 1>

<Extension description slide>

So, first, what does this extension do? Its documentation says:

VK_EXT_extended_dynamic_state adds some more dynamic state to support applications that need to reduce the number of pipeline state objects they compile and bind.

In other words, as you will see, it makes Vulkan pipeline objects more flexible and easier to use from the application point of view.

<Pipeline diagram slide>

So, to give you some context, this is [the] typical graphics pipeline representation in many APIs like OpenGL, DirectX or Vulkan. You’ve probably seen variations of this a million times. The pipeline is divided into stages, some of them fixed-function, some of them programmable with shaders. Each stage usually takes some data from the previous stage and produces data to be consumed by the next one, apart from using other external resources like buffers or textures or whatever. What’s the Vulkan approach to represent this process?

<Creation structure slide>

Vulkan wants you to specify almost every single aspect of the previous pipeline in advance by creating a graphics pipeline object that contains information about how every stage should work. And, once created, most of these pipeline parameters or configuration values cannot be changed. As you can see here, this includes shader programs, how vertices are read and processed, depth and stencil tests, you name it. Pipeline objects are heavy objects in Vulkan and they are hard to create. Why does Vulkan want you to do that? The answer has always been this keyword: “optimization”. Giving all the information in advance gives every current or even future implementation more chances to optimize how the pipeline works. It’s the safe choice. And, despite this, you can see there’s a pipeline creation parameter with information about dynamic state. These are things that can be changed when using the pipeline without having to create a separate and almost identical pipeline object.
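
As a reference for what that dynamic state creation parameter looks like in code, here is a minimal C sketch; the variable names are made up for the example and the rest of the pipeline setup is omitted:

#include <vulkan/vulkan.h>

/* Classic dynamic state: these two pieces of state will be supplied when
 * recording commands instead of being baked into the pipeline object. */
static const VkDynamicState dynamicStates[] = {
    VK_DYNAMIC_STATE_VIEWPORT,
    VK_DYNAMIC_STATE_SCISSOR,
};

static const VkPipelineDynamicStateCreateInfo dynamicStateInfo = {
    .sType = VK_STRUCTURE_TYPE_PIPELINE_DYNAMIC_STATE_CREATE_INFO,
    .pNext = NULL,
    .flags = 0,
    .dynamicStateCount = 2u,
    .pDynamicStates = dynamicStates,
};

/* dynamicStateInfo would be plugged into
 * VkGraphicsPipelineCreateInfo::pDynamicState before calling
 * vkCreateGraphicsPipelines(). */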

<New dynamic states slide>

What the extension does should be pretty obvious now: it adds a bunch of additional elements that can be changed on the fly without creating additional pipelines. This includes things like primitive topology, front face vertex order, vertex stride, cull mode and more aspects of the depth and stencil tests, etc. A lot of things. Using them if needed means fewer pipeline objects, fewer pipeline cache accesses and simpler programs in general. As I said before, it makes Vulkan pipeline objects more flexible and easier to use from the application point of view, because more pipeline aspects can be changed on the fly when using these pipeline objects instead of having to create separate objects for each combination of parameters you may want to modify at runtime. This may make the application logic simpler and it can also help when Vulkan is used as the backend, for example, to implement higher level APIs that are not so rigid regarding pipelines. I know this extension is useful for some emulators and other API-translating projects.

<New commands slide>

Together with those states, it also introduces a new set of functions to change those parameters on the fly when recording commands that will use the pipeline state object.
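
For example, a pipeline created with VK_DYNAMIC_STATE_FRONT_FACE_EXT in its dynamic state can serve both the regular and the mirrored geometry of a reflection. A minimal C sketch, assuming the command buffer, pipeline and vertex count come from elsewhere and that the EXT entry points are available (in real code they are typically loaded with vkGetDeviceProcAddr):

#include <vulkan/vulkan.h>

void recordMirroredDraws(VkCommandBuffer cmdBuf, VkPipeline pipeline, uint32_t vertexCount)
{
    vkCmdBindPipeline(cmdBuf, VK_PIPELINE_BIND_POINT_GRAPHICS, pipeline);

    /* First draw with the normal winding order. */
    vkCmdSetFrontFaceEXT(cmdBuf, VK_FRONT_FACE_COUNTER_CLOCKWISE);
    vkCmdDraw(cmdBuf, vertexCount, 1u, 0u, 0u);

    /* Flip the front face for the reflected geometry: same pipeline object. */
    vkCmdSetFrontFaceEXT(cmdBuf, VK_FRONT_FACE_CLOCKWISE);
    vkCmdDraw(cmdBuf, vertexCount, 1u, 0u, 0u);
}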

<Pipeline diagram slide>

So, knowing that and going back to the graphics pipeline, the obvious question is: does this impact performance? Aren’t we reducing the number of optimization opportunities the implementation has if we use these additional dynamic states? In theory, yes. In practice, it depends on the implementation. Many GPUs and Vulkan drivers out there today have some pipeline aspects that are considered “dynamic” in the sense that they are easily changed on the fly without a perceptible impact on performance, while others are truly important for optimization. For example, take shaders. In Vulkan they’re provided as SPIR-V programs that need to be translated to GPU machine code, and creating pipelines when the application starts makes it easy to compile shaders beforehand to avoid stuttering and frame timing issues later, for example. And not only that. As you create pipelines, you’re telling the implementation which shaders are used together. Say you have a vertex shader that outputs 4 parameters, and it’s used in a pipeline with a fragment shader that only uses the first 2. When creating the pipeline the implementation can decide to discard instructions that are only related to producing the 2 extra unused parameters in the vertex shader. But other things like, for example, changing the front face? That may be trivial to change without affecting performance.

<Part 2>

<Eric Lengyel tweet slide>

Moving on to the second part, I wanted to talk about how this extension was created. It all started with an “angry” tweet by Eric Lengyel (sorry if I’m not pronouncing it correctly), who also happens to be the author of the previous diagram. He complained on Twitter that you couldn’t change the front face dynamically, which happens to be super useful for rendering reflections, and pointed to an OpenGL NVIDIA extension that allowed you to do exactly that.

<Piers Daniell reply slide>

This was noticed by Piers Daniell from NVIDIA, who created a proposal in Khronos. That proposal was discussed with other vendors (software and hardware) that chimed in on aspects that could be or should be made dynamic if possible, which resulted in the multi-vendor extension we have today.

<RADV implementation slide>

In fact, RADV was one of the first Vulkan implementations to support the extension thanks to the effort by Samuel Pitoiset.

<Promoters of Khronos slide>

This whole process got me thinking Khronos may sometimes be seen from the outside as this closed silo composed mainly of hardware vendors. Certainly, there are a lot of hardware vendors but if you take the list of promoter members you can see some fairly well-known software vendors as well, and API usability and adoption are important for both groups. There are many people in Khronos trying to make Vulkan easier to use even if we’re all aware that’s somewhat in conflict with providing a lower level API that should let you write performant applications.

<Khronos Contributors slide>

If you take a look at the long list of contributor members, that’s only shown partially here because it’s very long, you’ll notice a lot of actors from different backgrounds as well.

<Vulkan-Docs repo slide>

Moreover, while Khronos and its different Vulkan working groups are far from an open source project or community, I believe they’re certainly more open to contributions than many people think. For example, the Vulkan spec is published in a GitHub repo with instructions to build it (the spec is written in AsciiDoc) and this repo is open for issues and pull requests. So, obviously, if you want to change major parts of Vulkan and how some aspects of the API work, you’re going to meet opposition and maybe you should be joining Khronos to discuss things internally with everyone involved in there. However, while an angry tweet was enough for this particular extension, if you’re not well-known you may want to create an issue instead, explaining your use case, maybe with other colleagues chiming in on details or supporting your proposal. I know for a fact issues created in this public repo are discussed in periodic Khronos meetings. It may take some weeks if people are busy and there are a lot of things on the table, but they’re going to end up being discussed, which is a very good thing I was happy to see, and I want to put emphasis on that. I would like Khronos to continue doing that and I would like more people to take advantage of the public repos from Khronos. I know the people involved in the Vulkan spec want to make the text as clear as possible. Maybe you think some paragraph is confusing, or there’s a missing link to another section that provides more context, or something absurd is allowed by the spec and should be forbidden. You can try a reasoned pull request for any of those. Obviously, there are no guarantees it will go in, but it’s interesting in any case.

<Blend state tweet slide>

For example, in the Twitter thread I showed before, I tweeted a reply when the extension was published and, among a few retweets, likes and quoted replies, I found this very interesting tweet I’m showing you here, asking for the whole blend state to be made dynamic and indicating that would be game-changing for some developers and very interesting for web browsers. We all want our web browsers to leverage the power of the GPU as much as possible, right? So why not? I thought creating an issue in the public repo for this case could be interesting.

<Dynamic blend state issue slide>

And, in fact, it turns out someone had already created an issue about it, as you can see here.

<Tom Olson reply slide>

And in this case, in this issue, Tom Olson from ARM replied that the working group had been discussing it and it turns out in this particular case existing hardware doesn’t make it easy to make the blend state fully dynamic without possibly recompiling shaders under the hood and introducing unwanted complexity in the implementations, so it was rejected for now. But even if, in this case, the reply is negative, you can see what I was mentioning: the issue reached the working group, it was considered, discussed and the issue creator got a reply and feedback. And that’s what I wanted to show you.

<Final slide>

And that’s all. Thanks for listening! Any questions maybe?

The talk was followed by a Q&A section moderated, in this case, by Martin Peres. In the text below RG stands for Ricardo Garcia and MP stands for Martin Peres.

RG: OK…​ Hello everyone!

MP: OK, so far we do not have any questions. Jason Ekstrand has a comment: "We (the Vulkan Working Group) has had many contributions to the spec".

RG: Yeah, yeah, exactly. I mean, I don’t think it’s very well known but yeah, indeed, there are a lot of people who have already contributed issues, pull requests and there have been many external contributions already so these things should definitely continue and even happen more often.

MP: OK, I’m gonna ask a question. So…​ how much do you think this is gonna help layering libraries like Zink because I assume, I mean, one of the big issues with Zink is that you need to have a lot of pipelines precompiled and…​ is this helping Zink?

RG: I don’t know if it’s being used. I think I did a search yesterday to see if Zink was using the extension and I don’t remember if I found anything specific so maybe the Zink people can answer the question but, yeah, it should definitely help in those cases because OpenGL is not as strict as Vulkan regarding pipelines obviously. You can change more things on the fly and if the underlying Vulkan implementation supports extended dynamic state it should make it easier to emulate OpenGL on top of Vulkan. For example, I know it’s being used by VKD3D right now to emulate DirectX 12 and there are a few emulators out there which are using the extension because, you know, APIs for consoles are different and they can use this type of extensions to make code better.

MP: Agree. Jason also has another comment saying there are even extensions in flight from the Mesa community for some windowing-system related stuff.

RG: Yeah, I was happy to see yesterday…​ I think it was yesterday, well, here at this XDC that the present timing extension pull request is being handled right now on GitHub which I think is a very good thing. It’s a trend I would like to [see] continue because, well, I guess sometimes, you know, the discussions inside the Working Group and inside Khronos may involve IP or whatever so it’s better to have those discussions sometimes in private, but it is a good thing that maybe, you know, there are a few extensions that could be handled publicly in GitHub instead of the internal tools in Khronos. So, yeah, that’s a good thing and a trend I would like to see continue: extensions discussed in public.

MP: Yeah, sounds very cool. OK, I think we do not have any question…​ other questions or comments so let’s say thank you very much and…​

RG: Thank you very much and let me congratulate you for…​ to the organizers for organizing XDC and…​ everyone, enjoy the rest of the day, thank you.

MP: Thank you! See you in 13m 30s for the status of freedesktop.org’s GitLab cloud hosting.

Regarding Zink, at the time I’m writing this, there’s an in-progress merge request for it to take advantage of the extension. Regarding the present timing extension, its pull request is on GitHub and you can also watch a short talk from Day One of the conference. I also mentioned the extension being used by VKD3D. I was specifically referring to the VKD3D-Proton fork.

VK_EXT_extended_dynamic_state released for Vulkan

A few days ago, the VK_EXT_extended_dynamic_state extension for Vulkan was released and included for the first time as part of Vulkan 1.2.145. This is a pretty interesting extension that makes Vulkan pipelines more flexible and practical for many use cases. At Igalia, I was involved in getting this extension out the door as the author of its VK-GL-CTS tests and, in a very minor capacity, by reviewing the spec text and contributing a couple of small fixes to it.

Vulkan pipelines

The purpose of this Vulkan extension is to make Vulkan pipelines less rigid by allowing them to have certain values set dynamically when you use the pipeline instead of those values being set in stone when creating the pipeline. For those less familiar with Vulkan, Vulkan pipelines are one of the most “heavy” objects in the API. Vulkan typically has compute and graphics pipelines. For this extension, we’ll be talking about graphics pipelines. A pipeline object, when created, contains a lot of information about what the GPU needs to do when rendering a scene or part of a scene, like how triangle vertices need to be read from memory, the number of textures, buffers and images that will be used, parameters for color blending operations, depth and stencil tests, multisample antialiasing, viewports, etc.

Vulkan, being a low-overhead API that tries to help you squeeze as much performance as possible out of a GPU, wants you to specify all that information in advance so implementations (GPU plus driver) have higher chances of optimizing the process, both at pipeline creation time and at runtime. Every time you “bind a pipeline” (i.e. setting it as the active pipeline for future commands) you’re telling the implementation how everything should work, which is usually followed by commands telling the GPU to draw lots of geometry using the previous parameters.

Creating a pipeline may also involve compiling shaders to native GPU instructions. Shaders are “small” programs that run on the GPU when the rendering process reaches a programmable stage. When a GPU is drawing anything, the drawing process is divided into stages. Each stage takes a number of inputs both directly from the previous stage and as external resources (buffers, textures, etc), and produces a number of outputs to be directly consumed by the next stage or as side effects in external resources. Some of those stages are fixed and some are programmable with user-provided shader programs. When these shaders are not so small, compiling and optimizing them to native GPU instructions takes some time. Usually not a very long time, but every millisecond counts when you only have about 16 of them to draw the next frame in order to achieve 60 frames per second. Stuff like this is what drove the creation of the ACO shader compiler for the Mesa RADV driver and it’s also why some drivers hash shader contents and use a shader cache to check if that exact shader has been compiled before. It’s also why Vulkan wants you to create pipelines in advance if possible. Otherwise, if you realize you need a new pipeline in the middle of preparing the next frame in an action game, the pipeline creation process may make the game stutter at that point due to the extra processing time needed.

Vulkan gives you several possibilities to alleviate the problem. You can create every pipeline you may need in advance. This is one of the most effective approaches but may involve a good number of pipelines due to the different possible combinations of pipeline parameters you may want to use. Say you want to vary 7 different parameters independently from each other with two possible values each. That means you have to create 2^7 = 128 different pipelines and manage them in your application. Another option is using a pipeline cache that will speed up creation of pipelines identical or similar to other ones created in the past, as sketched below. This lets you focus only on the pipeline variants you need at a given point in time. Finally, Vulkan gives you the possibility of changing a few pipeline parameters on the fly instead of giving them fixed values at pipeline creation time. This is the dynamic state inside the pipeline.
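
As a small sketch of that second option, this is roughly how a pipeline cache is created and used in C; device and pipelineInfo are assumed to exist elsewhere:

VkPipelineCacheCreateInfo cacheInfo = {
    .sType = VK_STRUCTURE_TYPE_PIPELINE_CACHE_CREATE_INFO,
    .pNext = NULL,
    .flags = 0,
    .initialDataSize = 0, /* could be filled with cache data saved in a previous run */
    .pInitialData = NULL,
};

VkPipelineCache cache;
vkCreatePipelineCache(device, &cacheInfo, NULL, &cache);

/* Passing the cache to creation calls lets the implementation reuse work
 * from identical or similar pipelines created in the past. */
VkPipeline pipeline;
vkCreateGraphicsPipelines(device, cache, 1u, &pipelineInfo, NULL, &pipeline);

/* vkGetPipelineCacheData() can serialize the cache to disk for future runs. */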

Dynamic state and VK_EXT_extended_dynamic_state

Dynamic state helps on top of everything I mentioned before. It makes your application logic easier by not having to deal with so many different variations and it reduces the total number of times you may have to create a new pipeline, which may decrease initialization time, pipeline cache sizes and accesses, state changes and game stuttering. VK_EXT_extended_dynamic_state, when available and as its name implies, extends the number of pipeline elements that can be part of that dynamic state. It adds states like the culling mode, front face, primitive topology, viewport with count, scissor with count (previously, viewports and scissors could be changed dynamically, but not their counts), vertex input binding stride, depth test activation and writes, depth comparison operation, depth bounds activation and stencil test activation and operations. That’s a pretty large set of new dynamic elements.
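
All of this is opt-in: before using the new states, the application should check for and enable the corresponding feature. A minimal C sketch, with physicalDevice assumed to exist:

VkPhysicalDeviceExtendedDynamicStateFeaturesEXT extFeatures = {
    .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_EXTENDED_DYNAMIC_STATE_FEATURES_EXT,
    .pNext = NULL,
};

VkPhysicalDeviceFeatures2 features2 = {
    .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES_2,
    .pNext = &extFeatures,
};

vkGetPhysicalDeviceFeatures2(physicalDevice, &features2);

if (extFeatures.extendedDynamicState) {
    /* Enable VK_EXT_EXTENDED_DYNAMIC_STATE_EXTENSION_NAME and chain a filled
     * extFeatures struct into VkDeviceCreateInfo::pNext at device creation. */
}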

The obvious question that follows is whether using so many dynamic elements decreases performance, in the sense that it may reduce the optimization opportunities the implementation has because some details about the pipeline are not known in advance. The answer is that this really depends on the implementation. For example, in some implementations the culling mode or front face may be set in a register before drawing operations and there’s no practical difference between setting it when the pipeline is bound and setting it dynamically before a large set of drawing commands.

I’ve measured the impact of enabling every new dynamic state in a simple GPU-bound Vulkan program that displays a rotating model on screen and I haven’t noticed any performance impact with the NVIDIA proprietary driver and a GTX 1070 card, but your mileage may vary. As usual, measure before deploying.

VK_EXT_extended_dynamic_state can also help when Vulkan is used as the backend to implement other higher level APIs which are not as rigid as Vulkan itself and in which some drawing parameters can be changed on the fly, being up to the driver to implement those changes as efficiently as possible. We’re talking about OpenGL, or DirectX up to version 11. As you can imagine, it’s an interesting extension for projects like DXVK and it can help improve the state of Linux gaming through Wine and Proton.

Origins of VK_EXT_extended_dynamic_state

The story about how this extension came to be is also interesting. It all started as a reaction to an “angry” tweet by Eric Lengyel in which he lamented that he had to create two separate pipelines just to change the front face or winding order of triangles when rendering a reflection. That prompted Piers Daniell from NVIDIA to start a multivendor effort inside Khronos that resulted in VK_EXT_extended_dynamic_state. As you can read in the extension summary, several companies were involved: AMD, Arm, Broadcom, Google, Imagination, Intel, NVIDIA, and Valve.

For that reason, this extension is also one of the many success stories from the Khronos Group, a forum in which hardware and software vendors, big and small, participate designing and standardizing cross-platform solutions for the graphics industry. Many different points of view are taken into account when designing those solutions. If you look at the member list you’ll see plenty of known logos from hardware manufacturers and software developers, including companies making widely available game engines.

In this case an angry tweet was enough to spark an effort, but that’s not the ideal situation. You can propose specification improvements, extensions or new ideas using the Vulkan Docs repository. An issue could be enough and, for small changes, a pull request can be even better.

Visualizing images from VK-GL-CTS test results

When running OpenGL or Vulkan tests normally from VK-GL-CTS, the test suite executable will usually produce a file named TestResults.qpa containing test results in XML format. Sometimes, either by default or when a test fails, this file contains output images, typically obtained from the test output itself, and maybe a reference image the former is being compared to. In addition, sometimes an error mask is provided so implementors can easily detect cases of pixels falling outside the expected result range.

These images are normally converted to PNGs using 32 bits per pixel (RGBA8) and represented in the test log as a string of Base64-encoded binary data. Representing the result image in that format can be an approximation exercise when the original format or type is very different (like R32G32B32A32_SFLOAT or a 3D image), but in some situations the result image can be faithfully represented in that chunk of PNG-encoded data.

To view those images, the VK-GL-CTS README file mentioned the possibility of using the Cherry tool. Given that it requires setting up a web server and running tests in a special way, some people would typically rely on external tools or scripts instead, like the base64 command line tool included in GNU coreutils, which can encode and decode Base64 content.

My Igalia colleague Eduardo Lima, however, had another idea. Most people have a tool in their systems which is capable of directly displaying Base64-encoded PNG data: a web browser. With a little bit of JavaScript, he had created a self-contained, single-page web app you could use to view images by pasting the PNG-encoded version of those images directly. The tool created <img> elements on the page from the pasted text, using data:image/png;base64,<Base64-encoded data> as the image src property. It also allowed comparing two images side by side and was, in general, incredibly handy.

Alejandro Piñeiro, currently working on the Raspberry Pi Vulkan driver also at Igalia, suggested improving the existing tool by allowing it to read TestResults.qpa files directly in order to reduce friction. I’m far from a web programmer and I’m sorry for the many times I have probably sinned along the way, but I took his suggestions and my own pet peeves with the existing tool and implemented an improved version of it. I submitted my new tool for review and inclusion as part of VK-GL-CTS and I’m happy to say it landed not long ago. If you use VK-GL-CTS, do not hesitate to open qpa_image_viewer.html from the scripts subdirectory in your web browser and give it a go when visualizing test results.

I have also uploaded a copy of the tool to my own personal space at Igalia. Feel free to bookmark it and use it when needed. It’s just a few KB in size. As I mentioned before, it’s self-contained and standalone, so everything happens locally in your browser. You can read its source code as it’s also relatively small. You can open TestResults.qpa files directly from your system and you can also paste chunks of text containing <Image> elements. It accumulates images in the images section at the bottom, i.e., every time you tell it to process a chunk of text or process a new file, it will add the images it finds to the images section. To ease identifying problems, it also includes a built-in zoom tool. Images have the image-rendering: pixelated CSS property, which is supported in Chromium and WebKit-based browsers (sadly, not Firefox for now), making the zooming process use the nearest-neighbor approximation instead of a linear interpolation, so pixels are represented as faithfully as possible when scaling.

Replacing Vim with Visual Studio Code

For the better part of the last 20 years or so I’ve been a heavy Vim user and I’ve been using Vim as my main development environment. However, around a year ago I tried Visual Studio Code and I’ve been using it continuously ever since, tweaking some minor bits here and there. While I still use Vim as my main plain text editor (as a matter of fact, I’m typing this post using Vim), when it comes to writing code in either a small or large code base, I tend to use VSC. Note that most projects I work on are based on C/C++.

The reason for the change is simple: I just believe VSC is a better development environment. It’s not about the way the editor looks or about bells and whistles. VSC, simply put, has all the features most people expect to have in a modern IDE preconfigured and more or less working out of the box, or at least accessible with a few simple clicks and adjustments. It’s true you can configure Vim to give you a similar experience to VSC but it requires learning a respectable amount of new key combinations that you need to get used to (and slowly incorporate into your muscle memory) and, in addition, you need to add and manage every bit of new configuration yourself.

For example, if you search on the web for a list of useful Vim plugins for coders you’ll find plenty of articles and posts like this one from Alex Hunt recommending many things I was using in Vim, like fzf to quickly find files by name in the tree, lightline, a multiple cursors plugin, NerdTree, EditorConfig support, etc. Those are definitely great but VSC has all of that out of the box without having to do anything, save for installing a couple of extensions with a few mouse clicks, and I don’t need to use Vundle and remember to keep my plugins updated.

Obviously, not everything is perfect in the world of VSC. Thanks to Vim’s cindent, I remember having a more seamless experience indenting code, almost requiring no input from my side. In VSC I find myself hitting the Tab key or Shift+Tab key combination more frequently than I used to. Or, for example, Vim’s macros and pattern substitution commands were more powerful for automating text replacement compared to multiple cursors, but the reality is that multiple cursors are easier to use and good enough for 90% of the cases.

Strengths of Visual Studio Code

I’ve already mentioned a better experience out of the box and, in general, being easier to use. Not long ago, this “Making Emacs popular again” LWN.net article was circulating around and reached the front page in Hacker News. It’s about an internal discussion in the Emacs community regarding how to make Emacs more attractive to newcomers, and it points to a Stack Overflow survey that revealed Visual Studio Code was chosen as the main development environment by more than 50% of respondents. Both on LWN.net and Hacker News, some people were surprised about how fast, in a matter of a few years, Visual Studio Code came to offer a better out-of-the-box experience than Emacs, which has been out there for more than 40 years, but I don’t think that’s a fair comparison.

If you’ve ever had to work writing code for Windows or know someone who does, you probably know or have heard that Visual Studio, the original proprietary tool, is a great development environment. You can blame Microsoft for many things, but definitely not for forgetting to put man-hours and resources behind its development tools. Visual Studio has been praised for years for being powerful, smart and having excellent debugging facilities. So Visual Studio Code is not a new project coming out of nowhere. It’s a bunch of people from Microsoft trying to replicate and refine that experience using a multiplatform interface based on Electron, something that’s possible today but wouldn’t have been possible several years ago.

While doing that, they have embraced or created open specifications and standards to ease interoperability. For example, while Visual Studio is known for its “project approach” in C/C++, where you typically create a “solution” that may contain one or more configured “projects”, Visual Studio Code is way more relaxed. Coupled with Microsoft’s proprietary C/C++ plugin (more on that later), it will happily launch in a directory containing the source code for a C/C++ project and find a compile_commands.json file in there (a standard originating in the LLVM project). It will start indexing source code files and extracting the code structure, and it will be ready to assist you writing new code in seconds or a few minutes if the project is very large (you can, of course, start writing code immediately but its smart code completion features may not be fully available until it finishes indexing the project). It launches fast and caches important data and editor state, so the second day you will be able to continue where you left off immediately.
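
For reference, compile_commands.json is simply a JSON array with one entry per translation unit; a minimal example with hypothetical paths:

[
    {
        "directory": "/home/user/project/build",
        "command": "gcc -Iinclude -O2 -c ../src/main.c -o main.o",
        "file": "../src/main.c"
    }
]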

For its smart “IntelliSense” features it relies on an open standard called the “Language Server Protocol” that originated precisely in Visual Studio Code. Those smart features aim to give you all the information you need to write code while requiring the absolute minimum from you. For example, let’s suppose you’re writing a new statement that calls a function. If VSC can locate where that function is declared, it will display a tooltip with its prototype (or a list of prototypes to choose from if the function is overloaded) and will let you easily see the function arguments and their types, highlighting which one is next as you fill the list. If you wrote that function and you happened to include a comment above the function explaining what it does (just a comment, no special Doxygen-style formatting is required), it will display it too as part of the tooltip. Of course, when typing a period or an arrow to call an object method, it will also list the available methods and so on.

Also worth mentioning are the C/C++ debugging facilities provided by the proprietary C/C++ plugin. Sometimes it’s a joy to be able to so easily browse local variables, STL data structures or to get the value of variables merely by hovering the mouse cursor over them while reviewing the code (values will be revealed in a tooltip while in debugging mode), to name a few things it does right. For these tasks, VSC typically relies on GDB or LLDB in their “machine interface” modes on Linux, so it’s again leveraging interoperability.

In addition, it’s an extensible IDE with an add-ons market similar to the ones for Chrome and Firefox, which has allowed a community to grow around the project. Finally, the editor itself is fully available as real open source, but let me clarify that a bit before closing this post.

Visual Studio Code license

Like I said, the core of the editor is available under a real open source license. However, when you install Visual Studio Code using one of its official repositories (like the one available for Fedora), you will be downloading a binary that contains a few extra things and is itself released under a Freeware (not Free Software) license.

In that regard, you can choose to use binaries provided by the VSCodium project. VSCodium is to VSC what Chromium is to Chrome (hence, I believe, the name of the project). If you use VSCodium, a few extensions and plugins that are available for VSC will not be available in VSCodium. For example, Microsoft’s official C/C++ plugin, which is not released under a Free Software license either. Its source code is available, but the binary releases you get from the add-ons market are also released under a Freeware license. Specifically, its license prohibits using it in anything other than VSC, as you can read in the first numbered point.

This otherwise excellent C/C++ extension provides two key aspects of the VSC experience: “IntelliSense” and the debugging interface. As I mentioned before, code completion and assistance uses the Language Server Protocol under the hood. This matches what, for example, YouCompleteMe uses for Vim. As far as free software implementations of a language server go, for C/C++ you typically have clangd (YCM uses this one) and also ccls. Both have a VSC extension in its market under a proper FOSS license. I have tried both and I think ccls is better as of the time I’m writing this. ccls is, in my humble opinion, 95% there compared to Microsoft’s extension: it provides 90% of what Microsoft gives you, plus 5% in extra details that are missing from their proprietary extension, which is pretty cool.

The project wiki for ccls has some neat instructions about how to use it for code completion while relying on Microsoft’s extension for the debugger, which made me wonder if there’s any FOSS extension out there providing a nice debugger interface that can replace the one by Microsoft. So far I’ve found Native Debug. It might be good enough for you but I’ve found it to be a bit lacking in some aspects. At Igalia we currently do not have the resources needed to develop a better debugging extension or to help improve that one. However, I truly believe someone with deeper pockets and an interest in the free software ecosystem could and should fund an initiative to get the debugging facilities in VSC to the next level, releasing the result under a FOSS license. Coupled with clangd or ccls, that would make the proprietary C/C++ extension unnecessary and we could rely on an entirely free software solution.

Upgraded to Fedora 32 and some local mail tips

Fedora 32 was released a few days ago and I have already upgraded my home computers to it. The upgrade process was as short and smooth as always, so congratulations to everyone involved and also congratulations to everyone behind RPM Fusion, who had their Fedora 32 repositories ready from day 1. From my particular point of view this upgrade required adjusting a few things after installing and some of them are worth mentioning.

mDNS breaks after upgrading

As explained in the “Common F32 Bugs” wiki page, there’s a bug present when upgrading from Fedora 31 to Fedora 32 that will not happen when installing the system from scratch. If I recall correctly, two packages will fight to modify /etc/nsswitch.conf and that will probably result in mDNS being removed from that file as an option to resolve host names. It will not make much of a difference to many people, but some others may experience problems because of it. For example, in my case the networked work printer stopped “working” in the sense that CUPS wasn’t able to properly contact the printer, which led to confusing situations.

This upgrade bug, as far as I know, was already present when upgrading from Fedora 30 to Fedora 31. I recommend taking a look at /etc/nsswitch.conf just after upgrading and adding the mDNS options if they’re not present, as described in the wiki.

EarlyOOM is now enabled by default in Fedora

EarlyOOM is a nice thing to have that will try to kill offending processes if you are about to run out of memory, but before actually running out of it so your system will not become unresponsive. It’s one of those relatively simple solutions to a complex problem that will work well for 90% of the cases out there. It prioritizes killing web browsers and other programs known to use a lot of memory while preserving programs that are considered essential for a workstation session if they’re running.

If you didn’t have the package installed before, you may be missing it after upgrading, so it’s worth taking a look. You could check if the package is installed with rpm -qa | grep earlyoom. Once installed, you can verify it’s running with sudo systemctl status earlyoom. If not, you can enable the unit with sudo systemctl enable earlyoom and start it with sudo systemctl start earlyoom.

Periodic TRIM is now enabled by default

Before Fedora 32, I used to mount all my filesystems with the discard mount option. This is called continuous TRIM, but it turns out to have some limitations, like only being able to trim blocks that change from used to free and having some potential or theoretical security problems with disk encryption. Given this situation, as mentioned in the Arch Linux wiki, both Debian and Red Hat recommend using periodic trimming instead when circumstances allow it. Periodic trimming is achieved by running fstrim from time to time, a utility provided by the util-linux package, which also ships a systemd unit and timer file for that purpose.

If you were using the discard mount option before, you can remove it from /etc/fstab and the default filesystem mount options, if present, and enable the fstrim.timer unit if it’s not enabled (e.g. with sudo systemctl enable --now fstrim.timer).

Some Python 2 packages were purged

With Python 2.x reaching its end-of-life, Fedora has completed the essential purge of Python 2 packages. Many people will not be too affected by this as most outstanding Python packages have been ported to Python 3 or have supported Python 3 for a long time now, but a few of them will be lost. In my case, the getmail package was removed and I was using it as part of my mail forwarding script that you can see here, to receive mail both locally and forwarded to my main FastMail account when a problem happens in my system and a daemon wants to send me an email message (think smartd).

Fortunately, Postfix allows configuring several destinations in ~/.forward, one per line, so I changed my mail forwarding script to skip local mail delivery and added an extra line in my ~/.forward file, like this:

/var/spool/mail/rg3
|/home/rg3/bin/mailforward

This will keep both delivery destinations.

Bonus: local mail tip

Related to this, if a daemon generates email output before the network is completely available, the local delivery part will work but the mail forwarding part may not, so I consider it essential to be notified of local mail delivery. As you may know, many shells like Bash will allow you to set a mail spool file to be checked from time to time using the MAIL and MAILCHECK environment variables, the latter being used to specify the checking period. See man bash for more details. However, when a new mail message is detected, this will only print “You have new mail” before the prompt is displayed. If the command you just ran was a bit verbose it’s easy to miss the line, and it will not be printed again until you log in or a new mail message arrives.

In my opinion, a superior solution is to check for local mail availability every time the prompt is displayed, and to put something in the prompt itself when local mail is available. This way, the notification will stay in the prompt as long as needed and it’s not as easy to miss, especially if you use colors. You will probably want to use an mbox spool in this case, and remove every message from it after reading them (either deleting them like I do or moving read messages to another mail box).

To achieve what I explained above, I set MAIL to my local mail spool in mbox format (/var/spool/mail/mbox) and MAILCHECK to zero to disable periodic checks using bash. Then, I set PS1 like this:

PS1="$( printf '\001\033[93m\002$( test -s "$MAIL" && printf "[mail] " )\001\033[96m\002\\[email protected]\h:\001\033[95m\002\w\001\033[92m\002\$\001\033[0m\002 ' )"
export PS1

As you can see, I use printf as a more portable and shell-independent echo alternative to generate escape sequences for the prompt colors. The important part is that, as shown above, I run a command in the prompt by generating a literal “$(…​)” in it. This command checks if the mail spool exists and is not empty, printing [mail] before the prompt in a different color when it does. Due to the “$(…​)” sequence being included literally in the PS1 value, this is run every time the prompt is displayed. The notification about pending mail stays with the prompt until I read and delete the message, making it much harder to miss.