Migrating my Fedora installation to a new drive

Posted on . Updated on .

A few days ago I bought a heavily discounted 500GB Crucial MX500 SSD to replace the still perfectly serviceable 128GB Intel 320 Series SSD that stored my Fedora installation. I had been running into space problems with some workflows lately, and 500GB is enough for now, letting me forget about the problem.

One of the things I like to do when migrating to a new computer or hard drive is to keep my Linux installation as it is instead of reinstalling. Sometimes, reinstalling and restoring the most important settings would be faster, but I prefer to keep everything as it is if possible even if it takes a bit longer.

I only ran into one major problem, so this post will serve as documentation for my future self. It’s the first time I’ve done this operation with Fedora.

I plugged my new drive in using a spare SATA cable to have both drives connected at the same time. I created the partition table for the new drive and an ext4 file system covering the whole drive.
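
Roughly, the commands look like this (/dev/sdb is an assumed device name; double-check with lsblk before touching anything):

    # Create a GPT partition table and a single partition covering the whole drive.
    parted --script /dev/sdb mklabel gpt
    parted --script -a optimal /dev/sdb mkpart primary ext4 0% 100%
    # Create the ext4 file system on the new partition.
    mkfs.ext4 -L fedora /dev/sdb1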

When I started using Linux I tried several partitioning schemes, like making a specific partition for /home or /var, but, while those schemes do have a place in some contexts, for a simple desktop computer with only one Linux system installed, having a single file system is much simpler and more effective, in my opinion. You won’t have to think hard about how much space you’ll need in each partition, or face the nightmare of failing to take something into account and having to resize file systems.

If you use a popular file system like ext4 you won’t have to create a separate /boot partition either. In addition, I also avoid creating a swap partition and use a swap file when needed. I know performance is not as good with it, but it’s much more flexible and, if you start swapping hard, performance is going to suck anyway.
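
As a rough sketch, creating and enabling a swap file looks like this (the 4 GiB size and the /swapfile path are just examples):

    # Create a 4 GiB swap file; adjust the size to taste.
    dd if=/dev/zero of=/swapfile bs=1M count=4096 status=progress
    chmod 600 /swapfile
    mkswap /swapfile
    swapon /swapfile
    # To make it permanent, add a line like this to /etc/fstab:
    # /swapfile none swap defaults 0 0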

Back to the main subject: once the new file system was created, I booted from a live SystemRescueCd and mounted both the old and the new file systems. Using this live system, in retrospect, had some small risks because, as I will explain later, I needed to run some commands after chrooting into the new file system while /dev, /proc and /sys were bind-mounted, and my Fedora installation, including its kernel, could be somewhat incompatible with the running kernel from the live system. I should probably have used a Fedora installation disk instead.

I made sure no hidden files were present in the old root directory and proceeded to run cp -a /old/root/* /new/root. If I had more partitions I would have needed to replicate both structures with additional mounts before running the cp command. cp -a is supposed to preserve every file attribute, including extended attributes and SELinux security labels, I think, but I’ll come back to this later. From this point onwards, every file-related operation I mention in the post refers to the new hard drive.
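
For my future self, the mount-and-copy step boils down to something like this (device names and mount points are assumptions; adjust them to whatever the live system shows):

    # Mount the old and new root file systems from the live environment.
    mkdir -p /mnt/old /mnt/new
    mount /dev/sda1 /mnt/old   # old drive (assumed device name)
    mount /dev/sdb1 /mnt/new   # new drive (assumed device name)
    # Copy everything, preserving permissions, ownership, xattrs and SELinux labels.
    cp -a /mnt/old/* /mnt/new/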

After copying finished, I edited /etc/fstab and GRUB’s configuration file to replace the old root file system UUID with the new one in both places. If you’re not mounting your Linux file systems by UUID, start doing it now. Then, I bind-mounted (mount --bind) /sys, /proc and /dev on the new file system and chrooted into it from the live CD. I regenerated the initial RAM disk just in case with dracut -f --kver FEDORA_KERNEL_VERSION, where FEDORA_KERNEL_VERSION is the newest kernel version installed on Fedora, and then rebooted.
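
Sketched out, these steps from the live CD look more or less like this (same assumed mount points as above; FEDORA_KERNEL_VERSION is a placeholder for the installed kernel version):

    # Find the UUID of the new file system to use in /etc/fstab and GRUB's config.
    blkid /dev/sdb1
    # Bind-mount the special file systems and chroot into the new root.
    for d in /dev /proc /sys; do mount --bind "$d" /mnt/new"$d"; done
    chroot /mnt/new /bin/bash
    # Inside the chroot, regenerate the initial RAM disk for the installed kernel.
    dracut -f --kver FEDORA_KERNEL_VERSION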

I was greeted with a GRUB error message saying it couldn’t find the file system with the old UUID, and it dropped me to a console prompt where “help”, “?” and similar commands were not valid, so it wasn’t of much use to me. I had forgotten to reinstall GRUB and it was now failing to find the files it needed for its early stages, before most useful features were available.

Back on the live CD, I remounted the new file system, bind-mounted the special file systems again and chrooted into it once more. This time, I ran grub2-install to reinstall GRUB and, after rebooting, I was finally greeted with GRUB’s menu. Selecting the newest Fedora kernel appeared to boot the system successfully during the first second, but happiness didn’t last long: journald was failing to start for some reason, which in turn prevented many units in the boot process from working, and I couldn’t even get a virtual terminal. The error message was “Failed to start Journal Service”. A web search on that error didn’t spit out many interesting results but, after some time (and I honestly don’t remember exactly how), I realized it could be related to SELinux.
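
The missing step, run from inside the chroot, is simply this (/dev/sdb is again an assumed device name, and this applies to a BIOS/MBR setup; an EFI system needs a different procedure):

    # Reinstall GRUB to the new drive's boot sector (BIOS/MBR boot assumed).
    grub2-install /dev/sdb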

Something had gone wrong while copying files between the old drive and the new drive with cp -a from the live CD, and SELinux labels were wrong on the new file system. I don’t know the exact reason and I honestly didn’t have time to investigate, but the quick solution was booting from the live CD again and editing /etc/selinux/config. Merely changing SELinux to disabled or permissive mode allowed me to boot normally into my migrated Fedora installation (yay!). But, if possible, I wanted to keep using SELinux due to the additional security and because in day-to-day operations it never interferes with my workflow. To fix the labels, I left SELinux configured in permissive mode (disabled will not work for the following step) and created an /.autorelabel file.
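
From the live CD, the fix amounts to something like this on the new file system (paths shown relative to the new root; prefix them with the mount point if you are not chrooted):

    # Switch SELinux to permissive mode so the system can boot and relabel itself.
    sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
    # Ask Fedora to relabel the whole file system on the next boot.
    touch /.autorelabel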

After rebooting, Fedora sees the file and doesn’t proceed to boot normally. Instead, it changes to a special mode in which SELinux labels are reset to their default values. Then, the autorelabel file is removed and the system reboots. Finally, I changed SELinux to enforcing mode and verified I could still boot normally from the new drive. Success and end of story.

Summary:

  • Re-create your file system hierarchy in the new drive.

  • Mount both hierarchies from a live CD.

  • Copy files from one hierarchy to the other one with cp -a.

  • chroot into the new hierarchy after bind-mounting /proc, /sys and /dev.

  • Edit /etc/fstab and GRUB’s configuration file to mention the new UUIDs.

  • Reinstall GRUB.

  • Regenerate the initial RAM disk for the system kernel just in case.

  • Change SELinux to permissive mode and create an /.autorelabel file.

  • Boot into the new drive, wait for SELinux labels to be reapplied and let it reboot.

  • Verify everything works, then change SELinux to enforcing mode and reboot.

  • Verify everything continues to work.

Patrick Volkerding's financial situation

Posted on .

I had a daughter on July 17th and I’ve been a bit busy these first weeks, so I didn’t find out about Volkerding’s financial situation until a few days ago. I don’t think this matter received major coverage on Hacker News, but it was covered by LWN.net. As soon as I found out how bad things were, I sent a few tens of dollars his way via his PayPal donation link.

It’s heartbreaking to read about his situation, taking into account I used Slackware for 12 years before switching to Fedora and got a Slackware subscription as soon as I could afford it. It’s saddening to imagine he was only receiving a small part of that subscription money, if any, so I felt compelled to send some money to him directly. I urge any Slackware user to do the same if possible.

When I switched from Slackware to Fedora more than 3 years ago (wow, time flies when you have children!) I explained my reasons and mentioned it was common knowledge he wasn’t making a lot of money on Slackware, but I could never have imagined it was that bad. Maintaining Slackware is a full-time job and worth a full-time salary. It’s work thousands of people are benefiting from.

Looking at the numbers he posted, he mentions Slackware 14.2, the latest stable release as of the time I’m writing this, generated almost $100K in revenue, which means, at $40 per subscription, Slackware could have nearly 2500 paying subscribers. My guess is the majority of those are people who don’t mind paying $40 a year for Volkerding to continue working on Slackware. After all, with a subscription they get charged that amount whenever a new Slackware release comes out, which may very well be once a year. Receiving the DVDs at home is fine, but for most of them a simple thank-you letter would be just as fine. There’s no reason those $100K couldn’t go directly to Volkerding every year or every couple of years.

As I mentioned in the past, and as others mentioned in the LinuxQuestions.org thread, I think Volkerding needs to adapt a bit to the current times. Maximizing revenue as the well progressively dries up is as important as trying to stop it from drying up. In this sense, even if it means opening up his personal finances to everyone, to the extent that Slackware is a one-man project, people need to know about his situation, his revenue goals (Wikipedia style), his technical goals with the project and other needs that arise. It’s disheartening to read he could only recently get hold of an EFI machine, for example (not to mention the situation of his home and family, taking into account he has a daughter). If he was lacking that hardware, I’m sure a handful of users could have put together a few bucks and sent one his way. This is a minor but significant example, in my opinion, of things that could be done differently nowadays.

I’m glad to read the public response was outstanding, as he thanked everyone in the Change Log entry from July 27th, and I wish him, his family and Slackware the best in the future.

Thank you, Patrick, for all the hard work you’ve put into Slackware for more than 25 years that many people like me have directly benefited from. Best regards from a fellow dad and former Slackware user.

Game Review: Deus Ex: Mankind Divided

Posted on .

It’s been a while since my previous post. I’ve been quite busy with the renovation of what became my new home a few days ago. In fact, I’m still pretty busy with the finishing touches and, frankly, a bit overwhelmed with all the work that’s still pending, more so taking into account I’m only a few months away from becoming a father again. Of course, I still find time here and there to play some games and relax. I’m currently enjoying Hollow Knight but I’m far from being ready to review it. Previously, I played Deus Ex: Mankind Divided and I want to share some thoughts about it.

Mankind Divided was generally well received by critics and fans, but it was specifically criticized for its microtransactions and presumably short campaign length. You can count me in the group of fans of the series, and my perspective may be a bit rosy, but I think it’s a great game. I disliked the whole microtransactions affair, the in-game “triangle codes” tied to a mobile app and other stuff like Breach mode, but the really good news here is that you can ignore all of that (it never gets in your way) and still enjoy a great game. It’s not revolutionary like the original Deus Ex and it’s not as good as Human Revolution, in my opinion, but it’s still pretty good.

The graphics and attention to detail in most environments are amazing, even if the appearance of a few characters is still a bit “deusexy”, as I commented in my previous post. The gameplay is good and very similar to Human Revolution, with similar achievements for a Pacifist approach or Foxiest of the Hounds if you want to go that route (I did). I welcomed most gameplay changes. The comeback of Multitools from the original Deus Ex is nice and flexible. It allows you to bypass specific locks if you lack the skill, paying the price of not getting any experience points. Disabling robots and turrets permanently with EMP grenades is no longer possible, favoring other game mechanics like the remote hacking augmentation, the specific computer hacking skills and armor piercing rounds. I disliked the changes to the computer hacking screen. Its GUI is a bit more cumbersome and the “Stop Worm” button location made me click it by mistake many times when trying to reach an object in the upper part of the screen. The increased level decoration detail in modern games sometimes makes it hard to spot interesting items and objects, but the problem is elegantly solved here and integrated in-game with the Magpie augmentation.

As for complaints about the game’s length, I believe it ended right where it was supposed to end. It didn’t catch me by surprise. I may be wrong, but it looked clear to me that the game was about to end and leave some plot lines unresolved. It reminded me of the original Star Wars or Matrix film trilogies. The first films in those stand alone. Due to their success, more films were planned, and both second films (Matrix Reloaded and The Empire Strikes Back) end with cliffhangers and unresolved plot lines. I believe something similar is bound to happen with this hypothetical “Jensen trilogy”. Rumors with somewhat credible sources circulating on the Internet put the blame for the microtransactions and for splitting the story in two on the game’s publisher, Square Enix.

The story does have less meat, but it’s interesting to see it slowly tie into elements from the original Deus Ex. We’ve got almost every character there already: Lucius DeBeers, Beth DuClare, Morgan Everett, Stanton Dowd, Bob Page, Joseph Manderley, etc. Even a prototype for the Morpheus AI is mentioned in one of the in-game pocket secretaries. The game’s side missions are very interesting and provide much needed support to the main story. A fast player will easily get 20 to 30 hours of gameplay, and a slower, completionist player like myself may get over 40 or 50 hours out of it in the first playthrough, so I refuse to call it a short game.

Another question is whether the maps are less varied and whether that contributes to the game feeling smaller or shorter than Human Revolution. I don’t think there’s an easy answer to that. The game only has one big hub, located in the city of Prague, but it’s so big it had to be divided in two: a small starting area and a larger one. Each of those has several buildings that can be visited, as well as a moderately large sewer system and a number of sub-areas. For example, the large Prague hub includes the Dvali apartment complex and theater as well as the Palisade Bank. Additional maps take you to different areas, including the relatively large and complex Golem City, which is as large as a hub by itself. I’ve praised multilayer maps and levels with interconnected architecture in the past, and this is one of those games featuring amazing level design, with the Augmented Rights Coalition headquarters in Golem City, as well as the Church of the MachineGod apartment complex, deserving a special mention.

To me, the game graphics, sound and technological aspects are worth a 9 and the gameplay is easily worth an 8.5. The overall score could be an 8.5, bound by its gameplay.

Regarding the future of the game series, I’d like to see one more Jensen game to resolve part of the remaining open plot lines. After that, the franchise should probably move to other characters. The Deus Ex universe is rich and complex, so it has room for telling many stories. I read someone on reddit say they should make a game about Paul Denton. It certainly has potential and could be nice. It could explain in detail how UNATCO was formed. Eventually, someone should remake or, to be precise, re-imagine the original Deus Ex (probably my favorite PC game of all time). Of course, I don’t mean a mere remake. Human Revolution proved it’s possible to take a franchise and update its core gameplay to modern standards. I read a Human Revolution review that said the game played like a love letter to the original Deus Ex. Not as revolutionary, but a worthy homage to the original game and a nice update to the franchise. That’s what I would expect from a Deus Ex remake: a big game following roughly the same plot as the original, with varied locations and updated gameplay, directed with a reasonable but firm hand and without fear of taking the risks needed to make the necessary changes.

I’ll have mine without microtransactions, thanks.

Gaming, anti-aliasing, lights and performance

Posted on . Updated on .

It’s been a few months since my last game review and I wanted to write about the last batch of games I’ve been playing and how they use different anti-aliasing and lighting techniques to improve the way they look on screen.

Doom

A few months ago I played Doom, the reboot from 2016. It was one of the most praised games that year and I can certainly see its virtues. However, I think gamers in general, and PC gamers in particular, were blinded by its brilliant technology and performance on different systems and forgot a bit about the somewhat basic gameplay.

To me, Doom is a solid 8.5 game, but it’s far from a GOTY award. It’s fun, it’s replayable and it’s divided into maps that are themselves divided, for the most part, into different arenas where you fight hordes of monsters. This simple concept, coupled with a simple plot, makes it easy to enjoy the game in short gaming sessions, clearing arena after arena. For obsessive-compulsive completionists like me, this division makes the game less addictive, which is arguably a good thing, compared to other games where there’s always a pending secondary quest, a question mark on a map or a new item to get. Doom provides great replay value and a quite difficult ultra nightmare mode with permadeath. I intended to complete the game in that mode but I dropped out, despite observing constant progress in my playthroughs, because it was going to take too long and I wasn’t enjoying the ride.

How does Doom manage to run so well compared to other games? The underlying technology is fantastic, but I’d add it’s not an open-world game, and it doesn’t have vegetation or dozens of NPCs to control. Maps are full of small rooms and corridors, and even semi-open spaces are relatively simple. In-game 3D objects don’t go overboard with polygon counts; they depend on good textures and other effects to look good. Textures themselves are detailed but not in excess, and they’re used wisely: interactive elements get more detailed textures while ornamental items get simpler ones. In general, it performs well for the same reasons Prey, for example, also performs well.

Doom offers several interesting anti-aliasing options, many of them cheap and effective for that specific game, including several with a temporal anti-aliasing (TAA) component. TAA offers the best-quality anti-aliasing in modern games, as of the time I’m writing this, if used well. It can be cheap and very effective, but the drawbacks may include ghosting and an excessively blurred image.

I experienced TAA extensively for the first time when I played Fallout 4 some months ago (I reviewed it here). In this game, the ghosting effect of TAA is very noticeable the first time you’re out in the open and move among trees but, still, it’s a small sacrifice to make compared to a screen full of visible “jaggies”. I believe TAA is the best option when playing Fallout 4 despite the ghosting and the increased image blurriness.

In Doom, it may or may not be the best option. The lack of vegetation and trees and the relatively simple geometry mean the game doesn’t look bad at all without any temporal component, if you don’t mind a few jaggies here and there, but I still recommend enabling some form of TAA if you can. If you find the image too blurry, try to compensate with the in-game sharpening setting. Ghosting is barely noticeable. It happens when enemies move very quickly right in front of the camera and, very visibly, when Samuel Hayden appears on screen, around his whole body and, in particular, his fingers and legs. Disabling all temporal components is the only way to see Hayden as he was intended to look. The ghosting effect around him is so visible I thought it was done on purpose as a weird artistic effect the first time I saw it. Fortunately, both ghosting situations happen once in a blue moon, which is why I still recommend TAA for this game.

Batman: Arkham Knight

I also played Arkham Knight and it’s a good game. Plenty of challenges for a completionist, but it’s a bit repetitive. Like I said when I reviewed Arkham Origins, I still think Asylum is the best in the series. Arkham Knight features graphical improvements, a darker plot that brings the game closer to Nolan’s film trilogy, and too many gadgets. The number of key combinations and types of enemies reaches an overwhelming level, and you need to add the Batmobile on top of that. Don’t get me wrong, it’s a solid 8, and its performance problems at launch are mostly moot now thanks to software changes and the arrival of the GeForce 10xx series, which handles the game reasonably well. However, it’s not a game I see myself replaying in the future.

Arkham Knight’s urban environments do not need temporal anti-aliasing that much, which is good because the game doesn’t have that option and still manages to look reasonably good. The game suffers from small performance problems when you enable every graphics option and ride in the Batmobile or glide high above the city, but frame rate drops are not severe.

Rise of the Tomb Raider

Finally, a few days ago I finished playing Rise of the Tomb Raider. I liked the game the same way I liked the 2013 reboot, with a score of 8 or 8.5, but other reviewers have been more positive. Some elements have been improved and others have been made worse. The plot, atmosphere and characters are better. Keyboard and mouse controls have received more attention and many game mechanics have been expanded without making them too complicated. On the other hand, completing the game to 100% is now harder and not more fun. The series is starting to join the trend of adding more collectibles and things to find just to make the player spend more time playing, without actually making it more fun or rewarding. Still, I completed the game to 100%.

With my GTX 1070 on a 1080p60 monitor the game runs perfectly fine with everything turned up to 11, except when it does not. There are a few places in the game where the frame rate tanks, sometimes for no obvious reason. One of the most noticeable places I remember was an underground tunnel with relatively simple geometry where I was able to make it drop to 42 FPS.

The way to fix that is very simple. The first thing to go is VXAO, an ambient occlusion technique. Dropping that to HBAO+ or simply “On” gives back a good amount of frames. In general, ambient occlusion is something that improves the overall look of the image. It’s heavily promoted by NVIDIA but, in my opinion, the difference between its basic forms and the most expensive ones doesn’t have a dramatic impact on image quality. If you have the power to run a game maxed out, be my guest. If you’re trying to find a compromise, ambient occlusion should be one of the first things to go.

To get back more frames you can also tune shadow quality down a notch or two. In Rise of the Tomb Raider, the difference between “very high” and “high” is noticeable but not huge, and going down to “high” avoided many frame rate drops for me.

As opposed to ambient occlusion, I personally find shadow quality to have a very perceptible visual impact in most games. Reducing shadow quality normally means shadows look more blocky and weird and, if they move, they tend to do it in leaps instead of smoothly. I distinctly remember playing Max Payne 3 some years ago (that uninterruptible movie with some interactive segments interleaved, if you recall) and it featured a closeup of Max Payne in the main game menus with visible jaggy shadows across his cheek that drove me nuts (why would game developers choose to have that shadow there in such a controlled situation?).

Contrary to both previous games, Rise of the Tomb Raider features lots of vegetation, trees, small details and shiny surfaces, but it doesn’t offer a temporal anti-aliasing option. The result is a picture full of jaggies at times that, no doubt, will bother a few gamers.

In general, recent games tend to include more geometry, vegetation and lighting details that exacerbate every aliasing problem. At the same time, you have more detailed textures and you don’t want to blur them, especially when you’re looking at something close up. This is why, in many recent games, FXAA, the poster child of cheap and effective post-processing anti-aliasing, is a bad option. It’s very cheap, but it doesn’t do a very good job and it blurs the image a lot. If you’re playing, let’s say, Fallout 3 (a 2008 title, time flies!), FXAA is an excellent choice. The relatively simple geometry, compared to today’s games, makes FXAA effective at removing jaggies, and the lack of detail means textures don’t look blurrier than usual with it.

Moving forward in time, Crysis 3 was one of the first mainstream games to feature SMAA prominently, another form of post-processing anti-aliasing that is very fast on modern cards and improved the situation a bit. It attempts to fix jaggies like FXAA does but without blurring textures. It’s still very cheap, though it does cost a bit more than FXAA and is not as easily injected from outside the game (FXAA can generally be activated from NVIDIA’s control panel and used to improve the visual aspect of many old games that do not support it directly). SMAA did a good job for many years and was my preferred choice for a long time. I still choose it depending on the game.

These days, omitting a TAA option can be a significant mistake for certain titles like Rise of the Tomb Raider. In contrast, I’ve just started playing Mankind Divided, which offers a TAA option, and graphically it’s quite impressive (except for the way people look, a bit DeusEx-y if you know what I mean). Its developers did make incomprehensible decisions when choosing settings for the predefined quality levels. In my opinion, you should fine-tune most parameters by hand in the game, reading some online guides and experimenting. They also included a very visible option to turn MSAA on without any warnings about performance.

In any case, TAA in Mankind Divided is complemented by a sharpening filter that prevents excessive blurring if you’re inclined to activate it. Granted, the game doesn’t feature much vegetation but it looks remarkably good in any case. Both features complement each other rather well and give you a sharp image mostly free of notable jaggies.

The cost of TAA varies with the specific implementation. In general, it costs more than FXAA and SMAA but is very far from costing as much as other less-effective techniques like MSAA.

Roadmap for libbcrypt

Posted on . Updated on .

libbcrypt (previously just called “bcrypt”) is a small project I created some time ago when I was looking for a C or C++ library implementing the bcrypt password hashing algorithm and found no obvious choices that met my criteria. The most commonly referenced implementation was the one provided by Solar Designer, but it lacks good documentation and you still need to dive into the source code to find out which headers to include and which functions to use from the whole package. So I did just that and built a small, better documented wrapper around it, just for fun. The goal was to lower the barrier of entry for anyone wanting to use bcrypt in C/C++. Python, Ruby, Go or JavaScript have other implementations available.

What I did was simple: I wrote a well-documented header file with a minimal set of functions, inspired by one of the Python implementations, and a single implementation file that, combined with Solar Designer’s code, generates a static library you can link into your programs. Later, I fixed some stupid coding mistakes I had made despite its small size, and forgot about it.

The project, as of the time I’m writing this, has 95 stars and 35 forks on GitHub (not many, but more than others), and not long ago I realized it’s one of the first Google search results when trying to find a “bcrypt library”. So it seems my small experiment has been promoted and I have to answer to a social contract!

In the last couple of weeks, I’ve been working a few minutes almost every day polishing the library, improving its documentation, reading other people’s code and documentation and adding some functionality. You can take a look at the results in the project’s future branch. Summary of changes from master:

  • The main implementation has been changed from a static to a dynamic library so it’s easier to update the implementation if a problem is found, without recompiling everything. I use -fvisibility=hidden to hide internal symbols and speed up link time. A static library is also provided, just in case you need it.

  • The function to generate salts has been changed from using /dev/urandom to using getentropy. That means the library will probably only compile under a modern Linux and maybe OpenBSD, and this is the main reason these changes are still not merged into the master branch. With no disrespect to the BSDs, let’s be practical: Linux is the most widely used Unix-like system for servers, and getentropy, introduced by OpenBSD, is just better than /dev/urandom because it’s simpler and safer to use, can be used in a chroot, etc. With Linux now implementing it, there are not many reasons to use anything else.

  • I have added a manpage documenting everything better and emphasizing the implementation’s 72-byte limit on password length.

  • I have added functions and documentation explaining the rationale for pre-hashing passwords before sending them to bcrypt, which works around the previous limitation in part.

  • There’s now an install target.

  • I have added a pkg-config file to ease finding out compilation and linkage flags for the installed library (see the usage sketch after this list).

  • Tests have been separated into their own file and made a bit more practical.
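
As a quick usage sketch for the pkg-config file mentioned above (the bcrypt module name here is an assumption; check the installed .pc file for the actual name):

    # Compile and link a program against the installed library using pkg-config.
    cc $(pkg-config --cflags bcrypt) -o demo demo.c $(pkg-config --libs bcrypt)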

I’ll merge these changes to master after things calm down for a while, after I process any feedback I receive (if any), and after Red Hat, Debian and Ubuntu have a stable or long-term support version with glibc >= 2.25, which introduced getentropy. The next Ubuntu LTS will have it, the next Debian stable will have it and RHEL 8 will probably have it.

I may also try to package the library for Fedora, which eventually should make it land in Red Hat. I’m not a Fedora packager yet and this may be a good use case to try to become one and learn about the process.

If anyone’s interested in the project, please take a look at the future branch, comment here, open issues in GitHub, mail me, etc. Any feedback is appreciated because this was just a small experiment and I’m not a user of my own library. Also, I don’t recall ever publishing a shared library before. If I’m doing something wrong and you have experience with that, feedback is appreciated too.

What about Argon2?

Argon2 is another password hashing algorithm that won the Password Hashing Competition in 2015. The competition tried to find an algorithm better than bcrypt and scrypt. Its official implementation is released under the terms of the CC0 license, it works under Linux and many other platforms, and it builds both a shared and a static library featuring high-level and low-level functions. In other words, Argon2 already has a pretty good official, easy-to-use API and implementation.

In my opinion, if you want to use Argon2, you should be using its official library or libsodium. The latter has packages for most distributions and systems. Argon2 is the best option if you want to move away from bcrypt, but there is no need to do it as of the time I’m writing this. The benefits are mostly mathematical and theoretical. Argon2 is much better, but bcrypt is still very secure.

Argon2 has three parameters allowing you to control the amount of time, memory and threads used to hash the password, as well as a command-line tool to experiment and find out the best values for those parameters in your use case. The libsodium documentation also has a guide to help you choose the right values.

The official library only contains the password hashing functions and leaves out details like generating a random salt or pre-hashing the password to harmonize password processing time. A few quick tests done with the argon2 command line tool from the official implementation revealed a small, almost insignificant password processing time difference between a small 5-byte password and a large 1MB one. I conclude Argon2 doesn’t need pre-hashing but I don’t know the underlying details. You can also choose the size of the generated password hash.
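
For reference, those quick tests can be reproduced with invocations along these lines (the parameter values are just examples; check argon2 -h for the exact flags in your version):

    # Hash a password read from standard input with example cost parameters:
    # 2 iterations, 2^16 KiB of memory, 4 threads, 32-byte hash.
    echo -n "password" | argon2 somesalt -t 2 -m 16 -p 4 -l 32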

If you’re using libsodium, it includes several functions to generate high quality random bytes, needed for salts. A plain call to getentropy in Linux should also be trivial.

What about scrypt?

If you’re currently using scrypt you already have an implementation, and if you’re not using it yet but you’re considering using it in the future, you could skip it and jump straight to Argon2 (see the previous section). It’s a better option, in my opinion.

If you insist on using scrypt, there’s already a libscrypt project that’s packaged for Debian, Fedora, Arch Linux and others. It takes most of its code from the official scrypt repository to create a library separated from the official command-line tool.

The library also covers obtaining random data to generate salts (it uses /dev/urandom), but not password pre-hashing. As with Argon2, a small test with a custom command-line tool revealed a very small, almost insignificant password processing time difference between a small and a very large password, so I conclude again that no password pre-hashing is needed for libscrypt.