Game Review: Batman: Arkham Origins

Posted on .

Batman: Arkham Origins is a game developed by WB Games Montreal while Rocksteady, creators of Arkham Asylum and Arkham City, worked on the final installment in the Arkham series, Arkham Knight, due to be released later this year. As Arkham Knight was going to take a while to develop, publisher WB Games decided to create this game to fill the gap. This will be a very brief review.

Arkham Origins received lower scores than its two predecessors, with critics complaining that it added almost nothing to the series. I basically agree. Origins improves the graphics a bit, adds some new effects and plays essentially like Arkham City. To me, that’s not especially good. The best game in the series so far, in my humble opinion, is the original Arkham Asylum (Metacritic disagrees with me). It took everyone by surprise by perfectly capturing the classic essence of Batman as portrayed in the comics, while Christopher Nolan was casting the character in a very different light with his movies at the time (Arkham Asylum was released a year after The Dark Knight).

Asylum provided focused gameplay with a great story, a few side missions and characters, and felt like a totally cohesive experience. City, on the other hand, made a mistake that’s pervasive in the industry nowadays: if you feel the game is going to be good, add more content. More side missions, collectibles and game modes, so that players who like the feel of the game can keep playing while thinking they are making progress in some sense, like collecting stuff or improving their score in a minigame, driving them closer to some arbitrary 100% completion mark. Older gamers like me would simply prefer to replay the game, probably at a higher difficulty setting.

In this sense, and even if it has its own story, Origins is simply more content. It has its low points, like Joker’s voice sounding different and a bit worse after Mark Hamill retired from the character, and its bright moments, like the way it explains the start of the relationship between Joker and Batman, or Anarky’s speech criticizing Batman, if you stop to listen to it.

I’d give the game a score of around 8, while giving City a 9 and Asylum a 9.5, more or less. If it’s been some time since you played a Batman game and you would prefer some new content instead of replaying, grab a copy of Origins and you won’t be disappointed. If, instead, you want new gameplay and really improved graphics (at the cost of needing a much beefier computer or a new-generation console), wait for Arkham Knight.

Game Review: Metro: Last Light Redux

Posted on .

The Redux editions of both Metro 2033 and Metro: Last Light bring graphics and performance improvements, along with minor gameplay changes and enhanced modes. Basically, both games can now be played in Survival or Spartan mode. Survival is, shall we say, Metro 2033 mode: ammo is scarce and the survival aspect dominates. Spartan is Metro: Last Light mode, with more ammo and items available and an emphasis on the combat scenes. Once you’ve chosen the mode, you can choose the difficulty.

I didn’t know Metro: Last Light put so much emphasis on combat, so I chose Survival mode, to be played at Ranger Hardcore difficulty the same way I enjoyed Metro 2033, and I believe that was a bit of a mistake. Metro: Last Light is clearly meant to be played in Spartan mode.

Gameplay

From my point of view, Metro: Last Light is a couple of steps below Metro 2033, even if both received similar Metacritic scores. The story continues from the “bad” ending of Metro 2033. It’s a bit disappointing, but the main problem is that, as a sequel, the story is a bit weaker and the game has lost part of the freshness Metro 2033 had. It would benefit from deeper character development. The game universe has also changed a bit and, as I mentioned, the combat aspect receives more emphasis than the survival aspect, which shifts the mood of the game toward something less original and more mainstream.

The Redux editions of Metro, including Metro: Last Light, make several mistakes in Ranger Hardcore mode which are hard to explain. Ranger Hardcore is supposed to be realistic in some respects, like the absence of a HUD, but this is taken too far or, shall I say, implemented incorrectly. People have complained about the lack of prompts in quick time events, or about not being able to tell which grenade-type item is currently selected, the only clue being a sound effect. I don’t think that adds anything to realism, only to player confusion. In a realistic scenario, you would clearly know whether what you have in your hand is a throwing knife or a grenade. A HUD icon should pop up temporarily when changing the item, even if it doesn’t tell you how many items are left in the inventory and checking that still requires bringing up the diary, as with other ammo.

Compared to Metro 2033, Metro: Last Light also introduces mini-bosses or boss-like fights at the end of some levels, and some of those are harder than expected, in my humble opinion, due to gameplay issues. For example, there is a fight in a swamp where the miniboss appears several times throughout the level before the final appearance that starts the boss fight. Only in that final fight does the boss take damage, as far as I know. In Survival mode, ammo is scarce and the player has no clue they are wasting ammo by shooting the boss before the final encounter. Any ammo wasted on those earlier appearances is ammo missing from the real fight. That part of the game could have been designed better. I was getting frustrated with the fight until a web search revealed this to me; the recommended strategy is to restart the level and rush through it, picking everything up and arriving at the boss fight with plenty of filters and ammo. This is what I had to do in the end.

Most gameplay elements are kept the same, though. There are some new weapons, but you can still customize them and barter at outposts, and all the classic Metro elements are still there, like ammo used as currency.

Technical

Metro games have always given me the impression that, while their graphics are good, there are many other games on the market that look better using fewer resources. Yes, the lighting, ambience, texture and sound work are very good and provide a unique atmosphere and universe. Human settlements in the game are still recreated with lots of everyday-life details, and convey the sadness of the postapocalyptic world those people have to live in.

The lighting in particular, and the way it combines with the texture work, matches the game perfectly. If somebody ever makes a S.T.A.L.K.E.R. game again, it would benefit from indoor graphics like the ones in Metro. But one is left to wonder whether 90% of that look and feel could be achieved with half the hardware resources. The new versions of the engine and the Redux editions fix this problem, but only partly. They’re still too demanding for what they provide, in my humble opinion.

Scores

  • Gameplay: 7.5.

  • Technical: 7.5.

  • Overall: 7.5.

My recommendation is to pick up the Metro Redux bundle on Steam during a sale, and to play Metro: Last Light in Spartan mode instead of Survival like I did. You’ll have fun if you like FPS games.

Busy end of the year

Posted on .

It’s been a long time since my last post. I’ve been busy working, enjoying holidays and gaming. I have to make the most of my time with the latter, because my family is about to grow and I’m sure I won’t be able to game as I do now for a long time.

It’s interesting that 2014 appears to have been, statistically, one of the worst years for gaming. Thanks to my new computer I’ve been replaying some old games in their full graphical glory, and playing some relatively recent games that I will be reviewing in the following days, like Metro: Last Light, Tomb Raider and Batman: Arkham Origins. Do you see a pattern? All of them were released in 2013. Maybe the claim about the worst year in gaming is true. I still have Dead Space 3 and Crysis 3 pending.

I also want to play Wolfenstein: The New Order as soon as its price drops a bit, along with a few other indie games that have been receiving excellent reviews, like This War of Mine.

See you soon.

Sloppy APIs and getrandom

Posted on . Updated on .

Not long ago, the getrandom system call was introduced to Linux with a patch from Ted Ts'o. It was included for the first time in kernel 3.17, released a few days ago. It attempts to provide a superset of the functionality provided by the getentropy system call in OpenBSD. The purpose is to have a system call that lets you obtain high-quality random data from the kernel without opening /dev/urandom. This helps when the process is inside a chroot and protects the process from file descriptor exhaustion attacks.

If you’re a C programmer and as paranoid as I am about clean APIs and integer types, you may have noticed getrandom is a bit weird in this regard, and that getentropy is much cleaner. getrandom takes a buffer pointer and the number of requested bytes as a size_t argument. size_t is unsigned and is supposed to be able to represent the size of any object in the program. Specifically, size_t is usually an unsigned 64-bit integer on a typical 64-bit Linux system.

However, the return type of getrandom is a plain int (32-bit, signed) that may indicate the number of bytes that were actually read. Out of context, I think such an interface is a bit sloppy: a call to getrandom could result in a short read simply because the return type cannot represent the full result. Short reads are possible in other situations with getrandom too, but they are mostly mentioned in the context of passing a flag to read from /dev/random instead of /dev/urandom.
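For reference, these are the two interfaces side by side, roughly as they were documented when this was written (the Linux prototype is the one from the patch discussion; glibc did not ship a wrapper yet, so treat it as a sketch rather than a header excerpt):

/* Linux proposal: the request size is a size_t, but the byte count comes back as an int. */
int getrandom(void *buf, size_t buflen, unsigned int flags);

/* OpenBSD: the return value only distinguishes success (0) from failure (-1). */
int getentropy(void *buf, size_t buflen);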

In context, it does make sense. If you request a very large number of random bytes you’re going to wait a long time while they are computed. This may catch you by surprise. So it probably doesn’t make sense to request a large number of random bytes in the first place. In fact, if you check Ted Ts'o’s patch, you’ll notice getrandom returns an error if the requested size is over 256 bytes.

The interface in OpenBSD solves all these problems right away. First off, the manual page mentions you shouldn’t be using it directly, and provides references to better APIs. In any case, an error will be returned if you request more than 256 bytes (minor complaint: provide a named constant for this, just in case the value changes in the future). This is mentioned explicitly in the manual page. Otherwise, there will be no short reads, and the integer return value is only used to signal errors, not to tell how many bytes were actually read. Super clean. If you want more than 256 random bytes, you’re responsible for splitting the request over a loop, but generally you won’t need to do so. 256 bytes are "more than enough for everybody".
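As a minimal usage sketch of that OpenBSD-style call (assuming a system that provides getentropy; the key buffer is just an example), note how there is nothing to check besides success or failure:

#include <err.h>
#include <unistd.h>

int main(void)
{
        unsigned char key[32];

        /* Either the whole buffer was filled or the call failed; no short reads. */
        if (getentropy(key, sizeof(key)) == -1)
                err(1, "getentropy");

        return 0;
}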

Contrast that with getrandom. Even if you request only 4 bytes, nothing in the API tells you there won’t be a short read. In practice, the implementation will not result in short reads for requests below 256 bytes with the default behavior, sure, but the API leaves that open. So you end up having to code something like the following if you want a direct equivalent to getentropy that’s guaranteed to work well (note: coded on the go and not tested).

#include <errno.h>
#include <sys/random.h> /* getrandom() wrapper; older systems need syscall(SYS_getrandom, ...) */
#include <sys/types.h>

int getentropy(void *buf, size_t nbytes)
{
        if (nbytes > 256) {
                errno = EIO;
                return -1;
        }

        ssize_t ret;
        size_t got = 0;

        while (got < nbytes) {
                /* Loop to cover short reads; cast because arithmetic on void * is not standard C. */
                ret = getrandom((char *)buf + got, nbytes - got, 0);
                if (ret < 0)
                        return -1;
                got += (size_t)ret;
        }

        return 0;
}

Specifically, the simple getentropy equivalent suggested by Ted Ts'o will work in practice, but it’s not solid according to the proposed API.

I think it’s unfortunate that a system call may fail, partly, just because of its API data types. The surprising part is that this happens with some common system calls too, and the historical track record is not especially good. For example, take a detailed look at the standard read system call. It takes a size_t argument to indicate how much to read, yet returns the amount that was really read as a ssize_t, which is signed (hence the leading "s") and can only represent numbers half as big. This allows returning -1 to signal errors, but then you have to read further to notice that -1 will be returned and errno set to EINVAL if the requested size is larger than SSIZE_MAX.
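A minimal sketch of the kind of defensive wrapper this pushes portable code towards (safe_read and its capping policy are mine, purely for illustration):

#include <limits.h>
#include <unistd.h>

/* Cap the request so the result always fits in the ssize_t return value,
 * staying clear of the EINVAL/implementation-defined territory beyond SSIZE_MAX. */
ssize_t safe_read(int fd, void *buf, size_t nbytes)
{
        if (nbytes > SSIZE_MAX)
                nbytes = SSIZE_MAX;

        return read(fd, buf, nbytes);
}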

The standard write system call has a similar problem. However, not every system’s manpages mention the SSIZE_MAX limitation. For example, on my Linux system the manpage for read mentions it, but the manpage for write does not. write may not have that limitation (I haven’t checked), but if you’re coding portably, the manpage won’t help you detect the problem.

In my very humble opinion, having the requested size as a size_t is very practical because it allows the following kind of code to work without casts.

struct foo a;
...
ssize_t ret = write(fd, &a, sizeof(a));

But would it really hurt to have one more argument, separating error signaling from the number of bytes written in a short write?

int write(int fd, const void *buf, size_t nbytes, size_t *wbytes);
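As a sketch of how that hypothetical interface could be layered on top of the real call (write4 is an invented name, and it obviously cannot escape write’s own internal limits; it only separates error signaling from the byte count):

#include <unistd.h>

/* Hypothetical variant: the return value only signals success or failure,
 * and the number of bytes written travels through an out parameter. */
int write4(int fd, const void *buf, size_t nbytes, size_t *wbytes)
{
        ssize_t ret = write(fd, buf, nbytes);

        if (ret < 0) {
                *wbytes = 0;
                return -1;
        }

        *wbytes = (size_t)ret;
        return 0;
}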

In systems and programming languages with exceptions this is not a problem. Real errors are signaled by raising exceptions and the API is not polluted; write could return size_t just like its input argument (note: no intention to start a flame war about exceptions vs. error codes). In the Go programming language, functions often return two values: the normal return value and an error.

End of the rant.

Typing this from a new computer

Posted on .

Last week I upgraded my computer, changing everything except the hard drives. I’m typing this text from the new system already, having migrated my Linux installation to it and reinstalled Windows.

I always thought my previous system ran close to perfection, and the only reason for the upgrade was gaming. I wanted a new graphics card and, with it, I didn’t want any other part of the system to be a performance bottleneck, so I ended up upgrading almost everything. I’m now using an Intel Core i5-4690S, 8 GiB of RAM and an EVGA GTX 760 Superclocked ACX. The performance is just amazing, and the price very fair.

As I mentioned, I also migrated my Linux installation to this new computer. By migrating I mean I continue running the same system without reinstalling, preserving the state of the OS and my personal data. It’s the third time I’ve done such an operation, and it’s been at least ten or eleven years since I installed this system. I’ve been running the same rolling-release Slackware Linux installation since then, across several computers.

I’ve never fully documented my migration procedure. The following notes serve mainly as a reference for myself in the future.

Kernel preparation

I start on the old system by first removing all the custom kernel modules I have installed, which these days means mostly the binary NVIDIA driver. Then I switch from a modular kernel with an initial RAM disk to a huge, everything-included kernel able to boot any system, including the new one, for which I don’t yet know which modules are essential to boot.

In Slackware there’s a specific package for that, called kernel-huge. Just in case it’s useful, the kernel config and image file are easily available. I then boot my old system with this kernel and without an initial RAM disk to check that everything is fine.

Data transferring

I then transfer all the data to the new computer. In this case that meant just plugging the old hard drives into the new computer, but it depends on the situation. The previous time, I used an external USB hard drive to copy data from a laptop to my now-old desktop computer, which was booted from a Live CD or Live USB. If both systems have Gigabit Ethernet ports, a crossover cable can be a fast solution too.

In this step you may have decided to change the partitioning scheme for the new system, so a bit of data juggling and some temporary mounts may be needed. Once the new system has the data, 80% of the work is done and only the other 80% is left.

Finishing touches and booting

Time to try to boot the new system. For that, I usually chroot into the new system from a Live CD or Live USB and start by configuring the bootloader. I always install GRUB (these days v2) in the MBR of the appropriate hard drive, the one the BIOS will try to boot from. This will probably change with EFI, but I still run my system in legacy BIOS mode (take into account that my gaming OS is Windows 7).

The hard drive order as detected by the BIOS or GRUB may not match the order seen by the kernel. This happens on my new computer: hd0 and hd1, as detected by GRUB, correspond to /dev/sdb and /dev/sda respectively. I adjust the GRUB config file accordingly.
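As an illustration only (the device names follow the mismatch described above; the entry itself is a hypothetical excerpt, not my literal config), the result can end up looking like this:

# GRUB's first disk (hd0) is the kernel's /dev/sdb on this machine,
# so the boot partition and the root= parameter use different names.
menuentry "Slackware (huge kernel, no initrd)" {
        set root=(hd0,1)
        linux /boot/vmlinuz-huge root=/dev/sdb1 ro
}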

After installing GRUB, I pay attention to these other files and tweak everything until the system boots correctly.

  • /etc/fstab, paying attention to the new partitioning scheme, if any, and the hard drive names (see the sketch after this list).

  • Remove autogenerated persistent udev configuration files. In Slackware they sit in /etc/udev/rules.d.

  • Tweak or disable personal commands run at boot time from rc.local and other scripts (no systemd yet in Slackware). Some of those may be related to specific kernel modules or hardware tuning.

  • Review the /etc/smartd.conf configuration file for possible changes or disable SMART until the new system is ready.
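Going back to the first item, a hypothetical /etc/fstab excerpt for a layout like the one above (device names follow the GRUB example; the actual partitions and filesystems will differ):

# The old drive shows up as /dev/sdb in the new computer, so every
# entry that referred to /dev/sda before has to be revised.
/dev/sdb1        /                ext4        defaults        1   1
/dev/sdb2        /home            ext4        defaults        1   2
/dev/sda1        /mnt/windows     ntfs-3g     ro,umask=022    0   0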

Post-booting

At this point I’m happy because I should be running my system on the new computer.

The next step is using mkinitrd-command-generator.sh from Slackware team member AlienBOB to find out which modules I need to boot the computer, and then reverting to a modular kernel with an initial RAM disk.

Finally, I re-enable or review everything that was temporarily disabled earlier in the boot scripts, as well as SMART. I also save a fresh copy of the new ALSA settings with “alsactl store”.

I then reinstall the NVIDIA driver and other binary kernel modules, reconfigure X11 as needed and tweak any system-specific scripts and tools I keep in /usr/local/sbin and similar places, checking whether anything needs to change or whether any device names have changed.

And that’s it. Typically it’s an evening’s work. Reinstalling Windows, on the other hand, and applying around 200 security updates, takes ages, but at least it’s mostly unattended.