Ryan Finnie

Retro controllers for the RetroPie

RetroPie computer with 8Bitdo SN30 Pro

This post was originally part of a longer planned article about building a RetroPie game system. That will be coming later.

Also, let me stress that controller design is a very personal preference, and has been known to cause intense discussions and the occasional bar fight. The following are my own opinions. Put down your pitchforks.

My favorite controller is the Xbox 360 controller. I consider its feel to be perfect, with one notable exception which I’ll get into soon. The analog sticks feel great, and are in the perfect locations. The face buttons have just the right amount of travel. The shoulder buttons are just long enough for my personal play style (using just the index finger to control both the trigger and shoulder buttons, with the finger pad controlling the trigger and the fleshy part near the base of the finger bumping the edge of the shoulder button).

Many people prefer the Xbox One controller, but I think it’s slightly inferior in many ways: the trigger and face buttons are a tiny bit too stiff, the shoulder buttons are slightly shorter, and the analog sticks have a slight texture I don’t prefer. Don’t get me wrong, overall I still rate the One controller as a very close #2, but I still prefer the 360 controller.

Note that I’ve been avoiding talking about the D-pad, which was the exception I mentioned previously: the 360’s D-pad is terrible. I’m lucky if the direction I want is the direction I get. The One’s D-pad is light years better. But it was rarely an issue in the 360 generation since the D-pad was often relegated to option selection (for example, cycling through weapons).

So why not simply use an Xbox controller on the RetroPie? The Xbox controllers are what I call “analog-first”; that is to say, they work best when playing games designed for analog movement (so, modern). In my hands, the most comfortable position is for the thumbs to be resting close to the rest of the hand. On Xbox controllers, this means left thumb rests on the left analog stick, right thumb rests between the face buttons and the right analog stick. Perfect for modern games.

But the RetroPie isn’t a system for modern games. In retro games, the D-pad is the main attraction. I need a “digital-first” controller. So what options are available for a RetroPie system?

8Bitdo SN30 Pro / SF30 Pro

The SN30 Pro (or SF30 Pro; the only difference is Super Famicom-style cosmetics) is essentially a Bluetooth SNES controller, with the addition of dual analog sticks and trigger buttons. Beyond those additions, 8Bitdo has nailed the feel of an authentic SNES controller’s buttons. If you grew up with the SNES, the SN30 Pro will feel exactly the same.

In my opinion, the Nintendo cross-style D-pad is the best for digital movement, so there are no compromises here. And since the D-pad is the leftmost part of the SN30 Pro, it fits the “digital-first” desire. But since it does include dual analog sticks and trigger buttons, it can be used for PS1 games or similar.

RetroPie Bluetooth setup is relatively easy, once you put the SN30 Pro in Switch compatibility mode (hold Start+Y before pairing). Input lag is present (as it will be with any Bluetooth controller) but negligible.
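RetroPie’s menus handle the pairing, but if you’d rather do it by hand, the generic Linux route looks roughly like this (a sketch; the MAC address is a placeholder for whatever appears during the scan):

bluetoothctl
[bluetooth]# scan on
[bluetooth]# pair XX:XX:XX:XX:XX:XX
[bluetooth]# trust XX:XX:XX:XX:XX:XX
[bluetooth]# connect XX:XX:XX:XX:XX:XX

The trust step is what lets the controller reconnect on its own the next time you turn it on.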

There are only a few downsides, one of which is that the shoulder buttons are hard to locate while gaming. See, the SN30 Pro is slightly thicker than an SNES controller, but is still rather thin. Fitting both sets of shoulder and trigger buttons means the shoulder buttons are thinner than normal SNES shoulder buttons, and are harder to hit. I’ve solved this by going into the libretro controls for each core which uses only shoulder buttons (SNES, GBA) and telling it to treat the trigger buttons the same as the shoulder buttons; a sketch of the resulting remap is below. Once I did that, I found myself subconsciously using the trigger buttons when needed, and they work fine.
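Under the hood, that choice ends up in a RetroArch core remap file, along these lines (a sketch based on RetroArch’s remap format; the path and core name are illustrative, and RetroPad IDs 10/11 are L/R):

# e.g. /opt/retropie/configs/all/retroarch/config/remaps/<core name>/<core name>.rmp
input_player1_btn_l2 = "10"
input_player1_btn_r2 = "11"

That is, L2/R2 presses are delivered to the core as L/R.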

(For this reason, I would recommend against the SN30 Pro for modern gaming, especially with games like Mirror’s Edge, which requires you to quickly and accurately move between the shoulder and trigger buttons.)

Another downside I’ve found is that while the controller is supposed to sleep after 15 minutes of input inactivity, it doesn’t appear to do so. I’m guessing something in the RetroPie menu system is continually polling the controller and keeping it awake. The solution is to remember to manually turn off the controller when you’re done (hold Start for about 5 seconds, until the status light turns off).

And finally, the exterior closely matches an original SNES controller, for good and for bad. That is, it’s a little small for adult hands, and ergonomics have progressed since the 90s, most notably wing grips for a better hold. Playing for hours on end can cramp my hands a bit.

The 8Bitdo SN30 Pro+ is due to be released during the 2018 holidays, and looks to solve the shoulder/trigger issues and add wing grips, while otherwise seemingly keeping all the other functional elements identical. However, it strays just a tiny bit from the SNES clone aesthetic of the original SN30 / SN30 Pro. I have high hopes, and will be buying one when it’s released.

Original SNES / SFC controller

The original SNES controller is widely considered to be the best controller of its day. Ergonomic (for the period), responsive, well built, and well laid out. The issue is it’s digital-only, so you’re limited in game choice for emulated systems like the PS1.

You will need a USB adapter for the SNES controller, but those are readily available online. I’ve got a pair of 15-year-old Lik-Sang (rest in peace) adapters, which oddly aren’t recognized by EmulationStation, despite appearing as standard USB gamepads on everything else I’ve used them on over the years. I haven’t actually looked into why they’re not being recognized, but these specific adapters haven’t been made in well over a decade, and plenty of modern equivalents exist.

Edit: Turns out the Lik-Sang Super SmartJoy consumes too much current for the Raspberry Pi. Daisy chaining it through a powered USB hub works fine.
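Incidentally, you can see how much current a USB device claims it will draw (which, as the Super SmartJoy demonstrates, is not always what it actually draws) with lsusb:

# list each device along with its declared maximum power draw
lsusb -v 2>/dev/null | grep -E '^Bus|MaxPower'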

PlayStation DualShock 4

Later I ordered a Mayflash Magic-NS and a DualShock 4 controller. The Magic-NS is a USB adapter which accepts nearly any mainstream wired or wireless controller (DualShock 3/4, Switch Pro, Xbox 360/One) and allows for play on the Switch or PC (using DInput or XInput). Pairing and usage work fine, and I didn’t perceive any additional input lag.

I’m not sure about the DualShock 4. It fits my “digital-first” layout since the D-pad is on the top left. I think the Nintendo-style D-pad is much better, but the Sony-style D-pad isn’t bad. It’s more comfortable for long play than the SN30 Pro due to the modern wing grips. And it works well for nearly all retro games, with one major exception: Super Metroid.

First, the option buttons (share/options) are flush with/recessed into the controller body, making them harder to press unless you really mean it. This is great for most games, since their retro equivalents (select/start) are rarely used during gameplay. But by default, Super Metroid uses the select button heavily for weapon switching. (Yes, I know you can remap buttons within the game, but I’ve got over 20 years of muscle memory built up.)

Also – and this is something I didn’t even notice until trying to play Super Metroid on the DualShock 4 – the SNES’s (and SN30 Pro’s) face buttons are not equidistant. The space between the top and bottom buttons is less than the space between the left and right buttons. The result is that it’s much easier to quickly switch between the top, bottom and right buttons. Super Metroid takes advantage of this, with bottom being run, right being jump and top being fire. (The game designers were likely well aware of this, since they relegated weapon cancel to the left button, which is harder to hit if you’re centering your thumb around the other three.)

I busted out the digital calipers and measured the distances between horizontal and vertical face buttons on a number of controllers:

  • SNES / SN30: 17mm horizontal, 11mm vertical
  • Switch Pro: 13mm horizontal, 11mm vertical
  • Xbox One: 11mm horizontal / vertical
  • Xbox 360 / DualShock 4: 13mm horizontal / vertical
  • DualShock 3: 15mm horizontal, 13mm vertical

This explains why I was having a harder time with Super Metroid on the DualShock 4: it’s harder to quickly move between three face buttons due to the equidistant layout and longer space between buttons. Comfortable for two buttons at a time, but not three.

Super Metroid is a relative outlier here, since most games tend to rely on only two face buttons at once, but considering it’s one of my top five games of all time, it’s a definite concern.

Other controllers

As mentioned, the Magic-NS can adapt nearly any first-party controller, so there are no limits to personal preference. If you prefer the D-pad to be on the bottom left even for retro games, you could use an Xbox 360, Xbox One or Switch Pro controller. I’ve got a DualShock 3, which has easier-to-press select/start buttons than the DualShock 4, but I don’t like the feel of the wing grips on older DualShocks. (Also, I’m pretty sure mine is a counterfeit, but it’s hard to tell, since apparently Sony really lowered the build quality on later-model DualShock 3s.)

And of course nearly any third-party wired controller should work fine. I’ve got a horribly cheap-feeling Logitech F310 controller. I don’t even remember buying it, but it somehow showed up in my box of USB accessories. I would never consider using it for actual gameplay, but RetroPie supports it fine.

Linux on the Libre Computer Tritium ALL-H3-CC H5

Linux on ARM hardware has been a bit of a hobby of mine for years now. My first ARM Linux installation was on a Linksys NSLU2 NAS. It was horribly, unbelievably slow. But hey, it was fun.

I’ve got a Cubietruck in my closet. Two Utilites. More Raspberry Pis than I know what to do with. We’ve got a fair number of ARM systems at work. Up until about 5 years ago, they were mostly assorted vendor devel/eval boards, and we maintained an internal wiki page for dealing with them, named WhichArmsSuckTheMost. Most of those were killed in favor of Calxeda blades, which looked to be the future at the time. Sadly, Calxeda went out of business, but we still have a few clusters left. These days the new hotness is HP Moonshot, and the specs on those are humbling even by normal server standards. (8 cores, 64 GiB RAM, and a modest 250 GB SSD per compute cartridge. Multiply by 45 for every 2U chassis.)

Last year I found a Kickstarter for the Libre Computer Tritium, a series of Allwinner-based ARM boards in a Raspberry Pi 3-compatible form factor. I backed the 2 GiB RAM model, which was expected to be fulfilled in January. That target was missed since, well, it’s a Kickstarter project, and I eventually forgot about it, until it unexpectedly showed up in my mailbox Friday.

Libre Computer Tritium ALL-H3-CC H5

The hardware is slick and good quality, and indeed it fits in standard Raspberry Pi cases (though the chip layout isn’t identical, so cases which expect e.g. a heatsinked SoC chip to be in a certain place won’t be compatible). When it arrived, I started looking at what to do with it.

Here’s where the company sort of dropped the ball, in my opinion. They offer three models: a 512 MiB Allwinner H2+, a 1 GiB Allwinner H3 (the H2+ and H3 are effectively identical 32-bit SoCs), and a 2 GiB 64-bit Allwinner H5 (which is what I got). The H5 is effectively a completely different and incompatible SoC from the H2+/H3. Further confusing the matter, the model name for all three is “ALL-H3-CC”.

As of this writing, there is only a single Armbian image, and only for the H2+/H3. The comments on the Kickstarter page are understandably confused, with people trying to use that image on their H5 and getting no video or activity other than a red power LED. I did some digging and found the ALL-H3-CC H5 is very similar to the Orange Pi PC 2, which has a supported image. I tried it out, and it mostly works. HDMI video (and presumably audio), USB and serial all work, but Ethernet does not (though a random USB Ethernet adapter I added worked fine). I posted my findings in the Kickstarter comments section to help people at least verify their hardware is working until official images are produced.
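If you want to reproduce the test, it’s the usual dd-an-image-to-SD affair (the image filename below is a placeholder; use whatever the current Orange Pi PC 2 build is called):

# after downloading and extracting the Orange Pi PC 2 image
sudo dd if=Armbian_Orangepipc2.img of=/dev/mmcblk0 bs=4M status=progress conv=fsync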

Update: As of 2018-06-06, Armbian does not yet have a Tritium H5 image, but LoveRPi has produced Tritium H3/H5 images based on Armbian.

But I didn’t want to leave it at that. What follows is by no means a how-to; more like a stream of observations. It’s also specific to the H5 model, as that’s the only one I have.

Turns out U-Boot has everything needed for booting, and works beautifully. Support for the board was added just after v2018.05, so you’ll need to compile from git HEAD, using libretech_all_h3_cc_h5_defconfig. This page goes into detail about the whats and whys of booting on Allwinner A53-based SoCs (it’s specific to the Pine64 platform, but the concepts all translate to the H5).
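For reference, the whole dance looks roughly like this (a sketch, assuming an aarch64 cross toolchain; 64-bit sunxi targets also want an ARM Trusted Firmware bl31.bin, whose path below is a placeholder):

git clone git://git.denx.de/u-boot.git
cd u-boot
make CROSS_COMPILE=aarch64-linux-gnu- libretech_all_h3_cc_h5_defconfig
make CROSS_COMPILE=aarch64-linux-gnu- BL31=/path/to/bl31.bin
# the SoC's boot ROM looks for boot code 8 KiB into the SD card
sudo dd if=u-boot-sunxi-with-spl.bin of=/dev/mmcblk0 bs=1024 seek=8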

Traditionally, U-Boot works more like lilo did back in the day: write a boot script, which points to a kernel and initrd at specific locations and boots them. If you’re fancy, you might have the option to boot a backup kernel. But U-Boot has a recent party trick on supported platforms: UEFI booting. Let’s have U-Boot hand off to GRUB to do the actual boot management!

On my system, the first partition on the SD card starts 2048 512-byte sectors in, is 512 MiB long, MBR partition type “ef”, and vfat-formatted. 2048 for the first sector has been the default for a long time, but it’s important here since the SoC expects the boot code to be 8 KiB from the beginning of the disk, and that’s where you wrote it as part of the U-Boot compilation above. MBR is recommended since, by default, 8 KiB into a GPT is actively used for partition data. (I didn’t even know an EFI System Partition was possible on an MBR until today; I thought it was GPT-only.) 512 MiB is quite overkill, but I’d recommend at least 64 MiB.
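As a sketch, that layout can be reproduced with sfdisk (sizes are in 512-byte sectors, so 1048576 sectors is 512 MiB; the second line takes the remaining space for the root filesystem):

sudo sfdisk /dev/mmcblk0 <<EOF
label: dos
start=2048, size=1048576, type=ef
type=83
EOF
sudo mkfs.vfat /dev/mmcblk0p1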

Update: If you wish to use GPT, it is possible to make it so the area between 8 KiB and 1 MiB is not used by the GPT. This functionality does not appear to be in stock fdisk’s GPT handling, but you can do it in gdisk. Type o to create a new GPT, then x to go to the expert menu, then j to set the partition table beginning sector. By default, this is sector 2 (1024 bytes in), but you can change this to sector 2048 (1 MiB in). Then type m to exit the expert mode and continue partitioning as normal, and the created partitions will default to starting at sector 4096.
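Condensed (and with gdisk’s prompts paraphrased), that session looks something like:

$ sudo gdisk /dev/mmcblk0
Command (? for help): o          (create a new empty GPT)
Command (? for help): x          (expert menu)
Expert command (? for help): j   (move the partition table)
Partition table begins at sector: 2048
Expert command (? for help): m   (back to the main menu)
Command (? for help): n          (create partitions as normal)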

The EFI partition layout looks as follows (relative to the partition, which I have mounted as /boot/efi):

/EFI/BOOT/BOOTAA64.EFI
/EFI/ubuntu/grubaa64.efi
/dtb/allwinner/sun50i-h5-libretech-all-h3-cc.dtb

The EFI files are written as part of grub-install -v --no-nvram /dev/mmcblk0, and the DTB comes from the U-Boot compilation. Amazingly, this is all U-Boot needs to hand off to GRUB. You don’t even need a boot.scr! U-Boot’s built-in fallback boot script will notice BOOTAA64.EFI, load it along with the DTB, and bootefi them. GRUB then finds its second-stage modules and grub.cfg in the main partition, and continues on as normal.

The implication here is, aside from a HEAD-compiled U-Boot living in the MBR, I have a completely standard Ubuntu 18.04 LTS (“bionic”) ARM64 installation running on the Tritium H5. No extra packages needed, and the kernel is bionic’s standard 4.15. grub-efi-arm64 is installed and managing grub.cfg. USB, SD, Ethernet, serial console and hardware virtualization have all been tested. IR hasn’t been tested (but I see it in dmesg). Sound hasn’t been tested, and video turns off as soon as kernel init is done, but that’s OK for me as I’m using it in a server capacity. The only major problem is the kernel will panic right before shutdown/reboot, so a hands-off remote reboot is not possible.

Full lshw output is here, but in short:

michelle
    description: Desktop Computer
    product: Libre Computer Board ALL-H3-CC H5
    vendor: sunxi
    serial: 828000018ab9133a
    width: 64 bits
    capabilities: smbios-3.0 dmi-3.0 smp cp15_barrier setend swp
    configuration: chassis=desktop uuid=38323830-3030-3031-3861-623931333361
  *-core
       description: Motherboard
       product: sunxi
       vendor: sunxi
       physical id: 0
     *-firmware:0
          description: BIOS
          vendor: U-Boot
          physical id: 0
          version: 2018.05-00424-g2a8e80dfce
          date: 05/27/2018
          size: 1MiB
          capabilities: pci upgrade bootselect i2oboot
     *-firmware:1
          description: BIOS
          physical id: 1
          size: 1MiB
          capabilities: pcmcia shadowing escd cdboot socketedrom
     *-cpu:{0,1,2,3}
          description: CPU
          product: cpu
          physical id: {2,3,4,5}
          bus info: cpu@{0,1,2,3}
          capabilities: fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
     *-memory
          description: System memory
          physical id: 6
          size: 1995MiB

(Futurama naming scheme, if you’re wondering about the hostname.)

If you’re one of the approximately 500 people who got an H5 as part of the Kickstarter, I hope you find this information useful.

dsari (Do Something and Record It) CI 2.0 released

In 2015, I released dsari (Do Something and Record It), a small but very powerful continuous integration (CI) system. Over the past few years, I’ve been adding bits and pieces to it, and realized I haven’t made a proper release since shortly after the initial release.

Version 2.0 includes the following:

  • 90% refactor, and rewritten in Python 3. Note when upgrading you will need to install Python 3 versions of dependent libraries (e.g. apt-get install python3-croniter).
  • Configurable database support: SQLite (default), PostgreSQL, MySQL, or MongoDB.
  • Added dsari-prometheus-exporter for exporting job/run metrics to Prometheus.
  • Added dsari-info command, allowing for CLI retrieval of various dsari job/run information.
  • Added support for iCalendar RRULE schedule definitions (via python-dateutil), instead of or in addition to hashed cron format (via croniter).
  • New job option: concurrent_runs - Allows multiple runs of the same job to run concurrently. (See the config sketch after this list for this and the RRULE schedules.)
  • HUP reloading is now immediate, rather than next time dsari-daemon is idle.
  • Triggers may now request a time to run the trigger, instead of as soon as possible.
  • Many minor fixes.
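As a rough illustration of a few of those (option names and formats from memory; check the dsari documentation for specifics), a job with an RRULE schedule, concurrent runs, and a constrained concurrency group might look like this in the config:

{
    "concurrency_groups": {
        "sample-group": {
            "max": 1
        }
    },
    "jobs": {
        "sample1": {
            "command": ["/usr/local/bin/do-something"],
            "schedule": "RRULE:FREQ=MINUTELY",
            "concurrency_groups": ["sample-group"],
            "concurrent_runs": true
        }
    }
}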

Most of these changes were done gradually over the last two years, but what got me on the latest development kick was actually a recent facepalm moment. I had the idea to take Nagios NRPE definitions, run them, and export the data (exit code, run duration, etc.) to Prometheus, rather than using Nagios directly. But then I started thinking “I’ll need to deal with tracking forks, concurrency, etc. That’ll be a pain.”

“Oh wait, I’ve already solved this. This sounds like a situation where I need to Do Something and Record It(s metrics).”

With that, I wrote dsari-prometheus-exporter and integrated it with dsari. The metrics it exports can be useful; see below for the metrics from a sample job.

The sample1 job is part of a severely constrained concurrency group, where only one run in the group may be running at once. The actual run time is remarkably consistent at about 90 seconds, but since there are a number of jobs in this concurrency group and they all want to run every minute, there is always a waiting list. As a result, this job starts about 70 seconds late in the median case. But the spread is quite large, from (effectively) immediately to nearly 6 minutes late.

(The fact that over 1300 runs, the job has spent almost exactly the same amount of time running as being blocked is also interesting, but a coincidence.)

# HELP dsari_run_latency_seconds Length of time spent between scheduled start and actual start
# TYPE dsari_run_latency_seconds summary
dsari_run_latency_seconds{job_name="sample1",quantile="0.01"} 0.0027895
dsari_run_latency_seconds{job_name="sample1",quantile="0.1"} 34.53752
dsari_run_latency_seconds{job_name="sample1",quantile="0.5"} 69.8487545
dsari_run_latency_seconds{job_name="sample1",quantile="0.9"} 170.2028667
dsari_run_latency_seconds{job_name="sample1",quantile="0.99"} 358.96789803
dsari_run_latency_seconds_count{job_name="sample1"} 1300
dsari_run_latency_seconds_sum{job_name="sample1"} 113835.42681500006

# HELP dsari_run_duration_seconds Length of time spent in a run
# TYPE dsari_run_duration_seconds summary
dsari_run_duration_seconds{job_name="sample1",quantile="0.01"} 5.06363971
dsari_run_duration_seconds{job_name="sample1",quantile="0.1"} 90.03290849999999
dsari_run_duration_seconds{job_name="sample1",quantile="0.5"} 90.060451
dsari_run_duration_seconds{job_name="sample1",quantile="0.9"} 90.0872659
dsari_run_duration_seconds{job_name="sample1",quantile="0.99"} 90.10349761
dsari_run_duration_seconds_count{job_name="sample1"} 1300
dsari_run_duration_seconds_sum{job_name="sample1"} 112009.71985500008

# HELP dsari_last_run_exit_code Numeric exit code of the last run for a job
# TYPE dsari_last_run_exit_code gauge
dsari_last_run_exit_code{job_name="sample1"} 143

# HELP dsari_last_run_schedule_time Schedule time of the last run for a job, seconds since epoch
# TYPE dsari_last_run_schedule_time gauge
dsari_last_run_schedule_time{job_name="sample1"} 1524347585.946814

# HELP dsari_last_run_start_time Start time of the last run for a job, seconds since epoch
# TYPE dsari_last_run_start_time gauge
dsari_last_run_start_time{job_name="sample1"} 1524347824.892004

# HELP dsari_last_run_stop_time Stop time of the last run for a job, seconds since epoch
# TYPE dsari_last_run_stop_time gauge
dsari_last_run_stop_time{job_name="sample1"} 1524347913.404784

# HELP dsari_run_count Number of runs performed for a job
# TYPE dsari_run_count counter
dsari_run_count{job_name="sample1"} 1300

# HELP dsari_run_failure_count Number of failed runs performed for a job
# TYPE dsari_run_failure_count counter
dsari_run_failure_count{job_name="sample1"} 56

# HELP dsari_run_success_count Number of successful runs performed for a job
# TYPE dsari_run_success_count counter
dsari_run_success_count{job_name="sample1"} 1244

Let's Encrypt wildcard certificates

The vsix.us IP tests site is now fully HTTPS. About a year ago I converted all the sites I could over to HTTPS with Let’s Encrypt, and while vsix.us itself was converted to HTTPS, the “nocache” wildcard site remained at HTTP because Let’s Encrypt did not have wildcard support at the time.

About a week ago they announced wildcard support, and today I registered *.nocache.vsix.us. The process wasn’t seamless, and this blog post is meant to be some notes on getting it working, not necessarily a guide.

Let’s Encrypt only supports wildcard registration via the ACME v2 protocol and dns-01 validation. I’m not exactly sure why it’s dns-01 only, as they are not checking multiple subdomains, just using the TXT record of _acme-challenge.example.com to validate *.example.com. Unless I’m missing something, http-01 validation on example.com would be just as secure.

In my case, this means configuring BIND for secure dynamic updates. The documentation for certbot-dns-rfc2136 is straightforward, but my setup is complicated by the fact that my zones are DNSSEC-signed, so BIND itself needs to be able to re-sign the zone on the fly. (Normally, I re-sign the zones myself after making zone updates.) Ultimately I split nocache.vsix.us out into its own zone and configured BIND so it could re-sign upon update.
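The moving parts look roughly like this (a sketch: the key name, secret and server address are placeholders; see the certbot-dns-rfc2136 documentation for the authoritative versions). On the BIND side:

key "certbot." {
        algorithm hmac-sha512;
        secret "bWFkZSB1cCBrZXkgbWF0ZXJpYWw=";
};

zone "nocache.vsix.us" {
        type master;
        file "nocache.vsix.us.zone";
        update-policy {
            grant certbot. name _acme-challenge.nocache.vsix.us. txt;
        };
        # re-sign the zone automatically after dynamic updates
        auto-dnssec maintain;
        inline-signing yes;
};

And the matching credentials file, /etc/letsencrypt/dns-01.ini:

dns_rfc2136_server = 127.0.0.1
dns_rfc2136_port = 53
dns_rfc2136_name = certbot.
dns_rfc2136_secret = bWFkZSB1cCBrZXkgbWF0ZXJpYWw=
dns_rfc2136_algorithm = HMAC-SHA512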

certbot needs to be at least version 0.22 to support ACME v2, which is required for wildcards. The Ubuntu PPA includes 0.22.2 as of this writing, but it does not include certbot-dns-rfc2136, which is needed for dns-01 validation. It’s relatively easy to install manually, though:

git clone https://github.com/certbot/certbot
cd certbot/certbot-dns-rfc2136
sudo pip3 install .

Also of note: 0.22 still defaults to ACME v1. You will need to pass the v2 endpoint via --server, and keep in mind that the v2 endpoint is essentially a completely different service, so it will once again ask registration questions (email address, etc). With all that in place, here was the final invocation for me:

certbot certonly \
  --server 'https://acme-v02.api.letsencrypt.org/directory' \
  --dns-rfc2136 \
  --dns-rfc2136-credentials /etc/letsencrypt/dns-01.ini \
  -d nocache.vsix.us -d '*.nocache.vsix.us'

Red Beans and Rice

Just a short post here (and no pictures, because let’s be honest, red beans and rice doesn’t actually look that appetizing once it’s been reduced down to a brown mush). Every few weeks I’ll stop by Popeyes – I love their spicy chicken strips with buffalo sauce – and will get red beans and rice as a side. A few weeks ago I decided to try to recreate the recipe, mostly by staring into the spice rack, picking a random spice and thinking “yeah, that’ll work”.

After a few iterations, I’ve got a recipe which is not 100% exact to Popeyes, but is still wonderfully tasty. And as usual, is easy to make with pantry items. (You could of course do things like use real celery instead of flakes, add salt pork or shrimp, etc, but this is a good simple base.)

  • 4 cans (approx 15 oz each) red kidney beans, drained
  • 1 can beef broth
  • ½ can water
  • 1 chicken bouillon cube
  • 2 tsp celery flakes
  • 2 tsp parsley flakes
  • 2 bay leaves
  • 1 tsp garlic powder
  • ½ tsp onion powder
  • ½ tsp crushed red pepper
  • ¼ tsp cumin

Combine all ingredients, cover and bring to a boil. Reduce to a simmer for 30 minutes, stirring halfway. Uncover, remove bay leaves and mash moderately (most but not all beans should be broken). Leave uncovered, re-add bay leaves and simmer for another 30 minutes, stirring every 10 minutes. Consistency should be a thick sauce. Remove bay leaves. Serve over rice.
