Ryan Finnie

Linux on the Libre Computer Tritium ALL-H3-CC H5

Linux on ARM hardware has been a bit of a hobby of mine for years now. My first ARM Linux installation was on a Linksys NSLU2 NAS. It was horribly, unbelievably slow. But hey, it was fun.

I’ve got a Cubietruck in my closet. Two Utilite mini PCs. More Raspberry Pis than I know what to do with. We’ve got a fair number of ARM systems at work. Up until about 5 years ago they were mostly assorted vendor devel / eval boards, and we maintained an internal wiki page for dealing with them, named WhichArmsSuckTheMost. Most of those were killed in favor of Calxeda blades, which looked to be the future at the time. Sadly, Calxeda went out of business, but we still have a few clusters left. These days the new hotness is HP Moonshot, and the specs on those are humbling even by normal server standards. (8 cores, 64 GiB RAM, and a modest 250 GB SSD per compute cartridge. Multiply by 45 for every 2U chassis.)

Last year I found a Kickstarter for the Libre Computer Tritium, a series of Allwinner-based ARM boards in a Raspberry Pi 3-compatible form factor. I backed the 2 GiB RAM model, which was expected to be fulfilled in January. That target was missed since, well, it’s a Kickstarter project, and I eventually forgot about it, until it unexpectedly showed up in my mailbox Friday.

Libre Computer Tritium ALL-H3-CC H5

The hardware is slick and good quality, and indeed it fits in standard Raspberry Pi cases (though the chip layout isn’t identical, so cases which expect e.g. a heatsinked SoC chip to be in a certain place won’t be compatible). When it arrived, I started looking at what to do with it.

Here’s where the company sort of dropped the ball, in my opinion. They offer 3 models: a 512 MiB Allwinner H2+, a 1 GiB Allwinner H3 (the H2+ and H3 are effectively identical 32-bit SoCs), and a 2 GiB 64-bit Allwinner H5 (which is what I got). The H5 is effectively a completely different and incompatible SoC from the H2+/H3. Further confusing the matter, the model name for all three is “ALL-H3-CC”.

As of this writing, there is only a single Armbian image, and only for the H2+/H3. The comments on the Kickstarter page are understandably confused, with people trying to use that image on their H5 and getting no video or activity other than a red power LED. I did some digging and found that the ALL-H3-CC H5 is very similar to the Orange Pi PC 2, which has a supported image. I tried it out, and it mostly works: HDMI video (and presumably audio), USB and serial all work, but Ethernet does not (though a random USB Ethernet adapter I added worked fine). I posted my findings in the Kickstarter comments section to help people at least verify their hardware is working until official images are produced.

Update: As of 2018-06-06, Armbian does not yet have a Tritium H5 image, but LoveRPi has produced Tritium H3/H5 images based on Armbian.

But I didn’t want to leave it at that. What follows is by no means a how-to; more like a stream of observations. It’s also specific to the H5 model, as that’s the only one I have.

Turns out U-Boot has everything needed for booting, and works beautifully. It was added just after v2018.05, so you’ll need to compile from git HEAD, using libretech_all_h3_cc_h5_defconfig. This page goes into detail about the whats and whys of booting on Allwinner A53-based SoCs (it’s specific to the Pine64 platform, but the concepts all translate to the H5).
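The build itself is quick. What follows is a sketch rather than gospel: the toolchain package name is Ubuntu's, the BL31 path is a placeholder, and 64-bit Allwinner targets generally want an ARM Trusted Firmware BL31 image, per the linux-sunxi documentation:

```shell
# Prerequisite (Ubuntu): apt-get install gcc-aarch64-linux-gnu bison flex swig
git clone git://git.denx.de/u-boot.git
cd u-boot
make CROSS_COMPILE=aarch64-linux-gnu- libretech_all_h3_cc_h5_defconfig
# BL31 points at an ARM Trusted Firmware build for the H5
make CROSS_COMPILE=aarch64-linux-gnu- BL31=/path/to/bl31.bin -j"$(nproc)"
# The artifact to write to the SD card is u-boot-sunxi-with-spl.bin
```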

Traditionally, U-Boot works more like lilo did back in the day: write a boot script that points to a kernel and initrd at specific locations and boots them. If you’re fancy, you might have the option to boot a backup kernel. But U-Boot has a recent party trick on supported platforms: UEFI booting. Let’s have U-Boot hand off to GRUB to do the actual boot management!

On my system, the first partition on the SD card starts 2048 512-byte sectors in, is 512 MiB long, MBR partition type “ef”, and vfat-formatted. 2048 has been the default first sector for a long time, but it is important here: the SoC expects the boot code to be 8 KiB from the beginning of the disk, and that’s where you wrote it as part of the U-Boot compilation above. MBR is recommended since, by default, 8 KiB into a GPT is actively used for partition data. (I didn’t know an EFI System Partition was even possible on MBR until today; I thought it was GPT only.) A 512 MiB partition is quite overkill, but I’d recommend at least 64 MiB.
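Writing the compiled U-Boot to that 8 KiB offset is the standard sunxi one-liner (the device name here is an example; triple-check yours before running dd):

```shell
# 8 KiB into the card = block size 1024 x seek 8
dd if=u-boot-sunxi-with-spl.bin of=/dev/mmcblk0 bs=1024 seek=8 conv=fsync
```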

Update: If you wish to use GPT, it is possible to make it so the area between 8 KiB and 1 MiB is not used by the GPT. This functionality does not appear to be in stock fdisk’s GPT handling, but you can do it in gdisk. Type o to create a new GPT, then x to go to the expert menu, then j to set the partition table beginning sector. By default, this is sector 2 (1024 bytes in), but you can change this to sector 2048 (1 MiB in). Then type m to exit the expert mode and continue partitioning as normal, and the created partitions will default to starting at sector 4096.

The EFI partition layout looks as follows (relative to the partition, which I have mounted as /boot/efi):

/EFI/BOOT/BOOTAA64.EFI
/EFI/ubuntu/grubaa64.efi
/dtb/allwinner/sun50i-h5-libretech-all-h3-cc.dtb

The EFI files are written as part of grub-install -v --no-nvram /dev/mmcblk0, and the DTB comes from the U-Boot compilation. Amazingly, this is all U-Boot needs to hand off to GRUB. You don’t even need a boot.scr! U-Boot’s built-in fallback boot script will notice BOOTAA64.EFI, load it along with the DTB, and bootefi them. GRUB then finds its second-stage modules and grub.cfg in the main partition, and continues on as normal.
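Concretely, populating the ESP looked something like this for me (a sketch: device names and the U-Boot build path are examples, and the DTB location within the build tree may vary by U-Boot version):

```shell
# Mount the EFI System Partition (first partition on the SD card)
mount /dev/mmcblk0p1 /boot/efi
# Let GRUB install its EFI binaries; --no-nvram since there are no
# UEFI boot variables to manage on this board
grub-install -v --no-nvram /dev/mmcblk0
# Copy the DTB from the U-Boot build so the fallback boot script finds it
mkdir -p /boot/efi/dtb/allwinner
cp u-boot/arch/arm/dts/sun50i-h5-libretech-all-h3-cc.dtb \
    /boot/efi/dtb/allwinner/
```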

The implication here is, aside from a HEAD-compiled U-Boot living in the MBR, I have a completely standard Ubuntu 18.04 LTS (“bionic”) ARM64 installation running on the Tritium H5. No extra packages needed, and the kernel is bionic’s standard 4.15. grub-efi-arm64 is installed and managing grub.cfg. USB, SD, Ethernet, serial console and hardware virtualization have all been tested. IR hasn’t been tested (but I see it in dmesg). Sound hasn’t been tested, and video turns off as soon as kernel init is done, but that’s OK for me as I’m using it in a server capacity. The only major problem is the kernel will panic right before shutdown/reboot, so a hands-off remote reboot is not possible.

Full lshw output is here, but in short:

michelle
    description: Desktop Computer
    product: Libre Computer Board ALL-H3-CC H5
    vendor: sunxi
    serial: 828000018ab9133a
    width: 64 bits
    capabilities: smbios-3.0 dmi-3.0 smp cp15_barrier setend swp
    configuration: chassis=desktop uuid=38323830-3030-3031-3861-623931333361
  *-core
       description: Motherboard
       product: sunxi
       vendor: sunxi
       physical id: 0
     *-firmware:0
          description: BIOS
          vendor: U-Boot
          physical id: 0
          version: 2018.05-00424-g2a8e80dfce
          date: 05/27/2018
          size: 1MiB
          capabilities: pci upgrade bootselect i2oboot
     *-firmware:1
          description: BIOS
          physical id: 1
          size: 1MiB
          capabilities: pcmcia shadowing escd cdboot socketedrom
     *-cpu:{0,1,2,3}
          description: CPU
          product: cpu
          physical id: {2,3,4,5}
          bus info: cpu@{0,1,2,3}
          capabilities: fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
     *-memory
          description: System memory
          physical id: 6
          size: 1995MiB

(Futurama naming scheme, if you’re wondering about the hostname.)

If you’re one of the approximately 500 people who got an H5 as part of the Kickstarter, I hope you find this information useful.

dsari (Do Something and Record It) CI 2.0 released

In 2015, I released dsari (Do Something and Record It), a small but very powerful continuous integration (CI) system. Over the past few years, I’ve been adding bits and pieces to it, and realized I haven’t made a proper release since shortly after the initial release.

Version 2.0 includes the following:

  • Roughly 90% refactored, and rewritten in Python 3. Note that when upgrading you will need to install Python 3 versions of dependent libraries (e.g. apt-get install python3-croniter).
  • Configurable database support: SQLite (default), PostgreSQL, MySQL, or MongoDB.
  • Added dsari-prometheus-exporter for exporting job/run metrics to Prometheus.
  • Added dsari-info command, allowing for CLI retrieval of various dsari job/run information.
  • Added support for iCalendar RRULE schedule definitions (via python-dateutil), instead of or in addition to hashed cron format (via croniter).
  • New job option: concurrent_runs - Allows multiple runs of the same job to run concurrently.
  • HUP reloading is now immediate, rather than next time dsari-daemon is idle.
  • Triggers may now request a time at which to run, instead of running as soon as possible.
  • Many minor fixes.

Most of these changes were made gradually over the last two years, but what got me on the latest development kick was a recent facepalm moment. I had the idea to take Nagios NRPE definitions, run them, and export the data (exit code, running length, etc.) to Prometheus, rather than using Nagios directly. But then I started thinking “I’ll need to deal with tracking forks, concurrency, etc, etc. That’ll be a pain.”

“Oh wait, I’ve already solved this. This sounds like a situation where I need to Do Something and Record It(s metrics).”

With that, I wrote dsari-prometheus-exporter and integrated it with dsari. The metrics it exports can be useful; see below for the metrics from a sample job.

The sample1 job is part of a severely constrained concurrency group, configured so that only one run in the group may execute at a time. The actual run time is remarkably consistent at about 90 seconds, but since a number of jobs in this concurrency group all want to run every minute, there is always a waiting list. As a result, this job starts about 70 seconds late on average. But the spread is quite large, from (effectively) immediately to nearly 6 minutes late.

(The fact that over 1300 runs, the job has spent almost exactly the same amount of time running as being blocked is also interesting, but a coincidence.)
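For context, a concurrency-limited job along the lines of sample1 can be expressed in dsari’s JSON configuration. This is a hedged sketch from memory: the command and group names are made up, and the exact field names should be checked against the dsari documentation.

```json
{
    "concurrency_groups": {
        "sample_group": {
            "max": 1
        }
    },
    "jobs": {
        "sample1": {
            "command": ["/usr/local/bin/sample-job"],
            "schedule": "* * * * *",
            "concurrency_groups": ["sample_group"]
        }
    }
}
```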

# HELP dsari_run_latency_seconds Length of time spent between scheduled start and actual start
# TYPE dsari_run_latency_seconds summary
dsari_run_latency_seconds{job_name="sample1",quantile="0.01"} 0.0027895
dsari_run_latency_seconds{job_name="sample1",quantile="0.1"} 34.53752
dsari_run_latency_seconds{job_name="sample1",quantile="0.5"} 69.8487545
dsari_run_latency_seconds{job_name="sample1",quantile="0.9"} 170.2028667
dsari_run_latency_seconds{job_name="sample1",quantile="0.99"} 358.96789803
dsari_run_latency_seconds_count{job_name="sample1"} 1300
dsari_run_latency_seconds_sum{job_name="sample1"} 113835.42681500006

# HELP dsari_run_duration_seconds Length of time spent in a run
# TYPE dsari_run_duration_seconds summary
dsari_run_duration_seconds{job_name="sample1",quantile="0.01"} 5.06363971
dsari_run_duration_seconds{job_name="sample1",quantile="0.1"} 90.03290849999999
dsari_run_duration_seconds{job_name="sample1",quantile="0.5"} 90.060451
dsari_run_duration_seconds{job_name="sample1",quantile="0.9"} 90.0872659
dsari_run_duration_seconds{job_name="sample1",quantile="0.99"} 90.10349761
dsari_run_duration_seconds_count{job_name="sample1"} 1300
dsari_run_duration_seconds_sum{job_name="sample1"} 112009.71985500008

# HELP dsari_last_run_exit_code Numeric exit code of the last run for a job
# TYPE dsari_last_run_exit_code gauge
dsari_last_run_exit_code{job_name="sample1"} 143

# HELP dsari_last_run_schedule_time Schedule time of the last run for a job, seconds since epoch
# TYPE dsari_last_run_schedule_time gauge
dsari_last_run_schedule_time{job_name="sample1"} 1524347585.946814

# HELP dsari_last_run_start_time Start time of the last run for a job, seconds since epoch
# TYPE dsari_last_run_start_time gauge
dsari_last_run_start_time{job_name="sample1"} 1524347824.892004

# HELP dsari_last_run_stop_time Stop time of the last run for a job, seconds since epoch
# TYPE dsari_last_run_stop_time gauge
dsari_last_run_stop_time{job_name="sample1"} 1524347913.404784

# HELP dsari_run_count Number of runs performed for a job
# TYPE dsari_run_count counter
dsari_run_count{job_name="sample1"} 1300

# HELP dsari_run_failure_count Number of failed runs performed for a job
# TYPE dsari_run_failure_count counter
dsari_run_failure_count{job_name="sample1"} 56

# HELP dsari_run_success_count Number of successful runs performed for a job
# TYPE dsari_run_success_count counter
dsari_run_success_count{job_name="sample1"} 1244

Let's Encrypt wildcard certificates

The vsix.us IP tests site is now fully HTTPS. About a year ago I converted all the sites I could over to HTTPS with Let’s Encrypt, and while vsix.us itself was converted to HTTPS, the “nocache” wildcard site remained at HTTP because Let’s Encrypt did not have wildcard support at the time.

About a week ago they announced wildcard support, and today I registered *.nocache.vsix.us. The process wasn’t seamless, and this blog post is meant to be some notes on getting it working, not necessarily a guide.

Let’s Encrypt only supports wildcard registration via the ACME v2 protocol and dns-01 validation. I’m not exactly sure why it’s dns-01 only, as they are not checking multiple subdomains, just using the TXT record of _acme-challenge.example.com to validate *.example.com. Unless I’m missing something, http-01 validation on example.com would be just as secure.

In my case, this means configuring BIND for secure dynamic updates. The documentation for certbot-dns-rfc2136 is straightforward, but my setup is complicated by the fact that my zones are DNSSEC-signed, so BIND itself needs to be able to re-sign the zone on the fly. (Normally, I re-sign the zones myself after making zone updates.) Ultimately I split nocache.vsix.us out into its own zone and configured BIND so it could re-sign upon update.
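For reference, the BIND side of a setup like this can be sketched as follows. The key name, secret, and file paths are placeholders, and the DNSSEC options assume BIND 9.9 or later; treat it as a starting point, not a drop-in config:

```
key "certbot." {
        algorithm hmac-sha512;
        secret "base64-tsig-key-material==";
};

zone "nocache.vsix.us" {
        type master;
        file "/var/lib/bind/db.nocache.vsix.us";
        // allow the key to update only the ACME challenge TXT record
        update-policy {
                grant certbot. name _acme-challenge.nocache.vsix.us. TXT;
        };
        // let BIND maintain DNSSEC signatures across dynamic updates
        auto-dnssec maintain;
        key-directory "/etc/bind/keys";
};
```

The matching /etc/letsencrypt/dns-01.ini credentials file then carries the same key material via the plugin’s dns_rfc2136_server, dns_rfc2136_name, dns_rfc2136_secret, and dns_rfc2136_algorithm options.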

certbot needs to be at least version 0.22 to support ACME v2, which is needed for wildcards. The Ubuntu PPA includes 0.22.2 as of this writing, but it does not include the certbot-dns-rfc2136 plugin needed for dns-01 validation. It’s relatively easy to install manually, though:

git clone https://github.com/certbot/certbot
cd certbot/certbot-dns-rfc2136
sudo pip3 install .

Also of note: 0.22 still defaults to ACME v1. You will need to pass the v2 endpoint via --server, but keep in mind that the v2 endpoint is essentially a completely different service, and it will once again ask registration questions (email address, etc). With all that in place, here was the final invocation for me:

certbot certonly \
  --server 'https://acme-v02.api.letsencrypt.org/directory' \
  --dns-rfc2136 \
  --dns-rfc2136-credentials /etc/letsencrypt/dns-01.ini \
  -d nocache.vsix.us -d '*.nocache.vsix.us'

Red Beans and Rice

Just a short post here (and no pictures, because let’s be honest, red beans and rice doesn’t actually look that appetizing once it’s been reduced down to a brown mush). Every few weeks I’ll stop by Popeyes – I love their spicy chicken strips with buffalo sauce – and will get red beans and rice as a side. A few weeks ago I decided to try to recreate the recipe, mostly by staring into the spice rack, picking a random spice and thinking “yeah, that’ll work”.

After a few iterations, I’ve got a recipe which is not 100% exact to Popeyes, but is still wonderfully tasty. And as usual, is easy to make with pantry items. (You could of course do things like use real celery instead of flakes, add salt pork or shrimp, etc, but this is a good simple base.)

  • 4 cans (approx 15 oz each) red kidney beans, drained
  • 1 can beef broth
  • ½ can water
  • 1 chicken bouillon cube
  • 2 tsp celery flakes
  • 2 tsp parsley flakes
  • 2 bay leaves
  • 1 tsp garlic powder
  • ½ tsp onion powder
  • ½ tsp crushed red pepper
  • ¼ tsp cumin

Combine all ingredients, cover and bring to a boil. Reduce to a simmer for 30 minutes, stirring halfway. Uncover, remove bay leaves and mash moderately (most but not all beans should be broken). Leave uncovered, re-add bay leaves and simmer for another 30 minutes, stirring every 10 minutes. Consistency should be a thick sauce. Remove bay leaves. Serve over rice.

Home Sweet Home: The "Things"

It all began a year ago with a dot.

Late last year, when replacing all my home’s lights with LEDs (among other projects), I had been spending a lot of time at Home Depot and Lowes. While at Lowes in November 2016, I noticed they sold Amazon Echo products, and the Echo Dot was only $50. I picked one up and started playing with it.

Echo mostly works as you’d expect from a Siri-like device: you say “Alexa, …” and it responds. The full-sized Echo supposedly has decent sound, but the Dot is compact and has, at best, satisfactory sound for music. However, if you want you may hook it up to a speaker via Bluetooth, and that works well. I’ll usually wake up with “Alexa, play some music”, and it often does a good job figuring out music I would like to hear. “Alexa, what’s the weather like?” “Alexa, what movies are playing?” Etc. A little limited in day to day functionality, but worth the $50 as a novelty.

But I was drawn to the developer functionality. I wrote a few apps, one of which I tried to get submitted for publication, but they kept coming back with technical rejections to fix, and I never completed the process. But it’s still an interesting platform.

I was also curious about the “smart home” integration possibilities. I had mostly dismissed the whole “Internet of Things” idea as gimmicky, but there was one specific use case I could think of for me personally. The backyard patio lights are a string of overhead lights which run to the side of the house and attach to a standard outdoor outlet which is in a slightly inconvenient location. I had been thinking of ways to fix this, and a weatherproof remote controlled relay switch would be useful.

The Samsung SmartThings hub was on sale on Black Friday 2016 for $50 (which as of 2017 is now its regular price), got good reviews, and seemed to have the best variety of device support. Its two main supported protocols are ZigBee and Z-Wave. Both are point-to-point mesh networks: you can pair both Device A and Device B to your hub, and if, say, Device B is not within range of the hub but can talk to Device A, Device A will relay the commands for it. It also supports several other Internet-connected services, such as the thermostat I already happened to have.

I bought the SmartThings hub, as well as a weatherproof control module. The pairing process was simple: press a button on the module to initiate pairing, accept it on the smartphone app, and now I could turn the patio lights on and off from my phone. It also has Alexa integration, so once those were paired, I could say “Alexa, turn on patio lights”.

A week later, I saw that Lowes had their Iris ZigBee (indoor) outlet control modules on sale, and bought a few to play with. Iris is a competing home automation platform to Samsung SmartThings, but as I understood it, SmartThings supported Z-Wave and ZigBee devices from any manufacturer (which gave it a leg up on competitors which tend to be walled gardens). I began the pairing process and… problem. It simply showed up as “Thing” and I couldn’t do anything with it.

I was all set to return them, but it was then I learned about the full extent of SmartThings compatibility. While the app has a very simple interface and limited information and actions available, it turns out there is actually a web interface for developers. Not only does this interface give you a lot more information (radio levels, debugging events, etc), but it actually allows you to write your own device handlers, in a scripting language called Groovy, which is basically the Java equivalent of Lua, and is often used in Jenkins plugins.

With enough work, I could have reverse engineered the Iris outlet and written my own device handler, but SmartThings has a large developer community, and it’s likely someone has already written a device handler for nearly every Z-Wave and ZigBee device out there, even if it’s not part of the core SmartThings platform. I quickly found an Iris outlet handler, imported it, and then was able to pair the outlets.

An interesting feature of these Iris outlets is that, while they are controlled via ZigBee, they also have a Z-Wave radio in them and can act as Z-Wave repeaters, strengthening the Z-Wave mesh as a whole. However, SmartThings does not know about Z-Wave devices which literally do nothing but act as a repeater, so when I added the Z-Wave side, it fell back to a switched outlet. Not really a problem (in the phone UI, it would show a toggle switch which did nothing), but it did inspire me to learn Groovy and how to write device handlers, and I wrote a proper device handler for Z-Wave repeaters. It was actually a surprising amount of work to write a handler for a device which does literally nothing.

Temperature graph

Since then, I went a little overboard in the novelty of it, and have bought a number of additional devices. As of now, the current list is:

  • Honeywell Wi-Fi thermostat - Previously mentioned; I happened to have this for a few years before I bought the SmartThings hub. It integrates with Alexa, but also with SmartThings, so I have it paired with SmartThings, which exports everything to Alexa and gives me more options than Alexa directly. But honestly, it’s rare I even touch the thermostat beyond its normal schedule.
  • GE Z-Wave outdoor module - Previously mentioned, currently serving the backyard patio lights. They turn on each day at sundown, and off at 11:30PM. (SmartThings supports dynamic “sunrise” and “sundown” in addition to static times.)
  • Leviton Z-Wave appliance module - Currently serving some LED strip lights I have mounted in the alcove above the TV in the living room; I have them listed as “movie lights”.
  • Iris ZigBee (Z-Wave) smart plug - Previously mentioned; I have two of these but they’re not controlling anything at the moment.
  • Iris ZigBee smart fob - A little fob with four buttons. I can’t remember what I’ve set them to do as I never use it.
  • GE Z-Wave in-wall light dimmer and GE Z-Wave in-wall light switch - Z-Wave switches which are meant to replace traditional in-wall light switches. They work like normal light switches (and in the dimmer’s case you can hold on/off to increase/decrease the light level), but are also Z-Wave controllable. I have the dimmer on the front porch light and the normal switch on the outdoor rear wall light.
  • Cree ZigBee 60 W equivalent LED bulbs - Yes, the light bulb itself is ZigBee compatible. I have four of them, two for the living room lights and two for the bedroom nightstand lights. I’ve got each of the pairs in device groups, so each night, it’s “Alexa, turn off living room lights”. As for the bedroom, I have it set to turn them on at 11PM, as I’ll usually go to bed shortly after and read for a while. Then “Alexa, turn off bedroom lights”. They’re even dimmable (“Alexa, set living room lights to 50%.”) These are the devices I use most often day to day.
  • SmartThings ZigBee multipurpose sensor - This battery powered sensor has several measurements: ambient temperature, contact (like those small battery powered alarms you can add to windows), axis rotational position, and motion. I currently have two, one mounted to the inside of the front door to monitor when it’s opened, and one mounted to the garage door. The device handler’s “garage door” mode is interesting; rather than use the door contact sensor, it uses the axis rotational sensor. When the sensor is vertical, the garage door is closed. When it’s horizontal, the garage door has been rolled up.
  • Zooz Z-Wave 4-in-1 sensor - Another sensor, though this provides some different measurements than the previous: temperature, humidity, light level and motion. It even, oddly, has a tamper protection switch within the case which trips if opened, and is exposed as an acceleration event. Mounted in the hallway, it’s actually sensitive enough that you could figure out when I take a shower by graphing the humidity levels a few rooms over.
  • TP-Link Wi-Fi smart plug - This is actually neither Z-Wave nor ZigBee, but straight Wi-Fi. Normally it calls home and uses a separate app for control (“Kasa”), but it has a local port open, and accepts obfuscated commands (people, XOR is not encryption). I wrote a device handler and proxy to allow SmartThings direct control of it, and it currently controls a floor fan in the living room during the summer.
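To illustrate just how thin that obfuscation is: each byte of the payload is XORed with a running key that starts at 171, and each resulting ciphertext byte becomes the key for the next byte. A sketch of the encoding side, using a get_sysinfo command the Kasa protocol is known to accept (portable awk; the bxor helper exists because POSIX awk has no XOR builtin):

```shell
# Obfuscate a TP-Link/Kasa command: autokey XOR with initial key 171.
printf '%s' '{"system":{"get_sysinfo":{}}}' \
    | od -An -v -tu1 \
    | awk '
        # bitwise XOR built from arithmetic (POSIX awk has no xor())
        function bxor(a, b,    r, p) {
            r = 0; p = 1
            while (a > 0 || b > 0) {
                if (a % 2 != b % 2) r += p
                a = int(a / 2); b = int(b / 2); p *= 2
            }
            return r
        }
        BEGIN { key = 171 }
        { for (i = 1; i <= NF; i++) {
              c = bxor($i, key); key = c; printf "%02x", c
          } }
        END { printf "\n" }'
```

For this payload the obfuscated stream starts d0 f2 81. Decoding is the mirror image (XOR each ciphertext byte with the running key, then set the key to the ciphertext byte just consumed), which is why anyone on the network can read and forge commands.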