Raspberry Pi 2 Ubuntu / Raspbian benchmarks

One of the nice things about the Raspberry Pi 2 is that it has a Cortex-A7-based ARMv7 CPU, as opposed to the original Pi's ARMv6 CPU. This not only allows many more distributions to run on it (most armhf distributions target ARMv7 as a minimum), but also brings the performance benefits of ARMv7 userland code. After releasing an Ubuntu 14.04 (trusty) image for the Raspberry Pi 2, I decided to pit Raspbian (which uses an ARMv6 userland for compatibility between the original Pi and the Pi 2) against Ubuntu (which is compiled for ARMv7 only). I also benchmarked a Utilite Pro, an ARM system with a faster CPU and a built-in SSD, and a modern Intel server.

  • Raspberry Pi B, 700 MHz 1-core BCM2708 CPU, 512 MiB memory, 16 GB SanDisk SDHC Class 4
  • Raspberry Pi 2 B, 900 MHz 4-core BCM2709 CPU, 1 GiB memory, 32 GB SanDisk Ultra Plus microSDHC Class 10 UHS-1
  • Utilite Pro, 1 GHz 4-core i.MX6 CPU, 2 GiB memory, 32 GB SanDisk U110 SSD
  • ASRock Z97 Pro3, 3.5 GHz 4-core Intel Core i5-4690K, 32 GiB memory, 4x 2TB Seagate ST2000DL003 5900 RPM in MD RAID 10

Raspbian wheezy was tested on both Raspberry Pi models, while Ubuntu trusty was also tested on the Raspberry Pi 2, along with the rest of the systems. All installations were current as of today. The systems were tested with nbench (BYTEmark), OpenSSL and Bonnie++.
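
If you want to reproduce these tests, the invocations looked roughly like the following; the exact flags here are an approximation rather than a transcript of the original runs:

$ ./nbench                       # BYTEmark, run from the nbench build directory
$ openssl speed md5 sha512 whirlpool aes-256-cbc rsa1024 ecdsap256
$ bonnie++ -d ~/bonnie-scratch   # run as a regular user, swap disabled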

Results

This is a hand-picked assortment of test results; for the full raw results, see below.

Test                RPi B       RPi 2       RPi 2       Utilite     i5-4690K
                    Raspbian    Raspbian    Ubuntu      Ubuntu      Ubuntu
Numeric sort        217.2       450.72      421.55      334.63      2,385.1
FP emulation        41.334      70.276      55.108      52.454      795.9
IDEA                694.72      1,308.5     1,573.3     1,315       15,059
md5 1024            37,008.46   62,628.86   69,563.39   80,632.53   670,637.40
aes-256 cbc 1024    11,969.50   18,445.31   17,295.36   20,986.47   124,509.53
sha512 1024         8,491.32    11,838.81   20,718.25   25,803.70   431,647.74
whirlpool 1024      1,584.61    2,949.80    2,747.05    2,687.46    135,009.28
rsa 1024 verify     1,540.3     2,649.6     2,630.5     2,890.8     114,074.5
ecdsa 256 verify    73.2        126.3       138.0       161.1       4,329.6
Block output        7,520       11,028      11,299      48,214      62,762
Block input         13,233      23,015      22,997      125,954     284,914
Random seeks        524.7       1,054       874.6       3,218       444.5

(Numeric sort, FP emulation and IDEA are nbench scores; the md5 through ecdsa rows are OpenSSL "speed" results; Block output, Block input and Random seeks are Bonnie++ results.)

Notes

  • Interestingly, many of the BYTEmark tests on the Pi 2 were faster on Raspbian than on Ubuntu. Keep in mind, though, that these are tests from the 1990s and do not take advantage of modern optimizations (the floating point emulation test being a prime example). Many of the OpenSSL tests performed better on Ubuntu, but not all.
  • Edit: The slower nbench results on Ubuntu appear to be due to a running LSM (Linux Security Module). When Ubuntu is running with AppArmor (the default) or SELinux enabled, it's marginally slower than Raspbian; with LSMs disabled, it's marginally faster than Raspbian. (The Raspbian kernel has no LSM modules compiled in.) I'm keeping these test results as they are because AppArmor is enabled by default, but keep that in mind. (If you want to try it yourself, see the sketch after this list.)
  • Raspbian/Ubuntu aside, virtually all of the tests were faster on the Pi 2 than the original Pi.
  • Bonnie++ tests were roughly the same between Raspbian and Ubuntu on Pi 2, and were decently faster than the original Pi (though in this test an older SDHC card was used for the original Pi, so it's not apples to apples). The SSD on the Utilite blows them away though.
  • All of the CPU tests are single-threaded, and do not take multi-core performance into consideration.
  • This was not a controlled scientific test. I did not run multiple tests on each system and average them together, and in the Intel system's case, it was an active (but low volume) server.
  • All Bonnie++ tests were run with swap disabled and on the boot drive, except on the Intel system, where the boot drive (an SSD) did not have enough space for a full test. (Bonnie++ needs free disk space equal to twice the system's RAM. On the 512 MiB / 1 GiB / 2 GiB systems that's fine, but I didn't have 64 GiB free on the Intel system's boot drive.)
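
If you want to repeat the LSM comparison yourself, the simplest route is to disable AppArmor on the kernel command line and re-run nbench. A rough sketch (the cmdline.txt location depends on how your image lays out the boot partition):

$ cat /sys/module/apparmor/parameters/enabled   # prints Y while AppArmor is active
$ # Append apparmor=0 to the kernel command line (cmdline.txt on the Pi's
$ # boot partition), then reboot and re-run the benchmark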

Continue reading

Raspberry Pi 2 update - (unofficial) Ubuntu 14.04 image available

Download the latest Ubuntu 14.04 Raspberry Pi 2 image

If you downloaded an older image than the current one, you shouldn't need to reinstall, but be sure to review the changelog in the link above.

Note that this blog post originally contained a bunch more information, which has been moved to a dedicated page on wiki.ubuntu.com.

I've closed comments on this blog post. If you are looking for help, please see this post on the raspberrypi.org forums. If you post there, you'll be reaching a wider audience of people (including myself) who can help you. Thanks for all of your comments!


After my last post, I went and ported Sjoerd's Raspberry Pi 2 Debian kernel patchset to Ubuntu's kernel package base (specifically 3.18.0-14.15). The result is an RPi2-compatible 3.18.7-based kernel which not only installs in Ubuntu, but has all the Ubuntu bells and whistles. I also re-ported flash-kernel based on Ubuntu's package, recompiled raspberrypi-firmware-nokernel, created a linux-meta-rpi2 package, and put it all in a PPA.

With that all done, I decided to go ahead and produce a base Ubuntu trusty image. It's 1.75GB uncompressed, so it fits on a 2GB or larger MicroSD card, and it includes a full ubuntu-standard setup. Also included in the zip is a .bmap file; if you are writing the image in Linux, you can use the bmap-tools package to write only the non-zero bytes, saving some time. Otherwise it's the same procedure as for other Raspberry Pi images.
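
For example, writing with bmap-tools looks something like this (the filenames and target device below are placeholders; double-check the device before writing):

# apt-get install bmap-tools
# bmaptool copy --bmap ubuntu-trusty-rpi2.bmap ubuntu-trusty-rpi2.img /dev/mmcblk0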

(PS: If this image becomes popular, I should point out ahead of time: This is an unofficial image and is in no way endorsed by my employer, who happens to be the company who produces Ubuntu. This is a purely personal undertaking.)

Ubuntu 14.04 (Trusty Tahr) on the Raspberry Pi 2

UPDATE: I've created a proper Ubuntu 14.04 image for the Raspberry Pi 2.

My Raspberry Pi 2 arrived yesterday, and I started playing with it today. Unlike the original Raspberry Pi, which had an ARMv6 CPU, the Raspberry Pi 2 uses a Broadcom BCM2836 (ARMv7) CPU, which allows for binary compatibility with many distributions' armhf ports. However, it's still early in the game, and since ARM systems have little standardization, there isn't much available yet. Raspbian works, but its userland still uses ARMv6-optimized binaries. Ubuntu has an early beta of Ubuntu Snappy, but Snappy is a much different environment than "regular" Ubuntu.

I found this post by Sjoerd Simons detailing getting Debian testing (jessie) on the Pi 2, and he did a good job of putting together the needed software, which I used to get a clean working install of Ubuntu trusty on my Pi 2. This is meant as a rough guide, mostly from memory -- I'll let better people eventually take care of producing a user-friendly system image. This procedure should work for trusty, utopic, and vivid, and might work for earlier distributions.
Continue reading

Review: Some Cheap Laptop from Best Buy

Last week I sold my primary laptop, a ThinkPad X220. After nearly three years of constant use, it was still in excellent shape, and was still powerful enough for daily use. Frankly, I was amazed how well it held up. None of the keycaps were worn off, everything was still solid, and none of the plastic corners had chipped. A "normal" laptop would survive less than a year of use with me, and a ThinkPad would get maybe two years. For a laptop to survive three years of me was a true testament to its build quality.

There were several reasons I sold it: Primarily, my company buys me a new laptop for every 3 years of service, and my 3 year anniversary was in January. I'm expecting to buy the ThinkPad X250 as soon as it's released, but that has not yet happened. I would have kept the X220 as a backup laptop, except a friend was looking for a used X220 specifically, so I sold it to him.

I figured I could temporarily fall back on my 9 year old ThinkPad T60 (my first ThinkPad), which had been collecting dust. But after a day or so of use, I realized it was no longer suitable for even temporary use. Firefox would choke on the combination of 1GB of RAM and an ancient graphics card. And the only other laptop I owned was even more ancient, a Compaq Presario V2000z (AMD Turion CPU, 512MB RAM, 10 years old).

I needed a usable laptop to get me through the next few weeks, and something for future occasional use. I did some quick research and settled on Some Cheap Laptop from Best Buy. Some Cheap Laptop from Best Buy was the cheapest available laptop which I could pick up locally and which met a bare minimum of personal requirements:

  • Intel Core i3 Haswell or higher (most of the low-end models were AMD E series, which appear to be equivalent to Intel Atoms)
  • No Chromebooks
  • 4GB or more RAM
  • Under 8lbs

Some Cheap Laptop from Best Buy satisfied these requirements:

  • CPU: Intel Core i3-4030U
  • Memory: 6GB RAM
  • Display: 15.6" 1366x768 TN
  • Storage: 500GB 5400RPM HDD
  • Optical: DVD-RW drive
  • Battery: if you could call it that
  • Weight: 5.05lbs
  • Official designation: HP 15-f019dx
  • Price: $329.99

HP 15-f019dx

Some Cheap Laptop from Best Buy's 15.6" display is massive by my 12.5" tastes, and I don't care for the numpad, but since it's "only" 5.05lbs, it's manageable. (By comparison, the X250 is going to be approximately 3lbs.) The entire surface is a nice matte black, and the touchpad feels comfortable, as far as touchpads go. (I miss the TrackPoint already.) My biggest complaint with the touchpad is that it's too large; my left palm tends to rest slightly on the left side of it while I use my right finger to navigate, resulting in scrolling instead of mouse movement. The island-style keyboard is usable, but I can't say how it compares, since I've never owned a keyboard with island keys before.

The screen is bright (a little too bright for my taste, but the brightness can be dialed down), but suffers from massive color inversion at angles. The color balance itself was way too blue ("Showroom Syndrome", as is way too common in the marketplace), but was easily corrected with a new ICC profile via my Spyder 2.

Some Cheap Laptop from Best Buy came pre-installed with Windows 8.1. Like all new computers these days, it did not come with any physical restore media, but thankfully the bundled program for burning restore DVDs was easy to use. (I don't plan on ever needing Windows on this laptop, but it's always nice to have restore media just in case.) I created the restore media, then nuked the hard drive and installed Ubuntu 14.10 Utopic Unicorn. The installation went without problem, and the installed environment has been completely fine.

Like almost all new laptops these days, the top row of keys default to the media keys (brightness/volume control, etc), with the Fn key toggling to F1 through F12. There is an option in the BIOS to flip the logic, which I did. Unfortunately, the Insert and Print Screen functions also share the same button, with Print Screen being the default, and Insert requiring Fn. The BIOS option does not flip this. Since I use Shift-Insert a lot, I flipped them at the udev HWDB level, by putting this in /etc/udev/hwdb.d/60-keyboard.hwdb:

keyboard:dmi:bvn*:bvr*:bd*:svnHewlett-Packard:pnHP15NotebookPC:*
 KEYBOARD_KEY_90=previoussong
 KEYBOARD_KEY_99=nextsong
 KEYBOARD_KEY_a0=mute
 KEYBOARD_KEY_a2=playpause
 KEYBOARD_KEY_ae=volumedown
 KEYBOARD_KEY_b0=volumeup
 KEYBOARD_KEY_b7=insert # swapped with d2
 KEYBOARD_KEY_c5=pause
 KEYBOARD_KEY_d2=sysrq # swapped with b7

and running:

# udevadm hwdb --update
# udevadm trigger

Overall, performance is decent. Web browsing is nice and fast, and I can keep my usual 50 or so terminals open without a problem. One interesting thing: while the i3-4030U CPU is slower than the X220's i5-2540M, the GPU is actually faster. Minecraft runs marginally better on this laptop!

In conclusion, if you are a price-conscious person waiting for the ThinkPad X250 to be released, but have already sold your X220 and need a temporary everyday laptop quickly, I'd recommend Some Cheap Laptop from Best Buy. Except by the time you've read this, it's probably already been discontinued and replaced with Another Cheap Laptop from Best Buy.

Introducing Unladen, easily scalable object storage

In 2002, I managed a datacenter's IS infrastructure. We had a few dozen servers, and a terrible backup server which backed them up to tape. In theory, anyway; I've never encountered a commercial backup server which has worked well. It got me thinking: "We have these servers, and they all have free space to some extent. I'd like a system which did backups of the servers, split them into chunks, encrypted them, and sent multiple copies of them to the other servers." Nothing ever became of that idea at the time.

Fast forward 12 years. Everything is now Yay Cloud, and many are familiar with object storage, likely through Amazon S3. OpenStack is quickly becoming a large part of many organizations, and OpenStack's object storage component is called Swift. From a client perspective, Swift is a very nice system. The standalone command-line client is decent, and the API is HTTP and very RESTy: "PUT /v1/account/container/object" to upload an object, DELETE to delete it, etc.
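
For example, uploading and deleting an object with curl looks like this (the endpoint, token and names below are placeholders):

$ curl -X PUT -H "X-Auth-Token: $TOKEN" -T photo.jpg \
    https://swift.example.com/v1/AUTH_myaccount/mycontainer/photo.jpg
$ curl -X DELETE -H "X-Auth-Token: $TOKEN" \
    https://swift.example.com/v1/AUTH_myaccount/mycontainer/photo.jpg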

The problem is that I've never been very happy administering it. The server software is finicky, and for the most part it requires a very homogeneous storage infrastructure: you'll want storage nodes with the same amount of storage per node, and if you have multiple disks per node, you're required to work out the optimal weight of each disk relative to the entire cluster. The location of data on the nodes is determined by hash rings. In theory this is nice, since you can work out the location of an object within the cluster based solely on the ring definition file; in practice it means a ring configuration which you must (manually) keep up to date on all the storage/proxy nodes, and any sort of change to the cluster (losing or adding a disk or node, etc.) means multiple total rebalances of the cluster.
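
To give a flavor of what that involves in Swift, ring management goes roughly like this (the addresses, devices and weights below are just examples):

$ swift-ring-builder object.builder create 18 3 1
$ swift-ring-builder object.builder add r1z1-10.0.0.1:6000/sdb1 100
$ swift-ring-builder object.builder add r1z1-10.0.0.2:6000/sdb1 100
$ swift-ring-builder object.builder rebalance
$ # ...then copy the resulting object.ring.gz to every storage and proxy
$ # node, and repeat the add/rebalance dance whenever a disk or node changes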

This made me think of my idea from 2002, and the result is (or will be) Unladen. Unladen is a Swift client-compatible system which takes a much different approach to the backend.

First, let me say that while I've written a lot of code in the last few weeks and things are looking very usable, this is nowhere near production quality. Everything is still very, very pre-alpha, and it seems like every commit I make breaks compatibility with the old schema or some such. While I encourage you to download and play with it, don't trust it with anything more than "Hello World". Oh, and there's no sort of authorization yet, so everyone (including unauthenticated users) has full access.

Unladen is Swift API-compatible, so if you have an application which supports Swift, it'll likely work out of the box with Unladen. But the backend is where things get more interesting. All object data is encrypted, and is sent to the storage nodes that way. Only the trusted catalog nodes have the keys to each object's data, so the storage nodes do not have to be trusted.

In addition to data trust, Unladen also has a concept of availability trust, also known as confidence. Say you have a core set of storage nodes, and give a confidence of 100% to each. You can also have nodes which you do not trust to be as available. You can then define replica targets for certain sets of data. The default is a replica target of 3.0, which means an object's (again, encrypted) data would be stored on three 100% confidence nodes, or six 50% confidence nodes, or a combination of them.

Replica targets and confidence determine how much to store, but weight defines where to store it. And weighting is easy with Unladen. Each storage node can have multiple internal "stores", which are just directories; you just tell Unladen how much each store can hold. (In most cases these will simply be the mount points of disks, and "how much" will be roughly 90% of each disk's filesystem capacity.) When data is placed on a node, the relative sizes of its stores determine the balance between them. Likewise, each node advertises its total storage capacity (or a portion thereof), and balancing across the cluster is done according to the automatically determined weight map of the cluster.

Unladen is designed to be a cheap, easily scalable object storage system. "Cheap" in the sense that you can easily use your infrastructure's existing servers' spare space to create (or add to) an ad-hoc cluster. Or you could build massive dedicated 500TB storage nodes. Or a combination of those two extremes.

Again, this is a very early work in progress. I was a bit hesitant publicly announcing it before it was "ready", but things have been looking good, and I didn't want this to turn into a "90/90" project (90% done, just need to finish the other 90% before I release it, which of course never happens). And the last major project I announced before it was ready was 2ping, which turned out to be a great success.

Note that this project is purely a personal endeavor, and is not supported or endorsed by the OpenStack project or my employer.

© 2015 Ryan Finnie
