
twuewand 2.0 released

You may remember about a year ago when I released twuewand, a TrueRand implementation. TrueRand is a hardware entropy generation technique, implemented in software. In a nutshell, it works by setting an alarm for a few milliseconds in the future, then flipping a bit until the alarm fires. It works because time (your computer's RTC) and work (your computer's CPU) are not linked, so the value of the bit when the alarm comes due is unpredictable.
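The core loop is small enough to sketch in a few lines of Python (twuewand itself is written in Perl; this is just an illustration of the idea, assuming a Unix-like system with SIGALRM support):

```python
import signal

def truerand_bit(interval=0.002):
    """Flip a bit as fast as possible until a timer fires; the final
    value depends on unpredictable drift between RTC and CPU."""
    fired = False
    bit = 0

    def handler(signum, frame):
        nonlocal fired
        fired = True

    old = signal.signal(signal.SIGALRM, handler)
    signal.setitimer(signal.ITIMER_REAL, interval)
    try:
        while not fired:
            bit ^= 1
    finally:
        signal.signal(signal.SIGALRM, old)
    return bit

# Collect raw bits; these are biased and still need debiasing before use.
raw_bits = [truerand_bit() for _ in range(16)]
```

The raw bits are far from uniformly distributed, which is where debiasing (below) comes in.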

TrueRand was invented in 1995, and had mostly been forgotten for the last decade, until I started doing research on it last year. So it was quite a surprise when I was at Dan Kaminsky's talk at DEFCON a few weeks ago, and one of the topics he brought up was TrueRand. (Go check out his presentation slides; I just want to point out that while I'll be focusing on entropy and debiasing here, he goes into a lot of other interesting topics.)

Dan came to roughly the same conclusion as I did, that entropy sources have gotten worse over time, not better, and systems like VMs are almost completely devoid of entropy. Even more worrying, a paper published this year came to the conclusion that approximately 1 out of every 200 public keys on the Internet are easily breakable, not due to weaknesses in the encryption, but by bad entropy being used when generating the keypair. TrueRand may have been forgotten, but it's needed today more than ever. Dan and I talked for awhile after his talk, and went over a few things by email in the week following. twuewand 2.0's new features are influenced by those discussions.

Dan proposed a number of enhancements for TrueRand, mostly centered around other ideas for measuring variances given only a CPU and RTC, but what caught my eye was his idea of enhancing debiasing.

Many forms of random data are random in the technical sense, but are prone to bias. As a theoretical example, take a gun placed in a sturdy mount and pointed at a target not too far away. Most shots will hit the same spot (0), but occasionally one won't (1). So you're left with something like 00000001000001001100000010000010: mostly hits, with random misses. It's random in a technical sense, but the distribution is heavily weighted toward one side.

The simplest method of debiasing is known as Von Neumann debiasing. Bits are processed in pairs, and any pair that is both 0s or both 1s is simply thrown out. Of the pairs that are left, {0,1} becomes 0 and {1,0} becomes 1. So in the example above, the Von Neumann debiased output would be 0011. The data is now distributed better, but as you can tell, a lot was lost in the process. This is an extreme example since the data was heavily biased to begin with, but even with data that has little bias, you still lose at least 50% of the bits (I've found 70-75% in real-world twuewand usage).
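The pairing rule is simple enough to express directly (a Python sketch for illustration; twuewand itself is Perl):

```python
def von_neumann(bits):
    """Von Neumann debiasing: process bits in pairs, discard 00 and
    11, map {0,1} -> 0 and {1,0} -> 1 (i.e. keep the first bit of
    each surviving pair)."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        if bits[i] != bits[i + 1]:
            out.append(bits[i])
    return "".join(out)

# The heavily biased example stream from above:
print(von_neumann("00000001000001001100000010000010"))  # -> 0011
```

Note how 32 input bits shrink to 4 output bits: the worse the bias, the more pairs match and get discarded.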

Dan thought, "Hmm, that's an awful lot of data simply being thrown out. We can't use the discarded data in the direct output, but perhaps we can use it to better (de-)influence the final output." He came up with a method he called modified Von Neumann, which I refer to in twuewand as Kaminsky debiasing.

The incoming bit stream is still run through Von Neumann, and put into an output buffer. However, all bits (whether they pass Von Neumann or not) are fed to a SHA256 stream. Occasionally (after the input stream is finished, or a sufficient number of bytes are put into the output buffer), the SHA256 hash is computed[1], and used as a 256-bit key for AES-256-CBC encrypting the output buffer. This way, only the bits which pass Von Neumann influence the output directly, but all bits help indirectly influence the output as well.
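A rough Python sketch of that structure (illustration only: Python's standard library has no AES, so this substitutes a SHA-256-derived keystream for the AES-256-CBC encryption step that twuewand performs with Crypt::Rijndael):

```python
import hashlib

def kaminsky_debias(bits):
    """All incoming bits feed a SHA-256 hash; only the Von Neumann
    survivors enter the output buffer, which is then whitened with
    keystream material derived from that hash."""
    h = hashlib.sha256()
    survivors = []
    for i in range(0, len(bits) - 1, 2):
        pair = bits[i:i + 2]
        h.update(pair.encode())             # every bit influences the key
        if pair[0] != pair[1]:
            survivors.append(int(pair[0]))  # Von Neumann survivors only
    key = h.digest()  # twuewand uses this as the AES-256-CBC key;
    # simplified stand-in for the encryption: XOR against a
    # SHA-256-derived keystream.
    stream = []
    counter = 0
    while len(stream) < len(survivors):
        block = hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        stream.extend((byte >> s) & 1 for byte in block for s in range(8))
        counter += 1
    return "".join(str(b ^ k) for b, k in zip(survivors, stream))
```

The output is the same length as plain Von Neumann output, but every input bit, discarded or not, has influenced it.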

So twuewand now supports Kaminsky debiasing, and will use it by default if Digest::SHA and Crypt::Rijndael are installed.


Now, I want to clear up a mistake I made in my last post. I said that feeding twuewand output to /dev/urandom on Linux systems influences the primary pool, increasing entropy. First, you can actually write to either /dev/random or /dev/urandom; the effect is the same. But more importantly, entropy is NOT increased by writing to /dev/[u]random. It's merely "stirring the pot". If your system is out of entropy and you are blocking on /dev/random, no amount of writing to /dev/[u]random will unblock it. (Directly, that is. If you're banging on your local keyboard to do this, you're slowly increasing entropy, but you could accomplish the same thing in a text editor, or writing to /dev/null.)

Unfortunately, there is no way to increase entropy in the primary pool via standard system command line tools or character devices. However, there is a Linux ioctl, RNDADDENTROPY, which does this. So I wrote a small C wrapper, which takes STDIN and feeds it to the ioctl. This requires root of course. The utility is called, boringly enough, rndaddentropy, and is distributed with the twuewand tarball. It will be built by `make` on Linux systems.
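The equivalent logic, sketched in Python rather than C (the RNDADDENTROPY value assumes x86/x86-64 Linux; `pack_rand_pool_info` and `add_entropy` are illustrative names, not part of the shipped utility):

```python
import fcntl
import struct

# Linux: _IOW('R', 0x03, int[2]); this value holds on x86/x86-64.
RNDADDENTROPY = 0x40085203

def pack_rand_pool_info(data, bits_per_byte=8):
    """Build a struct rand_pool_info: entropy_count (in bits),
    buf_size (in bytes), followed by the entropy buffer itself."""
    return struct.pack("ii", len(data) * bits_per_byte, len(data)) + data

def add_entropy(data, device="/dev/random"):
    """Credit `data` to the kernel's primary entropy pool.
    Requires root (CAP_SYS_ADMIN)."""
    with open(device, "wb") as fd:
        fcntl.ioctl(fd, RNDADDENTROPY, pack_rand_pool_info(data))
```

Unlike a plain write to /dev/random, this ioctl actually credits the entropy count, which is exactly why it is root-only.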

I must point out that this utility gives you an excellent way to shoot yourself in the foot. The lack of command line tools to directly access the primary pool is most likely by design, since doing so bypasses most of the in-kernel filters for random input. Injecting, say, the contents of /bin/ls into the primary pool would be a great way to become one of those 1-in-200 statistics. Only use this utility to feed high-quality entropy (such as that produced by twuewand, or by a hardware device like an Entropy Key).


Dan Kaminsky will be publishing software in the future called DakaRand, which does much of what twuewand currently does, but incorporates some of his other ideas. He provided me a work-in-progress copy, which looks very interesting, but it is not available to the public yet for a number of reasons. Be on the lookout for that when it is released.

Update (2012-08-15): Dan released DakaRand 1.0 a few hours after I made this post. Go check it out.


[1] In Dan's proposal, after the SHA256 hash is computed, it would then be run through Scrypt's hashing algorithm. This is not done in twuewand for two reasons. First, Crypt::Scrypt does not currently provide a low-level method to just do hashing; instead it wants to create a full digest which is unsuitable for this purpose. Second, Dan has been debating whether this step is necessary or desirable at all, and Scrypt has "undefined effects on embedded hardware".

IPv6 autoconfiguration in a nutshell

[ This was written at 2AM, and I tended to ramble. Hopefully I'll come back tomorrow and clean it up, add links to RFCs and Wikipedia entries where relevant, etc. ]

I've been explaining this a number of times in the last few days, so I figured I'd get this down in blog form to avoid repeating myself (even more). IPv6 is much like IPv4 in most respects, but its autoconfiguration can seem rather alien to people already familiar with IPv4 (that is, DHCP).

On an IPv6 network, the default router runs a service called the Router Advertisement daemon. Its primary purpose is to send out packets saying "hey, I'm the default router for the 2001:db8:1234:abcd::/64 network".

It actually doesn't matter what the router's IPv6 address is on 2001:db8:1234:abcd::/64; indeed, it (usually) doesn't even tell you what it is. Instead, it sends out RAs using its "link-local" fe80:: address. Link-local addresses are specific to the LAN segment you're on, and every device is required to have one (derived from the MAC address). They sit in a sort of middle layer, between Layer 2 and Layer 3. The important thing is that you can talk to other devices if you know their link-local fe80:: address (and you're both on the same segment), and you can use such an address as a routing next-hop.

So it really doesn't matter if the router's address is 2001:db8:1234:abcd::1/64 or 2001:db8:1234:abcd::dead:beef/64 or if it doesn't even have an address; if you're doing autoconfiguration, your OS will most likely be routing to it based on its link-local address. For example, my home router's link-local address on the LAN side is fe80::6a05:caff:fe02:4e4d/64, which it uses to send RAs.

By default the router sends an RA every minute or so, so the client device could theoretically set up autoconfiguration without ever sending a packet to the router. But the router will also respond to Router Solicitation requests multicast to the network. These requests are answered (via multicast) with the exact same RA packet the router would have sent on a timer, just on demand this time.

Now, the RA lets the client device know what the LAN's IPv6 network space is. It also sends a series of flags which can make autoconfiguration possible.

The most relevant is the Autonomous address-configuration ("A") flag. This flag provides stateless autoconfiguration, and essentially says "you are permitted to construct your own IP address in the network I mentioned". The client will then use the MAC address of the interface as the basis for constructing an IP address. In the above example, 2001:db8:1234:abcd::/64 was the network. Say my interface's MAC address is 00:19:d2:43:c0:c4. Using the recommended MAC-munging method (Modified EUI-64), this becomes 2001:db8:1234:abcd:219:d2ff:fe43:c0c4/64.
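The whole derivation fits in a few lines of Python using the standard ipaddress module (`slaac_address` is an illustrative name, not a real API):

```python
import ipaddress

def slaac_address(prefix, mac):
    """Construct a Modified EUI-64 interface identifier from a MAC
    address and combine it with a /64 prefix: flip the universal/local
    bit of the first octet, then insert ff:fe in the middle."""
    octets = bytearray(int(b, 16) for b in mac.split(":"))
    octets[0] ^= 0x02  # flip the universal/local bit
    eui64 = octets[:3] + bytearray([0xFF, 0xFE]) + octets[3:]
    net = ipaddress.IPv6Network(prefix)
    iid = int.from_bytes(eui64, "big")
    return ipaddress.IPv6Address(int(net.network_address) | iid)

print(slaac_address("2001:db8:1234:abcd::/64", "00:19:d2:43:c0:c4"))
# -> 2001:db8:1234:abcd:219:d2ff:fe43:c0c4
print(slaac_address("fe80::/64", "00:19:d2:43:c0:c4"))
# -> fe80::219:d2ff:fe43:c0c4
```

The same interface identifier with the fe80::/64 prefix yields the link-local address mentioned earlier.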

The client then uses Duplicate Address Detection (DAD) to make sure nobody on the network is already using this address. (DAD is also used to make sure the link-local address mentioned above is also unique to the network segment.)

So that's it! The client now has enough information to construct IP information, and in theory it never even had to send a packet. (I say "in theory" because DAD requires sending out a multicast packet. Plus, most OSes send out a Router Solicitation to trigger the Router Advertisement.) In this example, the router, fe80::6a05:caff:fe02:4e4d, sent out an RA stating the network is 2001:db8:1234:abcd::/64, and the "A" flag was set. The client used this information to construct the IP 2001:db8:1234:abcd:219:d2ff:fe43:c0c4/64, which it gave to itself, with the default route being fe80::6a05:caff:fe02:4e4d.

Now what about DNS? In a dual-stack environment, the client probably already has an IPv4 DNS address by now, and can certainly use that to look up IPv6 (AAAA) records. But for pure IPv6, there are two methods for getting DNS servers to the client.

First up is RDNSS, a relatively recent extension to the RA standard. It's simply an extension which gives one or more DNS servers in the RA packet. Simple, but not often used today.

The RA also has a "Managed address configuration" ("M") flag. This simply means, "there is a DHCPv6 server on this network. You may go look for it for more granular control." Also known as stateful autoconfiguration.

DHCPv6 is similar to DHCPv4, but has a number of differences. I'm not going to go into it in detail, but you can do pretty much everything you can do with DHCPv4: pooled address allocations, individual address allocation, DNS servers, domain search lists, etc. It's worth noting that if you have both the "A" and "M" flags in the RA set (and you're running a DHCPv6 server), the client will use both of these methods. That is to say, it'll construct an IP address using stateless means, then request an IP address from the DHCPv6 server, and will bind both of these addresses to the interface.

Side note: IPv6 has no concept of network or broadcast addresses. In this post's example, 2001:db8:1234:abcd::/64 was the network, but 2001:db8:1234:abcd:: can also be a device. However, most networks I've seen continue the old tradition of giving the .1 (or in IPv6's case, :1) address to the default router.

IPv6 in Apple iOS

I solved my problem from a few months ago with my iPhone and iPad not being able to get IPv6 info. It was actually a combination of two problems.

Last year I bought a Netgear N750DB router as an access point to replace my Linksys WRT54GL. Turns out the N750DB will send multicast traffic from a wireless device back to itself. This breaks IPv6 Duplicate Address Detection (DAD), and I was also seeing problems on my Ubuntu laptop. So I went back to the WRT54GL, and while stateless autoconfiguration started working on the laptop again, it was still not working on the iPhone and iPad.

The second part was that the iOS devices needed to be rebooted. No idea why, but after a hard reboot, they started picking up RAs (and DHCPv6, which I had configured in the meantime). I've since bought a Linksys E4200 v1 to replace the N750DB, and everything is now working great.

Note that it's actually hard to tell when iOS has an IPv6 address. There are literally no IPv6 options in iOS, and the IPv6 address itself is not displayed anywhere. The only hint is in the network config page: if you have a DHCPv6 server or are giving out RDNSS via RAs, the IPv6 nameserver and/or domain will show up in "DNS" and "Search Domains", though they will likely be cut off. The only OS-level way to get meaningful IPv6 address/route information is a utility app called IPv6 Toolkit ($2, but worth it in this case). You can also go to a web site like vsix.us (which of course I wrote) to see what your IP address is.

Note that even after fixing my AP and rebooting my devices, when I went to vsix.us in Safari, it was still preferring v4 over v6. I was able to solve that by clearing all cached info in Safari.

apt-get dist-upgrade

This weekend, I upgraded the OS on my main colo box, from 32-bit Debian lenny to 64-bit Ubuntu 12.04 precise LTS. This was an installation which I've had for over 8 years. It was originally installed in early 2004, when Debian sarge was in its long beta freeze. It had been dist-upgraded over the years, from sarge to etch to lenny, and was quite well maintained. And over the years, the hardware has been upgraded, from a Pentium 4 with 512MB memory and 80GB usable disk, to a Pentium D with 2GB memory and 160GB disk, to a Core i7 with 9GB memory[1] and 320GB disk.

However, it was still an 8-year-old installation, and I wanted to get to a 64-bit OS. Debian lenny's support was dropped in February, and Ubuntu precise seemed like a good target, so I made the decision late last year to replace the OS.

I bought two 1.5TB WD Black SATA drives and installed precise on them a few weeks ago, in a RAID1 set. My colo box in San Jose has a 3ware 9650SE 2-port SATA 2 card, and I had an identical card available at home. Thankfully 3ware cards store RAID configuration on the disks themselves, so it was just a matter of installing the OS on the surrogate server at home and shipping the disks to San Jose to be physically installed. I also bumped the memory up to its maximum of 24GB while the case was open (memory is insanely cheap right now).

The precise install was a minimal install, with just networking and SSH configured. I then took a backup of the lenny install and put it in /srv/oldos. The idea was once the datacenter swapped out the disks and powered it on, I'd go in and, for example, "chroot /srv/oldos /etc/init.d/apache2 start" for all the essential services. I could then migrate the services into the new install one at a time.

With a few small bumps[2], that strategy worked well. However, a LOT of stuff is on this box, and it took me most of the weekend to get everything right. Here's a sample of what this server is responsible for:

  • HTTP/HTTPS (Apache, a dozen or so web sites, plus MySQL/PostgreSQL)
  • SMTP (Postfix, plus Mailman, Dovecot IMAPS)
  • DNS (BIND9)
  • XMPP (ejabberd)
  • BitTorrent (bittornado tracker, torrent clients for Finnix)
  • Minecraft
  • Mumble/Murmur
  • rsyncd (Finnix mirrors)
  • 2ping listener
  • The all-important TCPMUX server
  • Various AI, TTS, text manipulation and audio utilities for X11R5 and the Cybersauce Broadcasting Corporation

As of Monday, everything is now running off the new precise install. I installed this server over LVM, and have only allocated about 500GB of the 1.5TB usable space. There is a relatively small 120GB root partition, a 160GB /home partition, and a 250GB /srv partition. The idea is that not much more than the OS should live on the root partition (and it should never need more than 120GB), /home is self-explanatory, and all projects (web sites, etc.) go in /srv. If /home or /srv needs to be expanded, it can be done remotely relatively easily by unmounting it, expanding the logical volume using LVM, and running resize2fs. I've also left most of the space unallocated in case I want to run KVM guests some day, though I don't have an immediate need.


[1] Yes, 9GB memory. Triple channel, so it was 3x2GB + 3x1GB modules.
[2] I had two significant problems with the migrations:

  1. The problem with picking a custom third-party Festival voice for the voice of the Cybersauce Broadcasting Corporation is that it looks like it has not been updated in years, and is specific to only a handful of Festival versions. So now I have a minimal, yet 300MB chroot running (64-bit) Debian lenny for the sole purpose of providing Festival's text2wave utility with the custom voice.
  2. I'm using two packages[3] not available in Debian/Ubuntu: a Perl module (AI::MegaHAL) and a PHP module (crack), both of which are published at my PPA. Both are sufficiently ancient and have not been updated upstream for years. Both compiled fine in 64-bit precise and at first appeared to run fine, but would then crash. Turns out both were not 64-bit safe, and required patching. Thankfully this is easy to roll out with PPAs.

[3] Okay, three, technically. I've packaged the tw_cli utility for the 3ware RAID card, but cannot distribute the package through my PPA because it's a non-free binary.

2ping 2.0 released

2ping 2.0 has been released today. User-visible changes are minor, but behind the scenes, a major update to the protocol specification has been implemented, justifying the major version bump:

  • Updated to support 2ping protocol 2.0
    • Protocol 1.0 and 2.0 are backwards and forwards compatible with each other
    • Added support for extended segments
    • Added extended segment support for program version and notice text
    • Changed default minimum packet size from 64 to 128 bytes
  • Added peer reply packet size matching support, turned on by default
  • Added extra error output for socket errors (such as hostname not found)
  • Added extra version support for downstream distributions
  • Removed generation of 2ping6 symlinks at "make all" time (symlinks are still generated during "make install" in the destination tree)

2ping is a bi-directional ping utility. It uses 3-way pings (akin to TCP SYN, SYN/ACK, ACK) and after-the-fact state comparison between a 2ping listener and a 2ping client to determine which direction packet loss occurs.
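As a toy model (not the actual wire protocol), the state comparison boils down to checking which legs of the exchange completed:

```python
def classify_loss(ping_reached_listener, reply_reached_client):
    """Toy model of 2ping's after-the-fact state comparison: once the
    client and listener compare notes on a failed exchange, the leg
    that dropped the packet is known."""
    if not ping_reached_listener:
        return "outbound"   # the client's ping never arrived
    if not reply_reached_client:
        return "inbound"    # the listener's reply was lost on the way back
    return None             # the exchange completed; no loss

print(classify_loss(True, False))   # -> inbound
```

A plain one-way ping can only tell you *that* a reply never came back; the extra leg and shared state are what let 2ping say *which direction* the loss occurred in.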

© 2014 Ryan Finnie
