Ryan Finnie

Velociraptor Aerospace Dynamics: Evolution of a nuclear dinosaur logo

Velociraptor Aerospace Dynamics logo (current)

In 2011, I started Velociraptor Aerospace Dynamics. The reasoning was practical: In 2010, I had made over $1,000 from Google ads, and once you hit $800 in miscellaneous net revenue, the IRS basically considers you a small business, whether you like it or not. At that point, it makes sense to go the full route, forming an LLC (a perk of living in Nevada is cheap, easy LLCs), tracking expenses against revenue, etc.

The name was more whimsical than practical. Technically you’re not required to have a DBA (Doing Business As) name; I could have just become Ryan Finnie LLC. However, as most of my ad revenue came from the parody site velociraptors.info, I figured Velociraptor Aerospace Dynamics was topical yet sufficiently irreverent. Another myth is that I chose the name in hopes of attracting defense contracts. I am not allowed to divulge whether that is true or not.

(Google ad revenue has since tanked. I recently finished my 2016 taxes, and noticed I had not even received a check from Google last year, as I did not reach the $100 minimum payout.)

While on a trip to Boston in April 2011, I created what would become the first VAD logo. I don’t consider myself an artist, but I knew what I wanted it to look like: a raptor riding a falling nuclear bomb while waving a hat, à la Slim Pickens’s Major Kong in Dr. Strangelove. Using my ThinkPad’s TrackPoint to draw in Inkscape, I sketched it out.

VAD logo (really old)

Predictably, it wasn’t that great. A three-year-old with crayons could have done better.

VAD logo (old)

A few weeks later, I found my old Wacom tablet and gave it another go, with much better success. I was able to clean up and simplify the raptor, and it ended up being almost exactly what I wanted. The bomb looked okay, but was still pretty rough, and my knowledge of Inkscape wasn’t good enough to clean it up sufficiently. When using the logo in media, I’d usually export it to PNG, bring it into Photoshop/Gimp and add color manually.

VAD logo (not quite current)

In 2014, I did a major revision of the logo. The bomb was completely recreated digitally; that is to say, one node at a time, and it looks much more clip-arty (which was the original intent). The raptor was tweaked slightly (I believe just filling in the small bit in the hat), and color was added directly to the SVG as fill layers. This is the version you all know and love.

But over the years, there were a few small issues I wanted to correct:

  • While it’s not visible at lower resolutions, zooming in reveals many small jagged angles on the raptor. Vector graphics are supposed to be infinitely scalable, precisely so you avoid the jagged edges (“bitmapping”) of scaled-up raster images, yet ironically the excess detail here produces much the same effect. Basically, there are too many nodes on the raptor; too much detail.
  • The raptor’s body outline width is uneven, and slightly thicker than the bomb’s outline overall.
  • The rear fin of the bomb was created as a perfect rectangle, which isn’t right as the bomb itself is skewed toward the viewer a bit. The rear fin itself should also be skewed slightly.
  • The raptor’s tail is kinked slightly at the end.
  • The raptor’s head is perfectly flat right at the top.
  • The hat is not hat-like enough. It’s supposed to be a bowler hat, but doesn’t always look like that. (In fact, at low enough resolution, it sort of looks like the raptor is flipping you off.)

All of these issues are very minor, and most are not even visible or noticeable in most media. But I knew.

A few months ago, I imported the logo (along with the previous two attempts, for historical interest) into a Git repository. SVG is basically code, so changes are easy to track, and Git was a logical fit. And recently I spent a day and did a lot of work under the hood.

Velociraptor Aerospace Dynamics logo (current)

The raptor’s been completely redone. Before, the body outline was a closed, filled path; now it’s a single stroked line with a uniform width, equal to the bomb’s outline. (The legs and arm remain as filled paths, as their widths need to vary slightly.) The bomb’s rear fin has been skewed to better match the perspective of the bomb as a whole. All of the other concerns have been addressed, though in my opinion the hat is still not hat-like enough. However, there’s only so much you can do without scaling up the size of the hat, and it’s indisputable that a raptor waving a small hat is funnier than a raptor waving a large hat.

Overall I’m very happy with the work. Granted, most people would not notice the differences unless pointed out, and it’s not significant enough to, say, throw out my stockpile of stickers and order them again with the new design. But at Velociraptor Aerospace Dynamics, our logos deserve nothing but the best.

As long as it’s the best clip art of a cartoon dinosaur riding a bomb.

Additional fun facts:

  • The bomb design is based on Fat Man, and its dimensions and perspective closely match the main photo on Wikipedia, though my version was completely freehand.
  • The raptor herself was loosely based on Randall Munroe’s raptors in xkcd. There’s even a tracing bitmap layer in one of my old SVGs, though the final product ended up being almost nothing like the xkcd raptors.
  • The VAD raptor does have a name. Her name is Ada, short for Adaptor. Adaptor the raptor.
  • I cannot look at the VAD logo for more than about a minute before I start giggling.

Home sweet home: home network wiring

Keystone 4 port wall plate

These days, the average person’s “home network” is a Wi-Fi router (provided directly by the cable company, as I see from most of the APs within range), with a laptop, a smartphone, and maybe a few “things”. But back in the day, when men were men, women were women and networks were classful, we did things with wires.

In the previous part of this series, I mentioned that one of my longtime goals was to have Decora light switches, something easily obtainable once I had my own house (literally done on the first day). Another, even older goal was to have a whole-house Ethernet network. Back in the 90s, I was jealous when two of my friends rented a house and were able to string Cat5 everywhere. Throughout the various apartments I lived in, the network was usually a wired network in the home office, sometimes with an AP in client bridge mode to connect the living room to the office. In my last apartment, I was lucky enough that the wall behind the TV in the living room abutted the office, so I drilled a small hole and ran a cable through it, and patched it up again when I moved out.

But when you throw over a quarter of a million dollars toward home ownership, you get an actual house! You can cut holes into walls without upsetting the landlord! Exciting times!

My first significant home improvement project was about a week after signing: wiring the house for Cat5e. Each room already had RJ-11 (telephone) and coax running to it, but in almost all of the rooms, the RJ-11 was a baseboard “biscuit block”, and the coax was just drilled through the edge of the floor. The RJ-11 all ran through the crawlspace and terminated in a jumble of wires at the NID (telco demarcation point) on the side of the house.

I ripped all of that out and spent two days crawling through the crawlspace, running Cat5e and repositioning the coax. Each room had a new hole cut into it, with a low voltage mounting bracket and a 4-port keystone wall plate. One of the keystone jacks would be for the coax, and the remaining three for Cat5e.

Keystone 4U distribution rack

As luck would have it, the laundry room had the perfect place for mounting a distribution panel. I mounted a 4U wall mount bracket, a shelf, a 32 port keystone patch panel and a 16 port 1U managed switch, and ran all of the Cat5e to it.

When working in an under-house crawlspace, having two spools of cable really helps. Three would have been even better since each room was getting three feeds, but I couldn’t personally justify that. Two-way radios also help for communication between the crawlspace and the house to coordinate where to drill and when to feed cable. (It didn’t help that the person helping me was legally deaf. True story.)

In addition to the runs to the rooms, I ran two feeds from the patch panel directly back to the NID, but those are only attached on the patch panel side. I have no current need for telco services (internet is cable, and home telephone is optional in my case, though I do have VoIP for the novelty of it), but if the need arises, it will be easy to handle: all I would need to do is attach one of the pairs in one of the feeds at the NID, and patch it in at the distribution panel. This configuration could also allow for easy FTTP (fiber to the premises) integration, if that ever becomes an option in my area.

The coax still goes directly to the cable company’s demarcation box on the side of the house, as that’s fine management-wise. The cable company’s box is large enough to comfortably terminate 5 coax feeds, and you don’t need to reconfigure the layout as often as with an Ethernet network.

Sometimes people ask why I went with Cat5e in 2014, as opposed to Cat6a or Cat7 for 10Gbps Ethernet. Basically, it’s expensive, more expensive than I could justify. While there’s something to be said for future-proofing, the cable and jacks were (and still are) many times more expensive than their Cat5e equivalents. But if the need ever arises in the future, the holes are drilled and cut, the physical mounts are all mounted and everything is keystone, so it wouldn’t be too difficult to replace the wiring.

The actual logical network is far less interesting IMHO, but worth mentioning. Internet comes in via cable to a cable modem, then to what I call the Omniserver: an Ubuntu PC with 3 GigE ports, an i5-4690K, 32GiB RAM, a boot SSD and 4x 2TB drives in RAID10. This system is the current evolution of an effort to centralize what used to be a half-dozen machines. It’s the router/firewall and file server, and hosts a handful of VMs for development and testing, plus a single Windows VM (which literally just runs Quicken).

One Ethernet line comes in from the cable modem, and two go out to the office switch. Both the office and distribution rack switches are managed (a ZyXEL GS1900-24E and GS1900-16, respectively), and I have plenty of ports, so I may as well take advantage of them with bonding, even if I don’t strictly need it. The link between the two switches is also dual bonded.
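For the record, the bond setup is nothing exotic. Here’s a minimal sketch of what a dual-port bond looks like on an Ubuntu server of that era using the ifenslave package; the interface names, addressing and LACP mode are illustrative assumptions, not my actual config:

```
# /etc/network/interfaces (fragment)
# Assumes the ifenslave package is installed; interface names are examples.
auto bond0
iface bond0 inet static
    address 192.168.1.1
    netmask 255.255.255.0
    bond-slaves enp3s0 enp4s0
    bond-mode 802.3ad   # LACP; the switch ports must be configured to match
    bond-miimon 100     # link monitoring interval, in ms
```

Managed switches are what make 802.3ad (LACP) possible here; with unmanaged switches you’d be limited to bonding modes that don’t need switch cooperation.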

Wireless comes in the form of an ASUS RT-AC87U 802.11ac access point. Interestingly, the case has a label reading “teaming port” over the first and second switch ports, but the internal switch chipset doesn’t physically support bonding, and teaming isn’t mentioned in any of the advertised specs or documentation. I’m guessing it was an intended feature during development but was cut close to release.

Home sweet home: light bulbs

Once again, I have grand plans for writing about something – in this case home automation and home improvements (as I write this, the Roomba is dutifully cleaning the living room and scaring the cat) – but if I’m not careful I keep trying to write books.

So let’s just talk about light bulbs.

I became a homeowner in October 2014. My house was more or less move-in ready, and while there wasn’t a lot I needed to do[0], there was a lot I wanted to do. Literally the first thing I did after getting the keys was go to Home Depot and buy a bunch of Decora light switches, and that evening I went through the house and replaced all the switches. I’ve always liked the look of Decora switches, but had never lived in an apartment which had them installed. It was a good first project, and much easier to do when the house was literally empty.

The house features recessed lighting in many places, with about 15 BR30 pot lights in various parts of the house. For example, the kitchen has 7 pot lights (6 spaced evenly above the kitchen, and one directly above the sink). Several of them were burned out, and I wanted to replace them all with more energy efficient bulbs, as the kitchen alone would consume 455 watts with incandescent bulbs.

October 2014 wasn’t that long ago, but a lot has changed since then. Back then, LED bulbs were available, but were expensive and of varying quality, and LED BR30 spotlights just weren’t available. So I went with 15 watt BR30 CFL bulbs at $5 each. Good price, but the trade-off was quality. Their 2700K color temperature was consistent, but only once they warmed up. Depending on several factors, sometimes they would start out at nearly full color and brightness, but most often it would take about a minute to warm up, starting at a red hue and brightening.

The rest of the house (lamps, laundry room, etc) used my existing standard A19 CFL bulbs, which I had taken with me between apartments. The rule of thumb for CFLs was a lifetime of about 7 years, but I’d had them for well over a decade and had replaced maybe two over the years. They too had a warm-up time, but it was much less drastic than the BR30 floodlights’.

The second thing I did was replace the lighting in two of the bedrooms. These bedrooms have vaulted ceilings (actually the entire house does), with alcoves a few feet high above the closets, but had no permanent lighting. At some point in the house’s life, someone had the great idea of mounting lights up there: cheap metal fluorescent tube fixtures meant for permanent industrial installation, left floating loose on the bottom of the alcoves, with bare Romex leading into a drilled hole.

Besides the bad idea of using buzzing 4000K fluorescent tubes in bedrooms, they were badly mounted and unsafe. I replaced them with permanently mounted outlet boxes, and for the lighting itself I used two adjustable floor lamp spotlights per bedroom, focused up at the middle of the ceiling. But since they’re now standard outlets, they can be anything.

Things stayed like that for the next two years. But about two months ago, I replaced the lights in the garage. Originally they were your standard hanging fluorescent shop lights: two ballast hoods, each with two 4 foot T8 fluorescent tubes. (I bad-mouthed fluorescent tubes in the previous paragraph, but they’re fine for a garage.) But one of the tubes had burned out, and the ballast was failing on the other (it would buzz horribly, even by fluorescent tube standards, and would flicker for minutes until it warmed up).

I replaced them with drop-in LED units which are meant to replicate the look of standard fluorescent tube hoods. I actually replaced each of the original hoods with two LED hoods, and each LED hood is about twice as bright as the fluorescent hood it replaced, so the garage is now about 4 times as bright, at about 35% energy savings.

While buying these hoods, I noticed they now have LED BR30 floodlights available, at decent cost too: $5 each (in quantities of 6), same as the BR30 CFLs used to be. This started a bit of a snowball effect, and within the next month, I had replaced all of the lights in my home with LEDs. The recessed lighting was the first to be replaced, obviously. Instant on and full brightness, and at 10.5 watts each versus 15 for the CFLs (or 65 for incandescents).

I found a sale on Cree A19 bulbs at $1.50 each, so most of the rest of the house got those. The hallway bathroom had 4 incandescent globe lights which were meant to look good on their own, so I found LED globes which have a very nice looking pattern in the middle.

My workbench in the garage has a 2 foot fluorescent enclosure which I like, so I ended up using a retrofit LED tube. This required rewiring the enclosure to remove the ballast and convert it to direct AC drive, but it was worth it.

The security motion light on the outside of the garage was one of those dual floodlights you’ve seen everywhere, but each bulb was 100 watts. I replaced it with an all-in-one unit which puts out much more light, is 5000K[1], and draws only 25 watts total. And it looks like Geordi La Forge’s VISOR from Star Trek TNG.

I even saved the info from all of these, and compiled them in a spreadsheet (geek!): brand/model, form factor, color temperature, whether it’s dimmable (most LED bulbs are now, but compatibility with dimmers is spotty), lumens, wattage, and equivalent replacement wattage. As of today, I have 45 lights, putting out a total of 42,460 lumens and consuming 564.5 watts for an overall ratio of 75 lumens per watt. These 45 lights would consume 2,708 watts if they were not LED, resulting in a theoretical energy savings of 79%.
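The headline numbers are simple arithmetic on the spreadsheet’s totals; a quick sanity check (using the totals quoted above, not the per-bulb breakdown, which isn’t in this post):

```python
# Totals from the lighting spreadsheet.
total_lumens = 42460
led_watts = 564.5        # actual consumption of all 45 lights
equivalent_watts = 2708  # what the non-LED equivalents would draw

efficacy = total_lumens / led_watts         # lumens per watt
savings = 1 - led_watts / equivalent_watts  # fraction of energy saved

print(f"{efficacy:.0f} lm/W, {savings:.0%} saved")  # 75 lm/W, 79% saved
```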

[0] The biggest problem with the house is it’s 25 years old, and the roof’s (original) shingles are rated for 20 years. They’re almost bare and tabs tend to break off whenever it’s windy (which in northern Nevada is all throughout spring and fall), but luckily the underlay is in great shape and water-tight. So while it’s something which was disclosed during the sale and I know it’s a problem, I’ve had a few years to put it off. Possibly I’ll get the roof replaced this year.

[1] All of my internal lighting is 2700K, the standard “warm” temperature. The garage is 4000K to replicate standard fluorescent tubes, but the security light is 5000K. Higher color temperature is better for security lights, as it’s easier to pick out features. Those typical orange street lights (sodium vapor HPS) you see everywhere are so ubiquitous because they are among the most efficient lighting available in terms of lumens per watt (even better than LED). But police officers hate them because it’s hard to make out people under that light.

A brave new world of blogging

About a month ago, I converted my blogs over to Jekyll. (I’ve also made the leap and converted all of my web sites over to HTTPS, and upgraded my main colo box from Ubuntu 14.04 to 16.04, taking the opportunity to retire a ton of old PHP code during the upgrade from PHP 5 to PHP 7. It’s been a busy few weeks.)

Part of the process of converting the blogs to Jekyll was storing the site data and posts in Git repositories. This was mostly for my own benefit, and I certainly could have kept the repositories private, but instead chose to push them to GitHub (finnie.org, blog.finnix.org) to serve as examples for others who may wish to do the same thing.

At the bottom of each page, I do have a small “Git source” link to the corresponding page on GitHub. I didn’t expect anyone would really notice, and I certainly did not expect anyone to make a pull request against the repositories. So I was surprised when I saw this pull request, by someone who fixed the formatting on one of my imported posts.

Unexpected, but certainly welcome! So feel free to open Issues or PRs against those repositories, but I would discourage you from actually writing new posts and asking for them to be published via a PR.

As for writing posts, editing was an unexpected complication. While I spend most of my day in a terminal editor[0] which is fine for writing code and the occasional documentation, I found I wasn’t comfortable doing free-form writing there. I’ve simply been used to writing blog posts in a web browser for over 15 years (LiveJournal, then Wordpress), with instant spell checking and a relatively quick preview available. While I’ve got a few things I want to write about, I found myself hesitating with the actual writing because of the editor situation.

I went looking for an editor I could use just for Markdown blog post writing, and eventually came across ReText. It’s primarily a Markdown editor, which is perfect for this purpose: it has built-in spell checking, Markdown highlighting, a dropdown list of common Markdown formatting, and, most usefully, side-by-side editing with live preview. This is the first post I’ve written completely in ReText, and while it’s not perfect, it’s made it easier for me to just write.

[0] Most of the time it’s nano, if you must know. Yes, get it out of your system. I’ve been using nano née pico since my first days on “the Internet”, a 1992 dialup BSD VAX account which had the full UW suite. So yes, I’ve been using nano since before some of you were born.

Sure, Let's Encrypt!

This weekend I converted all of my web sites to SSL (technically TLS), using Let’s Encrypt certificates.

For a few years, I had been using StartCom’s free SSL cert for a non-essential web site on my main server. StartCom was effectively the only viable free certificate authority at the time, with good browser support. CAcert wasn’t too useful since almost nothing ships with its CA certificates, while Comodo has free SSL certs which expire after 90 days, essentially a free demo.

I had been planning on tackling global SSL since earlier this year. The biggest hurdle is adoption of Server Name Indication, or SNI. Before SNI, you were limited to one SSL certificate per IP address, much like the time in the 90s before name-based virtual hosts. And like name-based virtual hosts, even after the technology was developed, it was slow to be adopted since name-based vhosts and SNI both require the web client to have support for them.

I had set a self-imposed delay until April 2017, the 5-year EOL of Ubuntu 12.04 LTS (precise). While this was a good round number to stick to, the technical reason was that wget did not gain SNI support until just after precise was released. After that, the remaining incompatibilities would be at an acceptable level (basically just Windows XP).

This weekend I was playing around, and ordered a few new StartCom certs as a test for a few sites (finnie.org, vad.solutions), with the idea of setting up dual HTTP/HTTPS, and not switching over to HTTPS only until 2017. Things were working well, until I noticed when the iPhone browser tried to connect, it immediately dropped the connection. Likewise, when trying with Safari on a Mac, it came back with “This certificate has an invalid issuer”. Examining the cert path side-by-side with the old site (with a cert from February), both the old site and the new sites had identical issuer certificates, but Apple browsers were not trusting the new certs.

I spent hours trying to figure this out, thinking it must be an SNI-related server-side issue. Eventually I came across a post by Mozilla Security which explained the issue. Apparently StartCom Did a Bad (backdated cert issuances and lied about its organizational structure), and got the mark of death from the major browser vendors. However, the vendors did this by continuing to trust certs issued before a cutoff date a few months ago, while distrusting anything issued after it, which explains why my old site continued to work. Firefox and Chrome worked fine with the new certs, but only because their change has not yet hit released products, while Apple has been quicker on the matter. (This also explains why StartCom has gone from free 1-year certs to free 3-year certs: they’re essentially worthless.)

So I started looking into Let’s Encrypt. I had known about the project since it was launched last year, but didn’t give it much thought until now. Primarily, I thought the project wouldn’t be too useful as a new CA would take years to get into the major browsers and OSes to the extent it would be viable. Also, it had sounded like I needed to run an agent which took over configuration of Apache for me, something I did not want to do.

Turns out I was wrong on both fronts. Their intermediate signing cert is actually cross-signed by an established CA (IdenTrust), so even if browsers and OSes don’t have the Let’s Encrypt root CA, certs it signs will still be trusted. On the software side, you still do need to run an agent which handles the verification and ordering, but it doesn’t need to completely take over your web server. You can run it in “certonly” mode and point it at a web site’s root (it needs to be able to place a few challenge/response files there for verification), and it leaves the resulting certificate for you to deploy. The certs are only valid for 90 days, but the idea is you run the renew command idempotently from cron daily, and it’ll seamlessly renew the certificates.
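In practice it boils down to a couple of commands. A sketch, where the domain and webroot path are placeholders, not my actual setup:

```
# One-time issuance in certonly mode, pointing at the site's document root
# (certbot drops challenge files under .well-known/acme-challenge/ there):
certbot certonly --webroot -w /var/www/example -d example.com -d www.example.com

# Daily, from cron; only renews certs approaching their 90-day expiry:
certbot renew --quiet
```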

Let’s Encrypt has since become the world’s largest certificate provider, by raw number of certificates. And since their rallying cry is “convert all your sites over to SSL”, the implication is that SNI is here and we don’t need to worry too much about older clients. So hey, may as well get going now!

Most sites were trivial to convert. For example, 74d.com has nothing except “hey, maybe some day someone will pay me an insane amount of money for this domain”. Other sites required more planning. The biggest problem with starting to serve an existing site via HTTPS is that all images, CSS and JS also need to be served via HTTPS, even if the domain is configured to completely redirect from HTTP to HTTPS. finnie.org is a deceptively massive site, with decades worth of junk layered on it (the domain itself is actually coming up on its 20 year anniversary in April), so it took a lot of grepping for places where I had hard-coded HTTP content. I’m sure I’ve missed places, but most of it has been fixed.

finnix.org is another example of requiring a lot of thought. The main site itself, a mostly self-referential MediaWiki site, was easy to do. But it also has about a dozen subdomains which required individual examination. For example, archive.finnix.org is a static site which just serves files; trivial to redirect to HTTPS, right? Problem is, they are mostly fetched by apt, which 1) does not follow HTTP redirects, and 2) does not support HTTPS unless an additional transport package is installed. So if I switched that to HTTPS only, it would break a number of Finnix operations. In the end, I decided on serving both HTTP and HTTPS, and setting it up so if you go to http://archive.finnix.org/ it’ll redirect to https://archive.finnix.org/, but individual files can be retrieved via either HTTP or HTTPS.
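The Apache side of that arrangement is small. A sketch of the port-80 vhost, with the DocumentRoot path as a placeholder:

```
<VirtualHost *:80>
    ServerName archive.finnix.org
    DocumentRoot /srv/archive
    # Send only the root URL to HTTPS; individual files remain
    # fetchable over plain HTTP for apt's benefit.
    RedirectMatch permanent ^/$ https://archive.finnix.org/
</VirtualHost>
```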

In total, I’ve created 22 certificates, covering 41 hostnames. As Let’s Encrypt is now essentially the only free CA, I wish them well, and have even donated $100 to their non-profit. This is really putting all my eggs in one basket; once you go to HTTPS-only for a site, it’s very hard to go back.

Edit: It’s been pointed out that you can get around SNI issues by using multiple Subject Alternative Names (SANs) with Let’s Encrypt, as client support for SAN is much older than support for SNI. I had been using SANs for multiple similar hostnames on a certificate (e.g. www.finnie.org had a SAN for finnie.org), but thought certbot required them all to have a single document root. Turns out you can define a “webroot map” of multiple hostnames to multiple document roots, and there are no defined limits to the number of SANs you can use (though the accepted effective limit in the industry appears to be about 100).
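On certbot’s command line, the webroot map is built up by repeating -w and -d: each -w sets the document root for the -d domains that follow it. A sketch (the hostnames are mine, but the paths are placeholders):

```
certbot certonly --webroot \
    -w /var/www/finnie.org -d finnie.org -d www.finnie.org \
    -w /var/www/vad        -d vad.solutions
```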

The big downside of one cert with multiple SANs is you are now publicly advertising the group of sites you administer, but in this case I’m fine with that. I’ve changed things so 37 of my hostnames are now covered under one certificate.

Also, the part about Ubuntu precise’s wget not supporting SNI is no longer correct: SNI support was thankfully backported to precise’s wget in May 2016.