Overactive Vocabulary

When In Doubt, Ameliorate

The Third Decade

Or, I’m Stepping Back from Daily Operations at Spreedly

My professional life has run roughly in decades: a first decade of getting into the software industry, doing lots of consulting and contracting, and observing businesses being run. The second decade grew out of the first decade and was spent founding a product company and figuring out - alongside a lot of really smart people - how to grow Spreedly into something successful. The boundaries between these vocational periods are of course fuzzy, but over the course of 2018 I’ve unknowingly been wrapping up my second decade, and the end of 2018 has become a clear break from my time working on Spreedly on a day-to-day basis.

Here’s the thing: Spreedly’s in a fantastic spot to win. I feel really good about stepping away right now with the company profitable, sales growth compounding, a great leadership team in place, and a product that has one of the lowest tech debt loads of any production system I’ve ever worked on. And I’m not disappearing into a black hole: I’ll remain a shareholder, continue to advise the company via my position on the board, and be cheering the team on from the sidelines every step of the way.

Why am I leaving? As these things usually are, it’s complicated. Here’s the short version: there were differences of opinion on my role as Spreedly grows past 50 employees. And I mean “opinion” sincerely - I haven’t even been present at a product company as it went from 50 to 100 employees before, much less been an executive working on it at that stage, so my perspective shouldn’t weigh more. And since I wasn’t able to get on the same page about my role going forward, I didn’t want to get in Spreedly’s way as it continues to grow.

What’s next for me? I ask rather, “What better time for a mid-life crisis?” and I’m only half-joking: while I’m not in crisis, I have about a million ideas for what I might want to work on next, but 2019 is all about getting back to the basics of creation: writing consistently and programming consistently. These two skills have formed the bedrock of my career to date, and I’m excited (and terrified) to pull my creative focus away from Spreedly and put it into a series of smaller projects designed to hone these two fundamental skills while exploring a series of hypotheses about what my next major endeavor might be.

Alongside the fundamentals, I also expect to do a lot of networking in 2019, so do drop me an email if you want to grab coffee or even just video chat. This goes double if you’re a CEO, CTO, or VP of Engineering in a company in the 10-50 employee range trying to figure out how to wrangle engineers and the software they build - that’s where all my scars are from and I feel like I can speak with some confidence about what works and what doesn’t.

Weep not for me! I’m super excited to dig into the third decade of my career. And worry not for Spreedly: it’s set up to win, top to bottom, with an amazing team building an amazing product. Instead, grab some sunglasses: the future is bright everywhere you look. 😎

Warcraft, DOSBox, IPX, and Tomato Shibby, Oh My!

I’m a nerd. A total, unadulterated, no-holds-barred nerd. Let this post serve as proof for posterity…

This little vignette starts with Warcraft. See, I have kids. Quite a few kids. And they’re getting old enough that it’s fun to play games with them. Now, if I were a normal father, I’d buy them modern computers and we could play modern multiplayer video games like normal people. I am not a normal father; rather, if you’ll recall, I’m a nerd, which makes me a nerd father. Which means, when I think, “it’d be fun to play a multiplayer video game with the kids,” my mind harkens back to some of the most fun I had playing multiplayer as a kid: Warcraft.

More specifically: Warcraft II: Tides of Darkness.

OK, first step: can we even play it on a computer made in the last decade? Turns out we can - and quite easily - using DOSBox. DOSBox is a nerdy dad’s best friend when it comes time to introduce his kids to the classic experience of a LAN party. Since around here even our “old computers” run OS X, I’m a big fan of Boxer for making DOSBox a cinch to set up and manage.

So far things aren’t too nerdy, but we’re nowhere near done yet. The next step is networking, and this is where it gets interesting. Old games mostly use IPX, and IPX is pretty much a dead protocol on modern networks. DOSBox has a cool trick up its sleeve, though: it translates IPX to UDP, and it works without a hitch.
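Concretely, that setup looks like this. IPX support gets switched on in the DOSBox config (Boxer manages this per-gamebox), and the in-DOSBox IPXNET command handles hosting and joining; the address and port below are just examples:

```ini
# dosbox.conf
[ipx]
ipx=true
```

```
REM Inside DOSBox, on the hosting player's machine:
IPXNET STARTSERVER 213

REM And on everyone else's, using the host's IP address:
IPXNET CONNECT 192.168.1.5 213
```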

While hitchless, this IPX-over-UDP setup is kind of a pain; one player has to set up their DOSBox to be the server, and the other players have to know and enter the IP address of that player to connect. Kind of a pain in the tuckus to do every single time we want to play, so surely we can do better, right? Wouldn’t it be awesome if the network at the Talbott house had an always-on IPX server on tap? Why yes it would!

Since we’re not the first ones on the internet to wish for such a thing (is it possible to be the first at anything on the internet these days?), some searching around turns up ipxrelay, which is such a beautiful, dependency-less, single-purpose tool it makes me want to weep for joy. A quick:

git clone git://git.zytor.com/games/dosbox/ipxrelay.git
cd ipxrelay
make
./ipxrelay

And we have a working standalone IPX server. Now, I wouldn’t be comfortable running this on the big bad internet without a deeper understanding of its operation and security, but for internal LAN parties? Sign me up!

But that brings us to our next challenge: where do we run it? I don’t maintain an always-on server on our home network, nor do I really want to. But actually that’s not true, since I have three routers running Tomato Shibby, and they’re nothing but mini-servers that have lots of idle capacity that could be put to good use. So I decided to get ipxrelay running on my biggest/fastest Tomato router. No sweat, right?

No sweat, except that running a program written in C on a device like an Asus router means cross-compiling that program. My highest-powered router is a MIPS device, and Tomato itself has a very distinct and limited set of libraries available, all of which must be taken into account. But people do this all the time, so should be easy, right?

I’ll save you the multiple weekends and evenings of futzing, hair-pulling, puzzling dead ends, and searching in circles. For some reason, nowhere on the internet could I find: “To cross-compile a program for a Tomato Shibby target, do A, B, and C.” Until now, since that’s exactly what I’m going to do, in as clear terms as I can, for the sake of future searchers.

Tomato Shibby Cross-Compiling for Dummies

It’s pretty simple really:

  1. Be on a Linux box
  2. Checkout a copy of the Tomato Shibby code
  3. Use the included, pre-built compiler to build your application

Well, it’s simple now that I know what to do. Follow these steps to avoid great pain!

1. Be on a Linux box

So first I need a Linux box to do the building on, since I’m on OS X and I don’t need or want the pain of getting cross-compiling working in an environment different from what’s already used by the Tomato Shibby developers. In real life I built a VM from scratch, but with time for reflection and a blog post to write, I’ve remembered I have Vagrant set up and ready to go here already. Here’s a little ’cast of me setting up a Debian Vagrant box:
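(For the searchers: the screencast doesn’t reproduce in text, but it boils down to a two-line Vagrantfile plus vagrant up; the box name here is an assumption - any reasonably current Debian box will do.)

```ruby
# Vagrantfile - a minimal Debian guest to build in
Vagrant.configure("2") do |config|
  config.vm.box = "debian/jessie64"  # box name is illustrative
end
```

From there, vagrant up boots the VM and vagrant ssh drops you into the shell where the remaining steps happen.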

2. Checkout Tomato Shibby

Now that I have a Linux environment to build in, I need to clone the Tomato repo and checkout the tag my router is at:
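In shell form it’s roughly the following - the repository URL and branch name are reconstructions from memory, so substitute whatever matches your router’s firmware version:

```shell
# Shallow, single-branch clone to keep the download manageable;
# URL and branch are illustrative - match them to your firmware.
git clone --branch tomato-shibby --depth 1 git://repo.or.cz/tomato.git
cd tomato
```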

Using --branch and --depth reduces the size of the checkout somewhat.

3. Use the Cross-Compiler

One rule of step-wise recipes is that one step will always be more complicated than the rest. This is that step. We need to take the program we want to compile, tweak its Makefile to use our cross-compiler, and build the program:
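Reconstructed from the screencast, the shape of it is below. The toolchain path is illustrative - the directory name varies by Tomato version, so hunt around under tools/ in your checkout for the MIPS toolchain that matches your router:

```shell
# Point ipxrelay's Makefile at Tomato's bundled MIPS cross-compiler
TOOLCHAIN="$HOME/tomato/tools/brcm/hndtools-mipsel-uclibc/bin"

cd ~/ipxrelay
make CC="$TOOLCHAIN/mipsel-uclibc-gcc" LDFLAGS="-static"

# Sanity check: file should report a MIPS executable, not x86
file ipxrelay
```

Linking statically is a lazy but effective way to sidestep the router’s limited library set: the binary is bigger, but nothing can go missing at runtime.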

Wow, it looks easy now that I know exactly what to do. sigh

You can copy and paste from the screencast above - slick, right? - but just in case, the patch referenced is available here: https://gist.github.com/ntalbott/75a8aa269b2d310bf02f, and the ipxrelay repository is here: http://git.zytor.com/games/dosbox/ipxrelay.git/.

Running ipxrelay on the Router

Now that we have a cross-compiled ipxrelay, we need to get it saved somewhere safe on the router and ensure it’s running with the right options. I’m running mine from a USB flash drive I have plugged in, but I’ll leave the setup of that as an exercise for the reader. Once it’s somewhere safe - your options are /jffs (persistent flash, though it’s cleared when you flash new firmware), some kind of added storage like my USB drive, or a memory card mounted at /mmc if your router supports it - the next step is to get it fired up. Here’s the Firewall script I have in the Administration->Scripts area of Tomato to start it up:

/mnt/ROUTER/bin/ipxrelay --port 2130 --address 192.168.1.1 --pidfile /var/run/ipxrelay.pid

My IPX relay is going to run on port 2130, only bind to my internal IP, and put out a pidfile. The shutdown script then uses the pidfile to properly clean up:

if [[ -e /var/run/ipxrelay.pid ]]; then kill `cat /var/run/ipxrelay.pid`; rm /var/run/ipxrelay.pid; fi

And there you have it: an always-on IPX relay on your internal network! You might think we’re done, but we can go even deeper!

DOSBox Automation

What’s better than having IPX networking consistently available? Using that fact to make our DOSBox games auto-networked, of course! The final thing I did was to create a couple of batch files to auto-configure the IPX setup so that when we launch the game, it Just Works™. We need to put three files on our DOSBox C: drive to make this work. First, FIND.COM from FreeDOS, since DOSBox doesn’t come with a FIND built-in. Second, START.BAT, which tries to start IPX networking and fails out if it can’t:

@echo off

del ipxnets >nul

ipxnet disconnect >nul
ipxnet connect 192.168.1.1 2130 >nul
ipxnet status >ipxnets
find "Client status: CONNECTED" ipxnets >nul
if errorlevel 1 goto neterror

call game
goto end

:neterror
echo Error starting networking...
echo Is ipxrelay started on the router?
goto end

:end
del ipxnets >nul

And finally, the super-simple GAME.BAT, which just makes START.BAT generic:

launch war2

DOSBox can be configured to launch START.BAT when it starts, and voila! You’re all set to crush your offspring - or siblings, or whoever you can convince to have a LAN party with you - at Warcraft 2!

The conclusion of this little piece is the same as the beginning: I am a total nerd. And as aggravating as it sometimes is, it’s a lot of fun, too!

Warcraft 2 Screenshot

Racing

Racing is so different from anything else I’ve ever done. I guess you could call me a primitive racer - I enjoy that most ancient of races, the kind where you put one foot in front of the other as fast as you can - and I can’t deny it, there’s something that fascinates me about how my body’s able to not only run, but to run like it was made from the beginning to do so. There’s a cathartic joy in just walking out the front door and following wherever your feet take you, even if it is over a well-trodden course. When I started running, I imagined it as a habit to build for my own good, like eating my greens. Now it’s an activity that I return to whenever I can, like eating my greens, since it turns out greens are delicious both in reality and metaphorically.

But racing is such an odd thing. Running in and of itself isn’t odd; it’s just another activity that I undertake on a regular basis. I train, and I push myself, I try to get better, and I return to it as an enjoyable process for its own sake. I can run socially, carrying on a solid conversation while maintaining a pretty good clip, though I’m generally alone listening to an audio book. And I run at my own convenience: I pick the time, the place, the distance, the pace, etc.

When I line up to start a race, though - then it gets weird. For one thing, I’ve chosen to get up and run at a crazy hour on a Saturday morning, when I could’ve just as easily waited a few hours and run out my front door. Or it might be a Sunday afternoon when I’d normally be enjoying a quiet nap. Or maybe I’m at a programming conference, dragging myself out of bed after a late night of talking tech to wander through a strange city and try to find the starting line.

And it’s not like I’m actually going to run socially in a race - ha! When I race, I like to actually race, so I’m spending all of my breath and focus on pushing my body to the limits of what it can do, and I can’t spare an iota of it on the people around me. Racing is super self-centered: it’s you, the road, and an ever-ticking clock. I don’t even try to listen to anything while I race, since the necessary apparatus is too much of a distraction. And if I go to a race with a friend, I’ll wish them luck right before the gun and not see them again until one or the other of us is cheering the other across the finish line.

Then there’s the fact that in any race large enough to have an official course and chipped time tracking, there’s no way I’ll come in first. I go out and run a race knowing that I’ll make a good showing, but also knowing I’ll get trounced by some 23-year-old who was smart enough to take up running 5Ks before he turned 30. I’m not even really a risk to the other guys in my age bracket since, while I love running, I love coding even more, and so I won’t be spending those extra hours training that I’d need to in order to place even bronze for 30-35.

But here’s the thing: I love to race. Every time I get out there I think, “What in the world am I doing here?” and then the gun fires and I’m off and I’m loving it. Someone else has laid down a very specific set of constraints - run this route, start at this time - and now I get to push myself to do the absolute best that I can within those constraints. I’m not being social, and yet I have this whole host of humanity in so many (admittedly all very fit) varieties, and I get to run beside and behind and around them. And while I’ll never come in first, I get to race myself on the course, seeing if I can best my own best and show that I have continued to improve and grow since the last time I crossed the line.

Racing is so bizarre, and I love every minute of it.

Monitoring Hotness

I’ve spent the last couple of weeks hooking up a bunch of monitoring and analytics to our systems at Spreedly. It’s long overdue, but with the big launch of our new messaging we needed more data so we weren’t flying blind and could make decisions going forward based on hard numbers. The work has encompassed both business metrics (page views, signups, subscriptions, …) and devops metrics (response times, request counts, …).

I’ll admit, we’ve tried to do this a couple of times before, and every time I’ve been stymied by two things. First, the overwhelming job of choosing between a lot of well-regarded options while simultaneously trying to figure out what we should be tracking. Second, hooking up metrics is not a trivial task, and when there are other things I can be doing that at least appear to add more value to the bottom line, I tend to quickly get distracted and pulled back into “regular work”.

The big difference this time around was that I found a couple of tools that cut way down on the difficulty of getting started, and I’m super excited to tell you about them.

Biz Metrics - Segment.io

segment.io

Remember how I mentioned having lots of interesting choices, but being overwhelmed trying to pick? Using Segment goes a long way towards solving that problem. To put it in the terms of my day-to-day world, Segment is basically the “Spreedly of Business Analytics” - you hook up one time to Segment’s API, and then you can turn on dozens of different services in your site with just a quick configuration change in their dashboard.

This had two huge advantages for me: first, it simplified the API that I had to implement against - by surveying a whole pile of APIs and building a solid over-arching one, Segment has come up with something incredibly simple and easy to use. They also have a library for just about any language, making the implementation even easier.
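For a flavor of what “hooking up one time” looks like, here’s a sketch against Segment’s Ruby library - names are from memory, so double-check them against the current docs before copying:

```ruby
# Gemfile: gem "analytics-ruby"
require "segment/analytics"

analytics = Segment::Analytics.new(write_key: "YOUR_WRITE_KEY")

# One call; Segment fans it out to every service
# you've enabled in their dashboard.
analytics.track(
  user_id:    "user-1234",
  event:      "Signed Up",
  properties: { plan: "starter" }
)
```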

The second big advantage with Segment is that it makes it a cinch to trial multiple analytics services simultaneously and see which one(s) are the best fit. Tools like KISSmetrics and Mixpanel aren’t cheap, and yet they take a fair amount of effort to integrate from scratch. Using Segment in front means I can trial them in parallel without biting off two significant integration efforts.

segment.io

And speaking of KISSmetrics and Mixpanel: both tools are pretty awesome. Justin is like a kid in a candy store, taking the raw information and events filtering in and building out all kinds of interesting funnels, counters, and segments. And I can tell he’s deep in the numbers, since he keeps finding places where I’m counting things wrong!

Devops Metrics - Librato

Librato Metrics

The biggest stumbling block to me with devops metrics has always been how many moving pieces there are. You have to figure out what to collect, collect it somehow, get it into some kind of data store, then hook up something to consume that data and eventually expose it in some kind of useful format. It’s all a bit overwhelming when you’re just getting started, and at the local Devops Meetup I was explaining some of my frustration to a few sympathetic ears from Github and Heroku, which both have excellent internal devops metrics.

One of them suggested I get started with Librato, and boy am I glad they did. Leveraging Librato’s awesome librato-rails gem, I had some basic metrics showing up in a dashboard within a few hours of getting started. And since Librato is taking care of the infrastructure, I’m in a great place to add more metrics and incrementally increase our visibility into the health of our infrastructure.
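To give a sense of those first few hours, here’s the kind of instrumentation librato-rails makes easy - a sketch from memory, so verify the method names against the gem’s README:

```ruby
class PaymentsController < ApplicationController
  def create
    # Counter: bumped on every request
    Librato.increment "payments.attempted"

    # Timing: reports how long the block took
    Librato.timing "payments.gateway.time" do
      charge_card
    end

    # Gauge: records an instantaneous value
    Librato.measure "payments.queue.depth", pending_payments.size
  end
end
```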

Performance Dashboard

We’ve always had some visibility into our business metrics and our infrastructure, but like so many things when you’re in startup mode, it was one kludge after another. It’s so exciting to feel like we’re on a solid footing now, both in terms of having better metrics today and in being able to iteratively expand what we’re monitoring going forward.

Relix 2.0.0

Relix is a Ruby library that makes it easy to build and use various types of secondary indexes backed by Redis. We use it heavily at Spreedly to give us fast access to our Riak-backed models (we didn’t use Riak’s secondary indexing since it didn’t exist when we started building Spreedly Core). Relix’s README is full of details on its philosophy and usage, and I’ll be doing a post eventually about why and how we use it at Spreedly.

Relix 2.0.0 brings a major version bump, due to the fact that it now requires Redis 2.6. This allows us to leverage Lua scripting, which is a big win for some use cases, especially since Relix does all it can to pipeline Redis requests.

The new feature driving the usage of Lua scripting is the ability to index and retrieve the list of values being indexed by a multi index:

class User
  relix.multi :account_id, index_values: true
end

# Enables this call
User.lookup_values(:account_id)

So whereas normally you’d look up all the users with a given account_id, lookup_values allows you to look up all the account ids that are indexed. This is super handy for doing aggregate lookups by a multi index:
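If it helps to see the idea in isolation, here’s a plain-Ruby sketch - not Relix itself, and no Redis - of what a multi index with index_values is tracking:

```ruby
require "set"

# Toy stand-in for a Relix multi index: maps each indexed value to the
# set of record keys, and (the index_values part) separately tracks the
# distinct values so they can be enumerated later.
class ToyMultiIndex
  def initialize
    @by_value = Hash.new { |h, k| h[k] = Set.new }
    @values   = Set.new
  end

  def index(key, value)
    @by_value[value] << key
    @values << value
  end

  # All keys indexed under a value (an ordinary multi index lookup)
  def lookup(value)
    @by_value[value].to_a
  end

  # All distinct values ever indexed (what lookup_values exposes)
  def lookup_values
    @values.to_a
  end
end

index = ToyMultiIndex.new
index.index("user1", "acct-a")
index.index("user2", "acct-a")
index.index("user3", "acct-b")

index.lookup_values      # => ["acct-a", "acct-b"]
index.lookup("acct-a")   # => ["user1", "user2"]
```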

User.lookup_values(:account_id).each do |account_id|
  users_for_account = User.lookup{|q| q[:account_id].eq(account_id)}
  # Aggregate processing for the users in the account
end

2.0.0 also adds a deprecation mechanism and uses it to deprecate direct access to IndexSet#indexes in favor of IndexSet#[].

You can grab 2.0.0 hot off of Rubygems, report any issues you encounter on Github, and contact me via the details on my Github profile if there’s anything I can help with.

And let me know if you’re using Relix - I’d love to hear about anything and everything it’s being used for!