Author Archives: aaron

When Things Go Wrong

When a new relationship is starting out, things are perfect. Your prospective suitor is shiny and new, and you spend time getting to know each other's likes and dislikes, and how you each like to do things. Whether that exploration happens over long meals, coffee dates, movies, whatever — it's part of the joy of dating.

After a while, though, sometimes the relationship hits a rough patch. And the getting-to-know-you part of the relationship plays a huge role in whether or not it survives the rough patch. Did you learn the ways that you each like to communicate? Did you work on solving problems that were small before something bigger came along? Do you have a depth of history together that can help you weather the storms?

These dynamics are all felt in a strong business relationship as well, of course — although there are fewer movies and dates.

When we start talking with, and working with, a new client, we spend time understanding how they work, what is important to them, and how they communicate. We talk about how to keep the lines open, to address small things before they get big. And if small issues do come up, we try to get them resolved as quickly as possible.

But sometimes, bigger issues do appear. It is only through these preliminary steps of understanding, though, that we build a strong foundational relationship that carries us through these conflicts.

When we talk with other agencies and contracting firms, we find that they focus a lot on the many good things they can do (and that’s important). But they’re often more reluctant to discuss what they do when there’s a bump or hiccup. If you (or they) are in it for the long haul, though, it’s worth asking the follow-up question.

It’s just a little bit of insurance — you don’t ever want to need it, but when you do, it’s good information to have.

Data Structures Are Dead…Long Live Data Structures!

Eons ago, when I was first learning how to code, we spent a fair amount of time focusing on the data structures at the heart of development. At that time, both memory and fixed storage were expensive and scarce, so by learning how the base structures worked (and learning how to code them up from scratch), we could pick the most economical version and save an extra kilobyte or two of memory — a precious commodity!

These days, neither memory nor storage is terribly expensive or scarce (although in our networked environment, there are still reasons to conserve). So it may come as no surprise that there are many programmers who couldn’t write an implementation of a doubly-linked list or B-tree.
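For anyone who wants the refresher, the whole exercise fits in a page. Here is a minimal sketch in Python (illustrative only; the same pointer bookkeeping applies in any language):

```python
class Node:
    """A node in a doubly-linked list: a value plus links in both directions."""
    def __init__(self, value):
        self.value = value
        self.prev = None
        self.next = None


class DoublyLinkedList:
    def __init__(self):
        self.head = None
        self.tail = None

    def append(self, value):
        """Add a node at the tail in O(1) time."""
        node = Node(value)
        if self.tail is None:
            self.head = self.tail = node
        else:
            node.prev = self.tail
            self.tail.next = node
            self.tail = node
        return node

    def remove(self, node):
        """Unlink a node in O(1) time by rewiring its two neighbors."""
        if node.prev:
            node.prev.next = node.next
        else:
            self.head = node.next
        if node.next:
            node.next.prev = node.prev
        else:
            self.tail = node.prev

    def to_list(self):
        """Walk head-to-tail and collect values (handy for inspection)."""
        out, cur = [], self.head
        while cur:
            out.append(cur.value)
            cur = cur.next
        return out
```

The O(1) removal given a node reference is the whole reason this structure exists — and exactly the kind of property you only appreciate once you've wired up the pointers yourself.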

Yet that doesn’t stop them from being productive, even great, developers. Why? Because in today’s world of frameworks and higher-level languages, those lower-level structures are often abstracted away or handled behind the scenes. Take Cocoa, for instance, which has classes for arrays, sets, dictionaries, and more — all with built-in optimizations and methods for querying, modification, and more.

Sounds like unbridled progress to many, and I certainly agree with the ‘progress’ part. But unbridled? I’m not so sure. I continue to think there’s value in knowing how the fundamental data structures work — for at least two reasons:

First, in knowing the internals, you can make more informed decisions about which to use when. With Cocoa, for instance, Apple has released a whole set of tech docs about the best uses for each of the “collection” types. It’s helpful and well-written, but mostly superfluous if you have the underpinnings needed to internalize that kind of thing.
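A concrete example of that informed decision-making, sketched in Python (the principle carries over directly to Cocoa's array and set classes): knowing that an array answers "is X in here?" with a linear scan, while a hash-based set does a constant-time lookup, tells you which collection to reach for.

```python
import timeit

n = 100_000
as_list = list(range(n))
as_set = set(as_list)
needle = n - 1  # worst case for the linear scan: the last element

# Membership in a list is O(n): the runtime compares element by element.
list_time = timeit.timeit(lambda: needle in as_list, number=100)

# Membership in a set is O(1) on average: one hash, one bucket probe.
set_time = timeit.timeit(lambda: needle in as_set, number=100)

print(f"list: {list_time:.4f}s  set: {set_time:.4f}s")
```

Both answers are identical; only the cost differs — and no amount of framework documentation substitutes for knowing why.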

Second, there’s still a role for optimization. Hardware and networks continue to get faster and frameworks continue to get more efficient — but it is poor form to rely on those increases when you don’t have to. And to optimize, there’s really no substitute for knowing the basics of how things work.

So maybe we’re seeing the slow erosion of understanding about low-level data structures like stacks, queues, and heaps. And for some, maybe that’s OK. But I’m not ready to see them go just yet.
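As a parting refresher, all three of those structures are within arm's reach in most standard libraries; a small Python sketch:

```python
from collections import deque
import heapq

# Stack: last-in, first-out. A plain list's append/pop work at the end in O(1).
stack = []
stack.append("a")
stack.append("b")
assert stack.pop() == "b"

# Queue: first-in, first-out. deque pops from the left in O(1);
# list.pop(0) would shift every remaining element, costing O(n).
queue = deque()
queue.append("a")
queue.append("b")
assert queue.popleft() == "a"

# Heap: always yields the smallest element, O(log n) per push or pop.
heap = []
for x in (5, 1, 3):
    heapq.heappush(heap, x)
assert heapq.heappop(heap) == 1
```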

Should Lorem Ipsum Die?

Last month, Paul Souders posted a piece that was one-part futurism, one-part opinion, one-part rant, entitled: “Content-first design ain’t herding cats.” He raises a handful of design trends and then makes leaps to what he perceives to be the best responses.

Amidst some of the all-caps and bolded pieces is the underlying premise that content should always drive design, and thus, the content should be present before the design is done. Not necessarily a revolutionary sentiment, but certainly an admirable one — sort of the web design version of form following function.

In practice, though, it seems like it’s a goal but not often the reality. Usually, content is getting reworked (or generated) as the design is being done as well. So Souders’ contention that we should “[k]ill Lorem Ipsum for good” might ignore how projects tend to work. Or at least how they’ve tended to work in our experience.

That’s not to say design drives content — quite the opposite. But often, a design framework will be in place well before the content is “done” (put in quotes because when is content ever really done?). When the content is ready to go into production, there are often tweaks and alterations to the design. But to hold one up altogether for the other would extend project lifespans considerably.

Other responses are predicated on the idea that walled gardens and Instapaper will be the primary way we view content in the future — web scrapers that take someone else’s content and put it in their own display. I’m not so sure about that future, for both copyright and commercial reasons. Big brands aren’t going to want some third-party app effectively removing their specific brand pieces (trade dress), nor will they stand for it for long if it threatens to become ubiquitous.

Further, it’s difficult to see how the Instapapers of the world would handle more intrinsically dynamic content, where design necessarily is a bit removed from content because the content can vary.

Thinking hard about content before thinking hard about design is a good idea. A great idea, even. Holding up the design process until the dots and twiddles are done to accommodate a scraper-driven future? I’m not convinced.

The Paradox of Black Boxes

When I was a young programmer, I was developing an application for a client — it may have been for Windows 3.1 or Mac System 6, I don’t remember — and my code just wouldn’t work. I banged my head against it for quite a while, and just couldn’t see any errors.

So I made an intellectual leap. One of youth and hubris, in part. I concluded that there was a bug in the system’s API. Somehow, in the millions of lines of code that made up the (then-new) operating system, a bug had crept in that was causing my code to fail.

Incensed, I dashed off a bug report and posted indignantly to a programming community (probably on CompuServe or AppleLink). My unhappiness at the bug was trumped only by my self-righteousness at finding someone else’s.

But of course, you know the end of this story. My indignation and feeling of superiority evaporated quickly when the programming community noted no such error in THEIR code, which took me back to my own — where I found my problem. My happiness at solving the problem was eclipsed by my embarrassment.

This may be an anecdote borne of, as I mentioned, youth and hubris, but it also derives from another phenomenon: the black box. My code was relying on someone else’s code, so when mine didn’t work, it was intuitive to blame the mystery “other” code.

This happens all the time today — likely more than at any time in recent memory — because of the rise of modularized code. It is trivial today to take code libraries from multiple places, call a remote API or two, and package it up as a new application. In fact, in some cases, it increases one’s efficiency by several orders of magnitude.

This is the virtue of the black box. We can drop one in whenever we need specific functionality and we don’t have to do much to make it work.

The difficulty, however, is in provenance. Who knows how that code was written, or how well it works? You can often get a sense of the latter with some web searching, but if you are using a narrow-niche piece of code that few others use, or you are using a “black box” in a corner-case way, you may not be able to rely on the commons.

So what’s the answer? Surely it’s not to build everything yourself from scratch. If that were true, we’d all be releasing our own versions of iPhone operating systems to go with our applications, and it would be worse than the Android fragmentation! (Kidding, kidding. Mostly.) Rather, it’s to be judicious when using external libraries and APIs, to do so where you can degrade gracefully if something breaks, and to code defensively, handling any errors or anomalies that could involve the foreign code.
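Here is one sketch of that defensive posture in Python. Everything in it is hypothetical — `fetch_rates` stands in for whatever black-box library or remote API you depend on, and the fallback data is invented for illustration:

```python
def fetch_rates():
    """Stand-in for a black-box call: a third-party library or remote API.
    In a real application, this body belongs to someone else's code."""
    return {"USD": 1.0, "EUR": 0.9}


FALLBACK_RATES = {"USD": 1.0}  # known-good data we control ourselves


def get_rates(fetch=fetch_rates):
    """Call the black box, but validate its output and degrade gracefully
    rather than letting a foreign failure crash our application."""
    try:
        rates = fetch()
    except Exception:
        # The box blew up entirely; serve something rather than nothing.
        return FALLBACK_RATES
    # Defensive validation: never trust the shape of foreign data.
    if not isinstance(rates, dict) or not rates:
        return FALLBACK_RATES
    if any(not isinstance(v, (int, float)) or v <= 0 for v in rates.values()):
        return FALLBACK_RATES
    return rates
```

Passing the black-box call in as a parameter is itself part of the discipline: it keeps the foreign dependency at arm's length and makes the failure paths trivially testable.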

Because even though it’s unlikely, it’s possible that next time, it really WILL be the black box code that’s broken, and not my application. And that will make my younger self smile.