Tag Archives: change

Reading the Landscape

Sometimes, it’s useful to pause for a moment and scan the environment around us. Of course, we help our clients do that all the time, because it’s a key part of the design process. But it’s not so often that we do it for ourselves.

When our developers got started in this business, the landscape consisted of a handful of big companies providing services on mainframes and a few types of microcomputer. Then the desktop computer revolution arrived, and things exploded. Some big companies failed or broke up (hello, Ma Bell and Unisys), and many smaller players came into the space.

When the Internet revolution happened a decade or so later, the process repeated. Today, with the ubiquity of the technology and the near-universal need for companies and brands to leverage web, mobile, and social technologies, demand is high. And so is the supply of providers.

We recently took a more in-depth look at who (besides us) is providing quality services out there — and who, ahem, shall we say, is providing less-than-stellar ones. The results were not too surprising, but remarkable nonetheless.

While there is still a coterie of small providers and one-person shops, the largest segment of the community is the medium-sized agency (like Square Lines, really), combining anywhere from a few to a couple of dozen professionals. At the larger end, there are relatively few companies of great size.

Given the lessons of history, it will be interesting to see what happens to this group. If (or, more likely, when) we come across the next disruptive technology, it seems likely that some of the bigger players (and perhaps a fair number of the medium-sized ones) will be in trouble.

This is why keeping up with emerging technologies is such a big part of what we do, and why it is a big part of what other successful firms do as well. As the paths of the past tell us, those who don’t are destined to fail.

Getting Better (Or: We Used To Be Worse)

I’m a bit of a pack rat. Somewhere, I have encomia that go back decades. The same is true in the digital world.

So when I got a new computing device recently and needed to migrate a bunch of data, I had an opportunity to look back at projects we’ve done over the years, and that I had done for at least a decade before. What I found wasn’t all that surprising, but it was instructive.

Starting with my early work, it was, well, fair. I mean, it worked and all (in fact, one web application I wrote a decade ago is still in production use on a public website – think about how much on the web has changed in that time!). But it wasn’t coded all that well, and the emphasis was on getting it done quickly and tersely. (In other words, the code was hard to read and not well documented.)

Then I look at a project from five years later, and it’s all different: sure, sure, it works – that much is the same. But under the hood, the code isn’t just quick, it’s efficient. And there are actual comments throughout. The front-end design and the ‘look’ of the work is leagues better (by that time, I didn’t do it myself, and we had a team with expertise in that area). It’s a professional product.

Then I turn to a project we finished last year, and it’s more advanced yet. The amount of growth isn’t as dramatic, but it’s still there. Development patterns are used effectively, both programmatic and business logic are easily intuited from the code (and its documentation), and the separation of functions allows for easy manipulation of pieces and parts. It’s a great result.

This progression is hardly unique, I think – every person and every firm, if they’re stretching themselves and striving for self-improvement, has something similar. But we don’t talk about it much, because we worry that clients (or, more specifically, prospective clients) will read “we get better all the time” as “we aren’t really ready for prime time yet.”

But that’s pretty shortsighted thinking. Even the best sports pros are always working on their game. We can always improve – after all, that’s the whole point of professional development, right?

It was heartening to see that transformation over the last couple of decades. Who knows what the next project will bring?

The Rise And Fall Of Programming Languages

Everybody out there who is fluent in conversational Latin, please raise your hands. (Peering out) Not many. How about Esperanto? Perhaps a few more, although you all seem to have shaggy beards and a perpetually quizzical expression. (Not that there’s anything wrong with that.)

Human languages come and go, even though they are so closely identified with a people. There are efforts to keep them going wherever possible, and records indicate that there may be as many as a thousand or more so-called “endangered languages.”

So, given the pace of technology, it should come as no surprise that there are endangered programming languages as well. Some, like B, were stopgaps until something better came along. Others, like COBOL, were historically important but really aren’t around much today (outside a small lingering community).

When does a programming language become pervasive enough to be worth getting interested in? And when does its ubiquity wane enough that it’s no longer relevant for a project? Both are tough (and squishy) questions.

In terms of the upswing, my bias is to get interested pretty early in gestation — not necessarily to use the language for a client project, but to get a sense about where language (and compiler) development is going. I’m not likely to use Ada or even Haskell when building something that others will need to maintain, but, as an example, looking at how Haskell handles lazy evaluation and “first-class” functions is fascinating, and broadens the knowledge of the team.
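To make that a bit more concrete: the ideas that studying Haskell surfaces can be sketched even in a mainstream language. Here’s a hypothetical illustration in Python (not Haskell itself), where a generator stands in for lazy evaluation and functions are passed around as ordinary values:

```python
from itertools import count, islice

def take(n, stream):
    """Functions are first-class values here: 'take' accepts any iterable
    (including one built elsewhere) and can itself be passed around."""
    return list(islice(stream, n))

# Lazy evaluation: count(1) describes an infinite stream of integers,
# but no square is computed until a value is actually demanded.
lazy_squares = (x * x for x in count(1))

print(take(5, lazy_squares))  # [1, 4, 9, 16, 25]
```

Nothing in `lazy_squares` runs until `take` pulls values out of it, which is the same demand-driven flavor that makes Haskell’s infinite structures workable.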

So perhaps the better questions are about when to use a language for a project that will be released into the wild — and when to stop doing so, as a language’s star is falling? The answer to both is really the same: when long-term maintenance expertise is (or is no longer) easy and relatively cheap to find.

We’d love to be in lifetime engagements with clients. And many of our clients are with us for many years. But we don’t assume that, and we don’t want to build something that will create hassles for the client later. So that means, no matter how much we love Forth, we’re probably not going to use it to build a web application. There just aren’t enough people out there to support it. (Plus, that’s not really a great use of the tool.)

But let’s take a tougher example: Perl. Fifteen years ago, it was everywhere. If you didn’t know it, you weren’t considered serious about building for the web. PHP has usurped some of that space, but Perl remains a widely-used language (although more and more, it seems to be confined to the back end and server side).

But man, I love Perl. It has an ability to work with bits of data and patterns that is perhaps matched, but rarely surpassed. Contrary to some of its reputation, it can be elegant — but it doesn’t force it. (Why is there so much bad Perl code? Bad Perl coders.) And the CPAN archive of modules and third-party libraries is peerless.

What to do, then? Objectively, Perl’s fortunes are falling. Has it passed the threshold of use on a major project? Well, as of this writing, I’d say no — but it’s getting close. The thumb on the cost-benefit scale is getting kinda heavy. We’re probably in the space where we will build on and maintain projects that are already written in Perl, but are unlikely to start something from scratch in it.

Which is sad, but for every one of those, there’s an Objective-C or a C# that’s climbing up the charts. Goodbye Esperanto, hello Mandarin.

Data Structures Are Dead…Long Live Data Structures!

Eons ago, when I was first learning how to code, we spent a fair amount of time focusing on the data structures at the heart of development. At that time, both memory and fixed storage were expensive and scarce, so by learning how the base structures worked (and how to code them up from scratch), we could pick the most economical version and save an extra kilobyte or two of memory — a precious commodity!

These days, neither memory nor storage is terribly expensive or scarce (although in our networked environment, there are still reasons to conserve). So it may not be much of a surprise that there are many programmers who couldn’t write an implementation of a doubly-linked list or B-tree.
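For the record, the exercise isn’t that long. Here’s a minimal doubly-linked list sketch in Python (illustrative only; the names are my own):

```python
class Node:
    """A single node holding a value and links in both directions."""
    def __init__(self, value):
        self.value = value
        self.prev = None
        self.next = None

class DoublyLinkedList:
    """Minimal doubly-linked list with O(1) append and O(1) removal."""
    def __init__(self):
        self.head = None
        self.tail = None

    def append(self, value):
        node = Node(value)
        if self.tail is None:       # empty list: node is both head and tail
            self.head = self.tail = node
        else:                       # splice onto the end via the tail pointer
            node.prev = self.tail
            self.tail.next = node
            self.tail = node
        return node

    def remove(self, node):
        # Re-link the neighbors around the removed node -- no traversal needed.
        if node.prev:
            node.prev.next = node.next
        else:
            self.head = node.next
        if node.next:
            node.next.prev = node.prev
        else:
            self.tail = node.prev

    def __iter__(self):
        cur = self.head
        while cur:
            yield cur.value
            cur = cur.next
```

Appending is constant-time because we keep a tail pointer, and removal is constant-time given a node handle, because the neighbors are re-linked directly — exactly the kind of property you only appreciate once you’ve built one.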

Yet that doesn’t stop them from being productive, even great, developers. Why? Because in today’s framework-driven, higher-level-language world, those lower-level structures are often abstracted away or handled behind the scenes. Take Cocoa, for instance, which has classes for arrays, sets, dictionaries, and more — all with optimizations and built-in methods for querying, modification, and more.

Sounds like unbridled progress to many, and I certainly agree with the ‘progress’ part. But unbridled? I’m not so sure. I continue to think there’s value in knowing how the fundamental data structures work — for at least two reasons:

First, in knowing the internals, you can make more informed decisions about which to use when. With Cocoa, for instance, Apple has released a whole set of tech docs about the best uses for each of the “collection” types. It’s helpful and well-written, but mostly superfluous if you have the underpinnings needed to internalize that kind of thing.

Second, there’s still a role for optimization. Hardware and networks continue to get faster and frameworks continue to get more efficient — but it is poor form to rely on those increases when you don’t have to. And to optimize, there’s really no substitute for knowing the basics of how things work.
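A small, hypothetical example of that payoff, sketched in Python: knowing that a hash-based set answers membership queries in roughly constant time, while a list must scan element by element, is exactly the kind of internals knowledge that turns into an easy optimization.

```python
import timeit

items = list(range(100_000))
as_list = items
as_set = set(items)        # same data, but a hash-based structure

# Worst case for the list: the sought value is at the very end, so each
# membership test scans all 100,000 elements. The set just hashes once.
needle = 99_999
list_time = timeit.timeit(lambda: needle in as_list, number=100)
set_time = timeit.timeit(lambda: needle in as_set, number=100)

print(f"list scan: {list_time:.4f}s, set lookup: {set_time:.4f}s")
```

Both containers give the same answer; the difference is purely in how the underlying structure organizes the data — which is invisible unless you know to look for it.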

So maybe we’re seeing the slow erosion of understanding about low-level data structures like stacks, queues, and heaps. And for some, maybe that’s OK. But I’m not ready to see them go just yet.