
Aspects of Love

Programming languages come and go at a frightening clip. Less often but still with some regularity, frameworks appear and disappear as well.

Entirely new approaches to coding, though, come along far less often. Functional programming has been around forever. Object-oriented programming has been around nearly as long. But it’s only been a little more than a decade since the idea of aspect-oriented programming, or AOP, arrived on the scene.

If you’re not familiar with AOP, some background might be helpful. A key idea behind AOP is that some of the things your code needs to do cut across (or “cross-cut”) the entire application. In other paradigms, you end up coding bits of those concerns all over the place. With AOP, you write each one only once.

One prototypical example is logging. If you’re writing an application that does anything significant, you probably write out to some kind of log when a significant event happens. In a user-based application, for instance, you probably want to log every time you add a new user or delete an existing one.

Under other approaches, you need to call some kind of logging hook in your addUser and deleteUser functions (or methods). But with AOP, you can define a separate logging function, and tell it to attach to the deleteUser and addUser functions. The parts of the code that actually do the work with users aren’t touched at all.
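
To make that concrete, here’s a minimal sketch of the idea in Python (a language with no built-in AOP, used here purely for illustration); the add_user and delete_user functions and the logging wrapper are hypothetical stand-ins, not anyone’s real API.

```python
import functools

def add_user(name):
    # The actual business logic -- deliberately knows nothing about logging.
    print(f"adding user {name}")

def delete_user(name):
    print(f"deleting user {name}")

def log_calls(func):
    """The logging 'aspect': wrap a function so every call gets logged."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"LOG: {func.__name__} called with {args}")
        return func(*args, **kwargs)
    return wrapper

# Attach the aspect by rebinding the names; the bodies of add_user and
# delete_user are never edited.
add_user = log_calls(add_user)
delete_user = log_calls(delete_user)

add_user("alice")    # prints the LOG line, then "adding user alice"
```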

Why is this helpful? Well, suppose you want to change how logging works or how you call the logging piece. You’d need to touch every place it’s called. With AOP, you just change the logging piece once, and you’re done.

Or suppose you decide well after the design stage that you want to add a layer of security to every user routine to make sure the person trying to make a change has permissions to do so. You could go find all of the functions and add calls to them, or you could just create a security-check routine and attach it to the relevant functions.
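
Continuing the hypothetical sketch above, a security check can be bolted on the same way, long after the fact, without editing the user functions (the permission store and names are again made up for illustration):

```python
import functools

ALLOWED = {"admin"}          # assumed permission store, purely illustrative

def require_permission(func):
    """The security 'aspect': refuse the call unless the acting user is allowed."""
    @functools.wraps(func)
    def wrapper(*args, acting_user=None, **kwargs):
        if acting_user not in ALLOWED:
            raise PermissionError(f"{acting_user!r} may not call {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

# Same trick as with logging: only the attachment is new code.
add_user = require_permission(add_user)
delete_user = require_permission(delete_user)

add_user("alice", acting_user="admin")   # passes the check, then logs as before
```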

Compiled languages like Java and C++ have had this idea for a while now, but it has been slow to come to interpreted languages. With the Flow framework, for instance, PHP now has AOP capabilities. There’s still some roughness around the edges, but it’s a big step toward widespread adoption of this approach.

Watch this space…

The Rise And Fall Of Programming Languages

Everybody out there who is fluent in conversational Latin, please raise your hands. (Peering out) Not many. How about Esperanto? Perhaps a few more, although you all seem to have shaggy beards and a perpetually quizzical expression. (Not that there’s anything wrong with that.)

Human languages come and go, even though they are so closely identified with a people. There are efforts to keep them going wherever possible, and records indicate that there may be as many as a thousand or more so-called “endangered languages.”

So it should come as no surprise with the pace of technology that there are endangered programming languages as well. Some, like B, were stopgaps until something else came along. Others, like COBOL, were historically important but really aren’t around much today (other than a lingering small group).

When does a programming language become pervasive enough to be worth getting interested in? And when has it withered enough that it’s no longer a sensible choice for a project? Both are tough (and squishy) questions.

In terms of the upswing, my bias is to get interested pretty early in gestation — not necessarily to use the language for a client project, but to get a sense about where language (and compiler) development is going. I’m not likely to use Ada or even Haskell when building something that others will need to maintain, but, as an example, looking at how Haskell handles lazy evaluation and “first-class” functions is fascinating, and broadens the knowledge of the team.

So perhaps the better questions are about when to use a language for a project that will be released into the wild — and when to stop doing so, as a language’s star is falling. The answers to both are really the same: when maintenance and long-term expertise are easy and relatively cheap to find.

We’d love to be in lifetime engagements with clients. And many of our clients are with us for many years. But we don’t assume that, and we don’t want to build something that will create hassles for the client later. So that means, no matter how much we love Forth, we’re probably not going to use it to build a web application. There just aren’t enough people out there to support it. (Plus, that’s not really a great use of the tool.)

But let’s take a tougher example: perl. Fifteen years ago, it was everywhere. If you didn’t know it, you weren’t considered serious about building for the web. Even as PHP has usurped some of that space, perl remains a widely used language (although more and more, it seems to be confined to the back end and the server side).

But man, I love perl. It has an ability to work with bits of data and patterns that is perhaps matched, but rarely surpassed. Contrary to some of its reputation, it can be elegant — but it doesn’t force it. (Why is there so much bad perl code? Bad perl coders.) And the CPAN archive of modules and third-party libraries is peerless.

What to do, then? Objectively, perl’s fortunes are falling. Has it passed the threshold where we’d no longer use it on a major project? Well, as of this writing, I’d say no — but it’s getting close. The thumb on the scale that balances the cost-benefit of using a language for a project is getting kinda heavy. We’re probably at the point where we’ll build on and maintain perl-based projects that already exist in that language, but we’re unlikely to start something from scratch in it.

Which is sad, but for every one of those, there’s an Objective-C or a C# that’s climbing up the charts. Goodbye Esperanto, hello Mandarin.

Where to Start?

I get asked every couple of months how to become a developer — someone excited about technology who wants to learn “how to code.” And I think that’s great. But they’re going about it all wrong.

There’s coding, and there’s programming, and there is a difference between them. One is a prerequisite to doing the other well.

If you are a coder, you can (probably with one or maybe two languages) attack problems and solve them. It may not always be elegant or efficient, but it works. You’re able to Get It Done.

But if you’re a good coder, you can (with whatever language is thrown at you, and probably choosing the one that is best suited to the task) attack problems and solve them as well. You can do it quickly, efficiently, and with as much simplicity as possible (without over-simplifying and missing the target). You can Get It Done Right.

So what’s the difference? The good coder is also a good programmer.

Learning how to program is mostly language-independent. It’s about how to think like the computer. How to spot common kinds of problems and solve them algorithmically. To use one of my favorite examples, when to use a quick sort or a shell sort. What kinds of data structures work better in different cases. And so much more.
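
As a contrived illustration of that kind of language-independent judgment (sketched in Python, though the point holds in any language): the same membership test is linear against a list but roughly constant-time against a set, and knowing that is a programming skill, not a language skill.

```python
import random
import timeit

values = list(range(100_000))
as_list = values          # membership test scans element by element: O(n)
as_set = set(values)      # membership test hashes the key: O(1) on average

needle = random.choice(values)

print(timeit.timeit(lambda: needle in as_list, number=100))
print(timeit.timeit(lambda: needle in as_set, number=100))
# The second number is dramatically smaller -- the data-structure choice,
# not the language, made the difference.
```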

Almost none of that depends on a single language. In fact, learning those things in a language you think you’ll be using on “real” projects is probably a BAD idea. Which is why many universities use languages like Ada or Scheme. By doing that, you get (at least) two benefits: you can abstract the language away and focus on the underlying programming; and when it comes time to do “real” work, you’ll be learning a new language, which helps cement the programming concepts.

It’s no coincidence that many self-taught developers are coders — but not all. The key is to search through their midst…and find a programmer.

Making Comments Count

Earlier this week, Graham Lee wrote an opinion piece about what separates a good code comment from a bad one. It’s thoughtful and well-written, with plenty of examples and some excerpts from the relevant literature.

I must admit that my philosophy on code commenting has changed quite a bit over the years, as both I and my projects have gotten more sophisticated. (Well, the projects have gotten more sophisticated; I think I’ve just gotten older.)

When I was just starting out, commenting code seemed like a boring waste of time. Why on Earth would I waste valuable time putting in comments when it was perfectly clear what the code was doing? Besides, I wrote the code, and I’d surely be the only one ever asked to maintain it, so there was little point.

Of course, this was also a time in my career when coding projects numbered maybe into the hundreds of lines. So, arguably, there may have been some merit in the notion that comments wouldn’t have added a lot of value.

Then came the time when I needed to ACTUALLY go back and revise some code I had written years earlier. Yeesh. If there was ever a way to feel like you had amnesia, this was it. I was looking at code I had written — I knew I had written it — but couldn’t recognize it for the life of me. I had gotten so much better at the language and at algorithmic design that I recognized the old code for what it was: a mess. And a virtually uncommented one, at that.

If losing a hard drive full of data is what makes most people fervent backer-uppers, getting baffled by my own historical code was what drove me to commenting. And in a multi-person team, it’s essential, since comments can also document required parameters and their meanings, and actions that need to be picked up by someone else. Heck, I even use comments to myself in the code to remind myself where I stopped work one day and where I need to pick up the next.
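
For what it’s worth, here’s a contrived Python sketch of the kinds of comments I mean (parameter documentation plus a note-to-self marker); the function and its names are invented purely for illustration.

```python
def apply_discount(order_total, customer_tier, coupon_code=None):
    """Return the discounted total for an order.

    order_total   -- pre-tax amount in dollars
    customer_tier -- one of "standard", "silver", "gold"
    coupon_code   -- optional promo code; None means no coupon was supplied
    """
    rates = {"standard": 0.0, "silver": 0.05, "gold": 0.10}
    discount = rates.get(customer_tier, 0.0)

    # NOTE TO SELF: stopped here; coupon validation still needs to happen
    # before the coupon discount is stacked on top of the tier discount.
    if coupon_code:
        discount += 0.05

    return order_total * (1 - discount)
```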

Beyond the act of commenting, though, there are some key differences between good ones and bad ones. But I’ll let Graham Lee pick up that ball. He’s done a good job of it in his article. Worth a read!