Flatten Your Conditionals!

Deep nesting is a pet peeve of mine. I’m going to show you what deeply nested code is and discuss some strategies for keeping things tidy. It’s my opinion that deep nesting is a sign of sloppy code.

You know, code like this (with my condolences to the author):

    if (productId != nil) {

        NSLog(@"EBPurchase requestProduct: %@", productId);

        if ([SKPaymentQueue canMakePayments]) {
            // Yes, In-App Purchase is enabled on this device.
            // Proceed to fetch available In-App Purchase items.

            // Initiate a product request of the Product ID.
            SKProductsRequest *prodRequest = [[SKProductsRequest alloc] initWithProductIdentifiers:[NSSet setWithObject:productId]];
            prodRequest.delegate = self;
            [prodRequest start];
            [prodRequest release];

            return YES;

        } else {
            // Notify user that In-App Purchase is Disabled.

            NSLog(@"EBPurchase requestProduct: IAP Disabled");

            return NO;
        }

    } else {

        NSLog(@"EBPurchase requestProduct: productId = NIL");

        return NO;
    }

This code is hard to understand. It’s hard to understand because error handling is distant from the error checks (for instance, the check for nil is at the beginning but the error and return are at the end!). It’s hard to understand because the important parts are deeply indented, giving you less headroom. If you want to add additional checks, it’s hard to know where to add them – and you have to touch lots of unrelated lines to change indent level. And there are many exit points scattered throughout. GROSS.

Whenever I see code like this I cringe. When I get the chance, I like to untangle it (or even catch it in code review). It’s soothing, simple work. To be sure, the functionality of the code is fine – it’s purely how it is written that annoys me.

There’s a key thing to be aware of in the structure of this code – it has a bunch of early outs related to error handling. This is a common pattern so it’s worth walking through the cleanup process. Let’s pull the first block out:

    if(productId == nil)
    {
        NSLog(@"EBPurchase requestProduct: productId = NIL");
        return NO;
    }

    NSLog(@"EBPurchase requestProduct: %@", productId);

    if ([SKPaymentQueue canMakePayments] == YES)
    {
        // Initiate a product request of the Product ID.
        SKProductsRequest *prodRequest = [[SKProductsRequest alloc] initWithProductIdentifiers:[NSSet setWithObject:productId]];
        prodRequest.delegate = self;
        [prodRequest start];
        [prodRequest release];

        return YES;
    }
    else
    {
        // Notify user that In-App Purchase is Disabled.
        NSLog(@"EBPurchase requestProduct: IAP Disabled");
        return NO;
    }

    // Never get here.
    return NO;

It’s a LOT better, but now we have a return that can never be run. Some error handling code is still far from the error detecting code. So still a little messy. Let’s do the same cleanup again on the second block:

    if(productId == nil)
    {
        NSLog(@"EBPurchase requestProduct: productId = NIL");
        return NO;
    }

    NSLog(@"EBPurchase requestProduct: %@", productId);

    if ([SKPaymentQueue canMakePayments] == NO)
    {
        // Notify user that In-App Purchase is Disabled.
        NSLog(@"EBPurchase requestProduct: IAP Disabled");
        return NO;
    }

    // Initiate a product request of the Product ID.
    SKProductsRequest *prodRequest = [[SKProductsRequest alloc] initWithProductIdentifiers:[NSSet setWithObject:productId]];
    prodRequest.delegate = self;
    [prodRequest start];
    [prodRequest release];

    return YES;

See how much cleaner that is? Beyond saving indents, it also exposes the structure of the algorithm a great deal more clearly – check it out:

  1. Check for nil productId; bail if absent.
  2. Log productId if it is present.
  3. Check if we can make payments/IAP is active; bail if not.
  4. Submit the product info request.
  5. Return success!

The code and its “flowchart” now match up nicely, and if you modify one, it’s easy to identify the change in the other. This might seem like a little thing, but I find it shows that the purpose and structure of the function are well set up. And if you can’t write the function without violating this flat structure, it’s often a very solid clue that you need to introduce more abstraction – tactics such as breaking things up into helper methods, reorganizing your data structures a little bit, centralizing lookups/checks, and so on.
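
This guard-clause shape isn’t specific to Objective-C. Here’s a minimal sketch of the same pattern in plain C – the function and variable names are hypothetical, invented purely for illustration:

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for illustration; the shape is what matters. */
static bool payments_enabled = true;

bool request_product(const char *product_id)
{
    /* Guard: bail immediately on bad input. */
    if (product_id == NULL) {
        fprintf(stderr, "requestProduct: productId = NIL\n");
        return false;
    }

    printf("requestProduct: %s\n", product_id);

    /* Guard: bail if purchases are disabled. */
    if (!payments_enabled) {
        fprintf(stderr, "requestProduct: IAP Disabled\n");
        return false;
    }

    /* Happy path runs at the top indent level. */
    /* ... kick off the actual product request here ... */
    return true;
}
```

Each check sits right next to its error handling, and the success path reads straight down the left margin.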

Something to keep in mind next time you find yourself hitting tab more than a couple times – flatten your conditionals!

Ludum Dare 26 & Loom

Are you a fan of Ludum Dare? I’ve loved watching it for a long time. The huge community of excited developers is fantastic to watch, and some great games come out every time. More than that, LD is a great opportunity. In fact, such a good opportunity that we’re giving LD participants a huge deal on Loom (but more on that later).

The incredible opportunity in an event like LD is that it gets you to finish something. It’s so common for projects to run on and on and on and on… Professionally, you could work in AAA games for a decade and only ship a few games. Imagine being a professional painter and only making 10 paintings in your whole career.

There are big lessons you only learn when you finish. Like – was the feature you spent 80% of your time working on what made the game fun, or was it the feature you added at the last minute on a lark that made the whole game work? Is your gameplay immediately understandable? How much is your fun driven by content vs. gameplay? What dumb things kept people from enjoying your game (like missing DLLs, unclear instructions, installer issues, and so on)? What REALLY goes into the last 20% it takes to ship?

You also get the big endorphin rush of release. It feels GOOD to ship. Even if you decide the project was a failure, completing it is good. You can put it on the shelf and refer to it later. And it’s motivating to know you’ve gotten something DONE and don’t have to think about it any longer.

It’s easy to get stuck in the doldrums of project creation. You end up going around and around creating new things on new tech. It’s shiny and in some ways fun, but you never experience the growth and maturation that comes from shipping and sharing your creation with the world. Shipping – even something small – gets you out of that rut.

Take some time and participate in Ludum Dare 26. Creating and finishing a small game project is one of the best investments you can make in yourself – not just as a game developer but as a professional. It’s easy to overlook how valuable this can be.

And of course – Loom is a great fit for making small games fast. Through LD26, use the code GO_LD26 to get 50% off all Loom subscriptions. Get Loom and go make something cool!

Loom is Launched!

Howdy!

You may have wondered what I’ve been up to since PushButton Labs and PushButton Engine. Nate Beck, Josh Engebretson, and I are proud to share our latest creation, the Loom Game Engine, with the world. It’s a native mobile game engine with live reloading of code and assets, a great command line workflow, and a powerful AS3-like scripting language.

Check out this sweet video demoing Loom:

We’re giving away Loom Indie Licenses (normally $500) for FREE until Mar 29, the last day of GDC. We’ve already given away almost $2,000,000 in licenses. Get yours now!

TCP is the worst abstraction.

You are hopefully familiar with Leaky Abstractions as described by Joel Spolsky. The idea is that when you add layers to hide messy details, you can mostly avoid having to know what exactly is going on – until something breaks. Think of it as putting a smooth plastic coating on your car. Everything is really simple and zero-maintenance until your engine breaks and now you’re peeling plastic back trying to figure out which part is on fire…

TCP makes some big promises. “Your data will magically arrive in order and on time!” “Don’t worry about it, I’ll retry for you.” “Sure – I can send any amount of data!” “Hahah, packet sizes? I’m sure we don’t have to worry about those.”

Let’s talk about springing leaks. Just like when your upstairs neighbor’s toilet springs a leak and you suddenly have to deal with the concrete realities of a high-flow water source above your bedroom ceiling, a leaky abstraction means you can’t rely on the abstraction anymore. You now have to work with the underlying system, often at one remove (or more!), because you’re working through the very abstraction you chose to shield you from it in the first place!

TCP is leaky as a sieve. TCP says “I’ll just act like a stream and send bytes to someone on the internet!” But here are just a few areas where TCP breaks:

  • If you send too much data at once (the OS buffer fills and the write fails; you then have to resend).
  • If you send too little data at a time (the OS will sometimes fix this for you, see Nagle’s Algorithm, which can be good or bad depending on when that data needs to go over the wire).
  • If you try to read too much data at once (again, the OS receive buffer has limited size – so you have to be able to read your data in chunks that fit inside that limit).
  • If you transfer data at the wrong rate (the TCP flow control rules can be a problem).
  • If you try to read too little data at a time (then OS call overhead dominates your transfer speeds).
  • If you want to assume data has arrived (it may not have; you have to peek and see how much data there is and only read if there is enough, which necessitates careful design of your protocol to accommodate this).
  • If you want to initialize a connection in a deterministic fashion. (You have to do a bunch of careful checks of domain/IP/etc. to make sure it will even go through and once the connection is initialized you have to figure out if it’s alive or not. It can also take quite a while to establish a connection and get data flowing, see efforts like SPDY)
  • If you are on a lossy network (it will incur arbitrary overhead resending lost data).
  • If you want to manage latency (you have to take care to send data in correct packet boundaries).
  • If you want to connect through a firewall (good luck with that one).
  • If you want to use nonblocking IO. (You have to do a bunch of platform specific crud and even then not all actions are nonblocking; you have to live in a separate thread and block there.)
  • If you want to run a popular service. (There are a lot of ways the OS can be tricked by outside parties into mismanaging its resources leading to starvation/denial of service attacks.)
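
Several of these leaks – message boundaries, partial reads, peek-before-read – converge on the same chore: reassembling your protocol’s messages out of whatever byte chunks TCP hands you. Here’s a minimal sketch in C of length-prefixed framing; the struct and function names are made up for illustration:

```c
#include <stdint.h>
#include <string.h>

/* A minimal sketch of length-prefixed framing over a TCP stream.
 * TCP delivers arbitrary chunks, so we buffer bytes until a complete
 * [2-byte big-endian length][payload] message has arrived.
 * (A real implementation would bounds-check against sizeof(data).) */
typedef struct {
    uint8_t data[4096];
    size_t  used;
} FrameBuffer;

/* Append whatever recv() gave us, however little or much. */
void frame_push(FrameBuffer *fb, const uint8_t *bytes, size_t len)
{
    memcpy(fb->data + fb->used, bytes, len);
    fb->used += len;
}

/* Try to pop one complete message into out (assumed big enough);
 * returns the payload length, or -1 if not enough bytes have arrived. */
int frame_pop(FrameBuffer *fb, uint8_t *out)
{
    if (fb->used < 2)
        return -1; /* length prefix not here yet */

    size_t msgLen = (size_t)((fb->data[0] << 8) | fb->data[1]);
    if (fb->used < 2 + msgLen)
        return -1; /* payload still in flight */

    memcpy(out, fb->data + 2, msgLen);
    memmove(fb->data, fb->data + 2 + msgLen, fb->used - 2 - msgLen);
    fb->used -= 2 + msgLen;
    return (int)msgLen;
}
```

Note that your application code ends up re-implementing packet boundaries on top of a protocol that erased them for you.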

IMHO, TCP is an abstraction in name only. If you want to get any kind of decent results from it, you have to fully understand the entire stack. So now not only do you have to know everything about TCP, you have to know everything (or at least most of it) about IP, about how the OS runs its networking stack, about what tricks routers and the internet will play on you, about how your protocol’s data is set up, and so on.

I came to networking in a roundabout way. I did a couple of small TCP projects in my teens, but I spent most of my formative programming years (18-23 or so) working with Torque, which uses the User Datagram Protocol (UDP). Here’s what UDP code looks like:

    // Send a packet.
    sendto(mysocket, data, dataLen, 0, (struct sockaddr *)&destAddress, sizeof(destAddress));

    // Receive a packet.
    socklen_t fromLen = sizeof(fromAddress);
    recvfrom(mysocket, data, dataLen, 0, (struct sockaddr *)&fromAddress, &fromLen);

It’s very very simple and it maps almost directly to what the Internet actually gives you, which is the ability to send and receive routed packets from peers. These packets aren’t guaranteed to arrive in order nor are they guaranteed to arrive at all. In general they won’t be corrupted but it would behoove you to check that, too.

This is primitive, like banging two rocks together! Why do this to yourself? Well – it depends. If you just need to create some basic networking behavior and don’t care if it’s subpar, TCP works well enough, and if you have to, you can get it to sing for certain situations. And sometimes TCP is required because of firewalls or other technical issues. But if you want to build something that is native for the network, and really works well, go with UDP. UDP is a flat abstraction. You have to take responsibility for the network’s behavior and handle packet loss and misdelivery. But by doing so you can skip leaky abstractions and take full advantage of what the network can do for you.
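
Taking that responsibility mostly means a little bookkeeping per packet. As one small example, here’s a hypothetical C sketch of tracking 16-bit sequence numbers on top of UDP, so duplicate and stale packets get dropped, with wraparound handled via modular arithmetic:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative names only. Track the newest 16-bit sequence number
 * seen so far and reject anything older or duplicated. */
typedef struct {
    uint16_t lastSeq;
    bool     gotAny;
} SeqTracker;

/* Returns true if this packet is newer than anything seen so far. */
bool seq_accept(SeqTracker *t, uint16_t seq)
{
    if (!t->gotAny) {
        t->gotAny  = true;
        t->lastSeq = seq;
        return true;
    }
    /* Signed distance in mod-65536 arithmetic: positive means newer,
     * which makes wraparound (65535 -> 0) just work. */
    int16_t delta = (int16_t)(uint16_t)(seq - t->lastSeq);
    if (delta <= 0)
        return false; /* duplicate or out-of-date packet */
    t->lastSeq = seq;
    return true;
}
```

Reliable delivery, when you need it, is the same idea plus acknowledgements and resends – but only for the data that actually needs it.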

Sometimes it’s better to solve hard problems up front rather than ignoring them and hoping they go away.

Some Thoughts on Build Servers

Continuing from last week’s thoughts about build systems, let’s talk about build servers.

Say you’ve gotten a build system, like CMake, up and running in your project. Now the developers on your project are doing consistent builds across all your different platforms, and people are hopefully not missing important build steps anymore. However, there can still be differences between systems, and it’s hard for any one developer to try building on all platforms. Additionally, they could have leftover crud from old builds that throws things off.

The next step in sanity for your project workflow is to set up a build server. This is a box (or boxes) that sits there and pulls down clean copies of your codebase and does full builds, from scratch, all the time. It is kept in a pristine condition so that you don’t e.g. introduce a new dependency in your production binaries by updating Visual Studio. To do these builds, it runs some continuous integration package that lets developers trigger builds, check up on their status, and view build logs to find out why things broke. CI packages can also do fancier tricks like run unit tests, upload builds to QA or even the public, tag releases, and so on. (We’ll get into advantageous use of these in a later post.)

There are a lot of great build server options out there, and I’ve worked with or evaluated many of them. Let’s walk through them in order:

Tinderbox. Mozilla’s Tinderbox was one of the first public continuous integration packages. Much like Bugzilla, it was the first place many people looked – and most people moved on to look at something better suited to their needs right away. This is not necessarily a knock on Bugzilla or Tinderbox, as they and their derivatives have continued to serve Mozilla just fine over the years.

PMEase QuickBuild. QuickBuild was my first experience running a build server. We used QuickBuild 1.0 for C++ Torque builds on multiple platforms, which was weird and new – at the time, most build server packages – including this one – were very Java oriented. Luckily, we could call out to MSVC and XCode from Ant! PMEase has kept with it, and now they’re at version 4.0. I found them to have very responsive support, and QB itself was nice to configure and work with. It’s more expensive than some of the other options ($3k/site), but if you have the budget it’s worth a look.

Bamboo. Coming from Atlassian, I had high hopes for Bamboo, as JIRA is a powerful and reliable tool. However, when I last evaluated it (in late 2010 or so), I found its paradigm hard to understand. I just couldn’t figure out how I was supposed to use it – it had a lot of proprietary terminology that confused me. Additionally, it did not have good support for building all topic branches, which was a big part of the workflow I wanted my team to use. Looking it over again, I’m not sure it has improved on either front. However, I hold Atlassian in fairly high regard, so I am hopeful that someone else has figured it out and can enlighten me. 🙂

Jenkins/Hudson. After a sale to Oracle, Jenkins was forked from Hudson to continue open source development. It runs on a wide variety of platforms, and it’s easy to set up. It has a large set of plugins of varying levels of maturity, and it’s not that hard to write your own. However, key plugins (for instance, the Git plugin) can have frustrating gaps and holes, and because it’s community driven, bugs can linger for a long time. The REST API is also weak in places, making it hard to extend with custom tools/scripts. My experience is that Jenkins is a solid choice for simpler projects, but if you want to push your build server it can fall apart on you. We used it for several smaller projects, where it worked great, then we took it into a project with a large, 50+ member team of artists and developers. In that scenario, we ended up having to extend it heavily with custom scripts, mostly to add functionality it should have had to begin with.

JetBrains TeamCity. TeamCity is free for lighter use, although heavier usage requires purchasing licenses from JetBrains. TeamCity has very solid Git/JIRA integration, and a well thought out UI. Setup isn’t hard and it has good support for adding distributed build agents. We’ve been very happy with it for our C++ and Ruby projects. It has good support for building topic branches, too.

In the end, what is crucial for continuous integration software? It should be reliable, and especially robust in the face of broken builds. It should be easy for the team to understand and use, especially when they are debugging build issues. It should be able to scale to build across all your platforms, quickly – it shouldn’t take more than 10 minutes or so for a full build across all platforms to complete.

Some Thoughts on Build Systems

Note: You might also want to read Some Thoughts on Build Servers, which discusses software packages for running automated builds on a shared server.

The hardest part of software development is often the road from code in a repo to an artifact in the user’s hands.

There are a million ways you can ruin yourself along that long and winding path. You miss a DLL or dependency. You mark a flag wrong and it won’t run on a non-developer system. A setting gets changed on the system doing the compile and builds mysteriously fail. On multiple platforms (and who isn’t on at least a couple?), you forget to test on all 5 of your platforms and find out the build is broken – obviously or subtly – on one of them.

One building block that helps cut down on this pain is a build tool – a tool to manage what files are built in what order and with what settings. If you can fire off your build from a single command, it dramatically reduces the risk of breakage due to outside factors – and helps a lot with setting up build boxes. At this point I’ve worked with nearly every option: make, msbuild, xcodebuild, rake, Maven, Ant, CMake, premake, qmake, and even a couple of home brew systems. Here are my thoughts on each of them:

GNU Make. The granddaddy of all build tools. Cryptic syntax, most widely used on POSIX-compatible environments like Mac or Linux. It can do anything you want it to, if you’re willing to dive deep enough into it. Provides very little hand holding. Tools like automake and autoconf expand capabilities quite a bit, but they are anything but intuitive, and if your goal isn’t a UNIX command line tool, they may be frustrating to work with. Makefiles are generally shippable if you are willing to put enough smarts in them (since they are fundamentally built on top of the shell). Make files are easy to generate, and many tools exist to programmatically do so (more on those later).

MSBuild. The successor to nmake (with a brief detour to devenv.exe), it owes a lot of its legacy to make. However, it’s fully integrated with Visual Studio, so if you have a Visual Studio project, it’s easy to drive. In general, vcprojs are pretty easy to programmatically generate, and also easy to ship to other Windows developers, which is a big bonus. No viability for sharing cross platform, except possibly in the context of Mono development.

XCodeBuild. The command line tool for compiling XCode projects. It works just like XCode does, minus the goofy UI. Great for doing OSX/iOS builds, nothing doing for any other platforms. XCode project files are relatively easy to ship to people, although there can sometimes be little subtleties that screw you up. One nice thing about XCode’s build model is that it’s fairly easy to call your own scripts at various points in the build process. The downside is that xcodeproj’s are finicky and hard to generate.

Rake. Ruby is pretty sweet, and Rake builds on it in the Ruby way – that is, with a domain specific language tailored to the task at hand. The downside is that the docs are inscrutable – you pretty much need to be prepared to look at a lot of examples and dive the code to understand it well. But it responds well to hacking and generally gets the job done. Since Rake just sequences commands it works great for non-Ruby projects – it’s basically a much better Make.

Maven. I have seen Maven used very well in real Java projects, and abused heavily in non-Java scenarios. If you grok the Maven way and are prepared to conform to its view of the world, you can get good results. But in general I think it is much more trouble than it’s worth in anything but enterprise Java contexts.

Ant. I’ve used Ant several times on non-Java projects, to good results. Ant is powerful and has some nice capabilities for filtering/selecting actions. However, it also has an obtuse XML syntax that becomes cumbersome in complex build scenarios, and it can be finicky to set up all your Ant tasks properly.

CMake. CMake is ugly, but it’s an effective kind of ugly. The CMake language is gross and its codebase is complex, with important features often being driven by subtle combinations of settings. But the docs are pretty decent, the community is large, and it has good update velocity. It also generates pretty good project files for most IDEs. And it is pretty easy to hook arbitrary commands into key points in the build process, which is a big win. CMake is bad if you want to do a lot of file processing or complex logic, but good for making and running project files that work across many platforms – including iOS and OSX.

premake. Of all these technologies, I most want premake to rock. It uses Lua, which is an easy and low-dependency language, and it has a pretty good set of modules for emitting different projects. Most of the time, projects can be shipped, which is big, too. However, the core generators are finicky, and we had compatibility issues. And development velocity isn’t as high as we’d like. So we ultimately had to drop it. However, I think it’s worth a look again in the future.

QMake. QMake is mostly associated with Qt development, and exists to facilitate the preprocessing that Qt requires to generate all of its binding + convenience features. It uses a simple configuration language, and can be effective. However, its support for mobile platforms appears to be rudimentary, and it does not produce project files – it just sequences build commands.

Homebrew. My main experience here was a custom project generation tool I developed at GarageGames. (Ultimately, many others have touched it, and I believe that it is still in use as of this writing.) We decided to go the homebrew route because we needed to ship high quality project files to our customers. None of the existing tools could produce these (premake is now probably the closest). And our existing process of hand-tweaking projects resulted in a lot of broken releases. We ended up using PHP to process hand-written project file templates. It worked because we had a large enough team to be able to spend a few man-months refining it until it was good enough. The main takeaway from that experience was that it’s not as hard to do as you’d think – it’s just a matter of patience and groveling through exemplar build files to learn the format. The real cost is maintaining ongoing compatibility with all the different versions of all the IDEs. I hope that someday GarageGames releases this tool as open source.

So, with all those out there to consider – what am I using today? Well, we are using a hybrid of Rake and CMake. We use CMake for all the project generation + compilation, while Rake deals with sequencing calls to CMake and make or xcodebuild or what have you – mostly for the build box’s benefit. Our project targets iOS, Android, Mac, and Windows, and so far this combination has worked out well.

Ultimately, you want a tool that builds from a single command and doesn’t require user intervention to produce final build artifacts. Otherwise, you will be constantly chasing your tail as you move from developer to developer or platform to platform. Any of these tools can achieve this, so it’s a question of choosing the tool or combination of tools that fit your situation the best. Good luck!

Fast Bitmap Fonts in Flash

FreeType Font Metrics Chart
I got fed up one day and wrote a simple bitmap font renderer, BMFontRenderer. It parses bitmap font data from a generator like BMFont or Hiero and renders text of your choosing to a BitmapData.

BMFontRenderer is under the MIT license, so you can use it as you like.

Here’s an example:

    // Load the font.
    var bmfont:BMFont = new BMFont();
    bmfont.parseFont(font);
    bmfont.addSheet(0, (new fontSheet()).bitmapData);

    // OK, draw some text!
    var out:BitmapData = new BitmapData(200, 100, true, 0x0);
    bmfont.drawString(out, 0, 0, "Hello, world!");

(You can see the complete example on BMFontRenderer’s GitHub page.)

Great. So – why would you want to use this, given TextField is right there, waiting for you? (Translation: why did you get fed up, Ben?)

It comes down to control. TextField has a TON of knobs and buttons you can set. They all do semi-obscure things which are, in and of themselves, very exciting, but confusing to work with if you aren’t a font expert. Worst of all, it pulls font data from hard-to-inspect places that are populated by mxmlc at compile time or located on the user’s system, so you get different visual results depending on who is running your app, where and when you compiled it, and maybe even what browser it’s in.

The Flash IDE does a good job of hiding all this, and for beautiful animated vector text created by an expert in the tool, there is a great workflow. But when I need to show a high score at an artist-selected size in an artist-selected-and-provided font that has artist-approved antialiasing so it looks good on top of the artist-created background, it can be a lot easier to let the artist export the exact characters they want, how they want them to look, to a PNG. Then all I have to do is copy pixels around.
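
That “copy pixels around” core really is tiny. Here’s a sketch in C (the real renderer is ActionScript; these struct and function names are invented for illustration) of blitting one glyph’s rectangle out of a font atlas into a destination buffer:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical glyph record: just the glyph's rectangle within the
 * atlas. A real BMFont glyph also carries x/y offsets, advance, and
 * kerning; this shows only the core pixel copy. */
typedef struct {
    int x, y, w, h;
} Glyph;

/* Copy a glyph's pixels from a 32-bit atlas into a 32-bit destination
 * buffer at (dx, dy). One memcpy per scanline of the glyph. */
void blit_glyph(const uint32_t *atlas, int atlasW,
                uint32_t *dest, int destW,
                const Glyph *g, int dx, int dy)
{
    for (int row = 0; row < g->h; row++) {
        memcpy(dest + (dy + row) * destW + dx,
               atlas + (g->y + row) * atlasW + g->x,
               (size_t)g->w * sizeof(uint32_t));
    }
}
```

Rendering a string is then just walking the characters, looking up each glyph, and advancing dx – nothing you can’t step through in a debugger.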

Fonts are often less restrictive about licensing if you ship pixels instead of TrueType/vector data. Shipping raster data can save you a couple grand in license fees, never mind cutting down your download size quite nicely.

Doing it this way also puts 100% of the font handling code under your control. There are no parts you can’t debug, analyze, optimize, timeslice, or otherwise fiddle with. Never underestimate the value of this when you have a deadline. This is why I love libraries like Sean Barrett’s stb_truetype.h, which is a self-contained TrueType font renderer in a single C file.

There’s another fantastic bonus. Once an artist has characters in a PNG, they can open them up and tweak them – distress them, add glows or cutouts, or anything else that Photoshop can do. That’s a big realm of possibilities, and a lot easier than learning a font authoring tool.

To be sure, Flash’s built in font rendering has solid uses. If you need to dynamically animate, scale, or rotate text, you need vectors, and Flash has got you covered. If you want to display fully arbitrary unicode text, you may need to fall back on system fonts to fit in your download budget (PushButton Engine has a nice glyph cache for speeding up rendering, though). Or if you are working with people who are very comfortable in the Flash IDE, why not use a system they are familiar with?

For everything else, there’s BMFontRenderer. Enjoy!

Flash Player: A Declining Asset?

4 YEARS AGO – “A DECLINING ASSET”

I’m working at a technology startup and today I am talking to one of the founders. He looks at me and says, “Our main product is a declining asset.”

This is the product that generates 90% of our revenue and pays both of our paychecks. It’s the one that made our company a success, put us on the map.

Uh oh.

NOVEMBER 12, 2011 – ADOBE’S BIG IDEA

If you watched the Digital Media section of Adobe’s recent analyst meeting, you know that Adobe is putting a lot of focus on HTML5. Their recent announcement regarding dropping mobile web browser support for Flash Player caused a lot of turmoil, too, along with a shift in direction for the Flex SDK, their enterprise app framework.

If you look at the marketplace and the technologies at play, it seems that Adobe has realized that Flash’s position in the marketplace is eroding, that the erosion probably can’t be stopped, and they need to treat Flash as a declining asset. Just to review, here are some reasons that Flash’s position is eroding:

  • The many 3rd party mobile, native, and web-targeted development tools like Corona, Moai, Unity and others.
  • Non-Adobe Flash runtimes like Scaleform, Iggy. Companies like The Behemoth have their own Flash-compatible runtimes, too.
  • And of course the big one – HTML5. It can handle more and more enterprise apps, animation/multimedia content, and 3D. Browser vendors are in competition but increasingly targeting Flash-like capabilities.

Long term, HTML5 and other non-Flash technologies are unlikely to go away. Adobe may as well be proactive about owning the space rather than fight an unwinnable battle to keep everyone on Flash.

One more point to consider: Flash is made up of three big pieces. You have the tools, like Flash Builder and Flash Pro. You have the runtime, like the web plugin, the standalone player binaries, and AIR for desktop and mobile. And finally, you have the platform itself – the file formats, AVM specification, compilers, and APIs that define the behavior of Flash content.

They are all independent to a greater or lesser degree. The only part that probably wouldn’t migrate to HTML5 is the actual runtime (but see Gordon). And Adobe has been rumbling about compiling AS3 to JS/HTML5 and supporting C via Alchemy 2.

LABELS AND COMMUNITIES

Now, the funny thing about that conversation from four years ago is that, because of the mental label of “declining asset” we assigned, (at least) two interesting things happened. First, the company got acquired and tried to diversify into a couple of new markets. Second, I, along with a few other guys, left the company and went on to start a new one.

But the “declining” product continued to make more money than ever before. And in fact, it lives on today, despite the original company getting liquidated by its owner when the diversification strategy didn’t work out. So what does it mean, exactly, to be a declining asset?

I think “declining asset” is a label you put on something to help you make decisions. In Adobe’s case, the decision they made was to move their long term focus toward HTML5 and away from Flash Player.

There are some important things to keep in mind with the communities that develop around technologies and products. First, realize that the conversation is often dominated by the vocal minority – so what is said most often and loudest often doesn’t reflect on the actual needs of your user base. Second, realize that the people who post on your forums are emotionally invested in the product, have it as part of their identity, and they will be deeply unsettled by any signs that support is fading. Finally, realize that users often have a limited perspective. Community members are not tracking major market trends, they are looking at how they can meet their immediate needs (like getting contract work or finishing a specific project).

In other words, the community tends to act like a mob.

And I saw no better example of this than when I was on a group video chat last week and saw Flash professionals practically weeping, calling out Adobe representatives, making demands, and threatening to break up with the company over these announcements. It was more like seeing your drunk friend vent over his ex-girlfriend than watching a group of well-respected developers discuss their future. Everything is in turmoil, it’s the end of the world, everyone is screwed, etc.

REPORTS OF FLASH’S DEATH

Ok, but that isn’t actually the end of “Flash” as a whole. Probably. Even though it really sounds like it. Let me explain.

Adobe has a ton of outs from this situation that let them preserve their and your investments. The most obvious out is replacing Flash Player with HTML5. You export from Flash Pro or Flash Builder and it runs directly on HTML5. In fact, they have been inching towards this in different forms for a while now (the conversion tool on Labs, Edge, Muse, etc.).

Even if they drop AS3 and go with JS, their tools can still be useful. If Flash Pro can still create banner ads and interactive experiences for a large audience, who cares what the output runs on? Life will continue relatively unchanged for a lot of Adobe customers.

There’s also a more subtle out:

HTML5 has its weaknesses. Lots of them. But public opinion supports it. Maybe it’s just a Betamax vs. VHS difference. Or maybe HTML5 is doomed due to the conflicting goals of vendors and the difficulty of the implementation task.

Maybe HTML5 ends up being great for less demanding uses – like basic enterprise apps, ads, motion graphics, etc. – but can’t get it together for highly demanding and integrated stuff like games. Adobe can keep Flash around and focus specifically on the game use case – which, by the way, is also highly beneficial for non-game apps, since they tend to use subsets of game functionality – and get as much value from it as possible for as long as possible.

Between the games angle and inertia, Flash could remain relevant for years. It could even end up totally dominating that space for a long time to come, even as HTML5 takes over the bottom of the market, because Flash can stay more focused and agile.

CONCLUSIONS

Let me add two caveats. First caveat: At some point you can only expect so much out of a platform – you can’t get a guarantee that it will remain relevant for ten years. Even proven, still-relevant technologies like C have had their death announced many times. At some point you just have to say, “well, N years more relevance is good enough and I’ll re-evaluate in a year.”

Second caveat: Maybe Adobe screws the pooch and that’s that. Maybe they cut too many resources from Flash. Maybe they don’t build good stuff on HTML5. Maybe they ruin everything. So don’t bet the farm. Make sure you learn a few different technologies well. It will make you a better developer, even if you still just do Flash work day to day. And you’ll sleep easier knowing that if worst comes to worst you have an out. I’ve never seen a successful programmer regret having learned a new language or paradigm.

I don’t think Adobe is making bad decisions, just difficult ones.

Bottom line: Flash is a declining asset, but declining assets aren’t dead or even out of the fight. Everyone needs to look at a technology on its merits and see if it’s a good fit for their needs. There are a lot of places where Flash will continue to be a good fit for a while to come – and the places where it is ambiguous deserve careful consideration regardless of Adobe’s stated plans.

(Thanks for reading! If you liked this article, please consider voting it up on HackerNews or Reddit)

Molehill and the Display List

One of my posts on the Flash display list was quoted recently in a post by Amos Laber on his excellent blog. He said:

So developers like Ben Garney are opting to write their own renderers in order to gain better performance, but that is not an ideal long term solution. A much better one would be to utilize both multi-threading and GPU hardware acceleration for the standard flash Display List.

[Image: an example of a very basic game UI.]

We are seeing an uneasy alliance between Stage3D and DisplayObject. They work together, but not fantastically. How can Adobe reconcile these two different worlds? As Amos points out, it’s a lot like the bad old days of the early 90s, when UI libraries were non-existent for OpenGL/DirectX and games got by with the bare minimum in terms of UI.

Flash is pivoting from a rich content web runtime to a platform. Things that were previously built into the player need, in my opinion, to become a minimal native API that is enriched by powerful libraries. The display list is a great example of this. 90% of what the display list does can be done as well or better by pure AS3 (in fact, if you look carefully, many of the native DisplayObject methods are actually implemented in AS3). So why not move all that functionality into an AS3 library that comes with the platform, and focus on making the remaining 10% as good and generally useful as possible?
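To make the layering argument concrete, here is a minimal sketch of a display list implemented entirely as library code on top of a tiny “native” drawing primitive. All of the names here (`Surface`, `blit`, and so on) are hypothetical, invented for illustration – this is not Flash’s actual internal structure, just the shape of the argument: keep the native layer small, and build the scene graph conveniences on top of it in plain AS3/JS-style code.

```typescript
// The minimal "native" layer: all it knows how to do is blit at a position.
// (Hypothetical API, standing in for a small core like Molehill/Stage3D.)
interface Surface {
  blit(texture: string, x: number, y: number): void;
}

// Everything below is pure "library code" – the 90% that needs nothing
// from the runtime beyond the primitive above.
class DisplayObject {
  x = 0;
  y = 0;
  children: DisplayObject[] = [];

  addChild<T extends DisplayObject>(child: T): T {
    this.children.push(child);
    return child;
  }

  // Depth-first traversal, accumulating parent offsets, bottoming out
  // in calls to the single native primitive.
  render(surface: Surface, offsetX = 0, offsetY = 0): void {
    this.draw(surface, offsetX + this.x, offsetY + this.y);
    for (const child of this.children) {
      child.render(surface, offsetX + this.x, offsetY + this.y);
    }
  }

  protected draw(_surface: Surface, _x: number, _y: number): void {
    // Plain containers draw nothing themselves.
  }
}

class Bitmap extends DisplayObject {
  constructor(public texture: string) { super(); }
  protected draw(surface: Surface, x: number, y: number): void {
    surface.blit(this.texture, x, y);
  }
}

// A recording surface, so the example is self-checking without a GPU.
class RecordingSurface implements Surface {
  calls: string[] = [];
  blit(texture: string, x: number, y: number): void {
    this.calls.push(`${texture}@${x},${y}`);
  }
}

const stage = new DisplayObject();
const panel = stage.addChild(new DisplayObject());
panel.x = 10; panel.y = 10;
panel.addChild(new Bitmap("button"));            // inherits panel's offset
const icon = panel.addChild(new Bitmap("icon"));
icon.x = 5;

const surface = new RecordingSurface();
stage.render(surface);
console.log(surface.calls.join(" "));  // button@10,10 icon@15,10
```

Note that nothing in the library layer is privileged: parenting, coordinate inheritance, and render order all fall out of ordinary code, which is exactly why they could live in a platform-supplied library rather than the player itself.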

It’s like how an operating system has just a few basic routines for working with the file system, on top of which people build a wide variety of powerful tools like Finder, Explorer, Google Desktop, Alfred, bash, and so on.

The Flash team and the surrounding community have done the software world a tremendous service by developing great ways to build rich interactive experiences. Tweening and the display list are key foundations of those techniques. But they can work anywhere, and in almost any language – take a look at Sparrow, for instance, which provides a lot of the Flash API on iOS.

If I were going to make a prediction for Flash’s future, it would be that, long term, the display list will take a step back in favor of core APIs like Molehill. Of course, there will still be a display list or display-list-like APIs, but they will be conveniences on top of fundamental capabilities. This not only follows the trends seen in OS X, Windows, Java, and other platforms, but also enables more innovation and choice on the part of developers.