Adobe, Please Buy HaXe.

If you’re in the Flash space, you’ve probably seen a lot of coverage of HaXe lately. HaXe targets Flash 9 and 10, as well as JavaScript, PHP, NekoVM, and most recently C++. You can write code once and easily retarget it to different platforms, which is very exciting. This has been used to good effect by Hugh Sanderson, who is developing an iPhone target.

HaXe is cool not only for platform compatibility with things like the iPhone (and who knows – maybe other platforms like Android or Windows Mobile), but also for its language features. The HaXe team shipped support for the new Flash 10 high-performance memory intrinsics before the official Adobe compiler did. HaXe has generics/templates, decent conditional compilation, inlining, advanced typing, and a lot more. HaXe’s biggest claim to fame is a significant performance margin over AS3, along with better memory management options.

And performance is a big deal when you’re an interpreted language trying to run on Netbooks and televisions.

Since my day (and sometimes nights, too) job is working on a Flash game framework, HaXe looms large on my radar. AS3 and HaXe can already interoperate, so if you want to use a HaXe physics library from your ActionScript 3 code, there’s no problem. But should we switch the core engine over? Having a seamless way to target the iPhone would be big. Having an easy way to run your server code without requiring an integration with Tamarin would also be nice.

Ralph Hauwert over at unitzeroone summarizes the HaXe situation very well:

The language is great and has many features ActionScript lacks. It baffles me how one-man army Nicolas can single-handedly write a language and a multi-target compiler, and in many respects stay on top of the ActionScript compiler.

My first problem with Haxe is just that. The one-man army. Should Nicolas decide he doesn’t feel like it anymore, the platform is dead. Sure, there are more people working on it, but at its core it seems that Nicolas is the driving force. My second problem is that while there is a huge pool of ActionScript developers who can work together within the industry standard called ActionScript, nowhere near that many people have even touched Haxe, let alone the number of developers who are able to call themselves “Haxe Professionals”.

HaXe is very cool. But it doesn’t have the ecosystem and infrastructure to justify targeting it exclusively. As a small startup, we only get to target one platform for our technology, and AS3, although it leaves performance on the table, wins in terms of the pool of potential developers and the accessibility of the toolchain. How many people out there are using PHP or Java vs. x86 assembly?

If the basic technology is good enough, ease of use always wins out over performance. So for now we are going to stick with ActionScript 3 for PushButton Engine. You can write compelling content on it. If you really need HaXe for something, the option is there to link it in.

But I want to put the plea out to Adobe. I love you guys and all you have done in the Flash space. But please update your compiler technology. At least give us the compiler optimization wins like inlining and generics. It’s simply embarrassing that one guy’s work can outperform you on your own VM!

Adobe, you have smart people working for you. You have smart people in your community. You have pretty decent tools. None of us want to jump over to a third-party technology, but if HaXe keeps beating you at your own game, it might be necessary. Buy HaXe if you must, or else match it with your in-house tech, but either way – I know you can solve this problem.

The web is catching up with you on the player front. You could dominate on the tools front and make the world better for everyone. Please. As a middleware developer, I really don’t want to have to support anything other than a single mainstream language and platform. 🙂

Technical Notes on O3D

Google released O3D, a web plugin for 3D rendering, today. It’s a pretty sweet piece of work. Definitely check it out if you have any interest in 3D rendering, the web, or JavaScript.

There are a couple of cool pieces to this puzzle, and I wanted to call them out to other people who might be reviewing this technology. The two-sentence review: I am blown away. They got this right.

(FYI: This post isn’t meant to be a full walkthrough, just a quick indication of the cool jumping off points in the codebase.)

Stop number one is the graphics API abstraction. This is rooted in o3d/core/cross/renderer.h, which provides an abstract interface for performing render operations. It is sensibly designed, oriented toward SM2.0 through SM4.0 level hardware. It will also deal with DX11-class hardware, although it won’t expose all of the niceties DX11 gets you. It wraps GL (o3d/core/cross/gl/) and D3D9 (o3d/core/win/d3d9). The purpose of all this is to provide a common ground from which all the higher-level code can issue draw commands. It is not directly accessible from JavaScript (although some of the related classes, like Sampler and State, are).
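The shape of such an abstraction is easy to sketch. Here is a hypothetical C++ miniature of the pattern (the names are mine, not O3D’s): higher-level code talks only to an abstract interface, and each graphics API gets its own backend subclass.

```cpp
#include <string>
#include <vector>

// Hypothetical miniature of a backend-agnostic renderer interface.
// Higher-level code talks only to Renderer; each API gets a subclass.
class Renderer {
public:
    virtual ~Renderer() {}
    virtual void Clear(float r, float g, float b) = 0;
    virtual void DrawTriangles(int vertexCount) = 0;
};

// A stand-in for a GL or D3D9 backend; here it just logs calls.
class LoggingRenderer : public Renderer {
public:
    std::vector<std::string> log;
    void Clear(float, float, float) override { log.push_back("clear"); }
    void DrawTriangles(int n) override {
        log.push_back("draw " + std::to_string(n));
    }
};

// Scene code issues draw commands without knowing the backend.
void RenderFrame(Renderer& r) {
    r.Clear(0, 0, 0);
    r.DrawTriangles(36);
}
```

Swap in a D3D9 or GL subclass and RenderFrame doesn’t change at all, which is the whole point.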

As an aside, writing a Pixomatic or Larrabee backend would not be hard. At GarageGames, we had a rendering API similar to this (unimaginatively called GFX), and it was fantastically useful. Last time I checked, there were DX8, DX9, DX10/11, GL, and Pixomatic backends for it in varying states of usefulness. We even had one guy write a backend that would stream draw commands over a socket, which was pretty cool. In the context of O3D, they have an implementation of Renderer that queues everything into command_buffer, then streams it to a command buffer server running in a separate thread, which issues the actual draw commands. From casual inspection, it’s unclear whether this is used in the current version of the plugin, but it’s a great example of what good design can get you.
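The command-buffer trick deserves a sketch of its own. Assuming you flatten calls into plain structs (all names here are hypothetical, not O3D’s actual types), recording and playback look roughly like this:

```cpp
#include <vector>

// Hypothetical sketch of the record/playback idea: the client encodes
// draw calls into a flat command buffer, and a consumer (possibly in
// another thread or process) decodes and executes them later.
enum Op { OP_CLEAR, OP_DRAW };

struct Command { Op op; int arg; };

struct CommandBuffer {
    std::vector<Command> commands;
    void Clear()            { commands.push_back({OP_CLEAR, 0}); }
    void Draw(int vertices) { commands.push_back({OP_DRAW, vertices}); }
};

// The consumer walks the buffer and issues the real draw calls.
// Here it just counts vertices so the sketch is checkable.
int Playback(const CommandBuffer& cb) {
    int verticesDrawn = 0;
    for (const Command& c : cb.commands)
        if (c.op == OP_DRAW) verticesDrawn += c.arg;
    return verticesDrawn;
}
```

Because the buffer is just data, you can hand it to another thread, or serialize it over a socket, exactly like the GFX backend mentioned above.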

The next piece is the DrawList, which is where most of the heavy lifting for drawing happens. You don’t really want JavaScript in your inner rendering loops, so you queue everything you want to draw (DrawElements) into a DrawList. This is wrapped by higher levels, of course, but it represents the lowest-level API that’s available to JS code. All the sorting and state management to get stuff on screen happens in this area.
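To see why the sorting matters, here’s a hypothetical miniature (not O3D’s actual DrawList API) showing how sorting queued elements by material key reduces expensive state changes:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical sketch of why a draw list sorts before drawing:
// grouping elements by material/state key minimizes state changes.
struct DrawElement {
    int materialId;  // state key: shader + textures + render state
    int meshId;
};

struct DrawList {
    std::vector<DrawElement> elements;
    void Queue(DrawElement e) { elements.push_back(e); }

    // Count how many times we'd have to switch materials while drawing.
    int StateChanges() const {
        int changes = 0, current = -1;
        for (const DrawElement& e : elements) {
            if (e.materialId != current) { ++changes; current = e.materialId; }
        }
        return changes;
    }

    // Stable sort keeps submission order within each material group.
    void SortByMaterial() {
        std::stable_sort(elements.begin(), elements.end(),
            [](const DrawElement& a, const DrawElement& b) {
                return a.materialId < b.materialId;
            });
    }
};
```

Queue four elements alternating between two materials and you pay four state changes; sort first and you pay two. On real hardware that difference is where the frame time goes.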

Alongside the DrawList stuff, you find the material system, which is pretty slick. You write your shaders in their shading language, and it converts them to HLSL SM2.0 and Cg (arbvp1/arbfp1). This is enough to do almost anything you might want (as fantastic as SM3.0+ is, it’s not really necessary for most rendering). There’s a full SAS system, so you can interface programmatically with your shaders.

Above this, you get into the scenegraph. Now, I subscribe to Tom Forsyth’s view on scenegraphs, which is that they are basically a bad idea, but I think the Google guys were smart and set up their API so intelligent developers can avoid being screwed by the scenegraph. The scenegraph-esque stuff they do have (they call it a rendergraph) is powerful enough that you can do most rendering without going nuts, and it lets a lot of the heavy lifting stay in native code, where it will be fast. You end up doing retained-mode-ish things, but since JS is slow, and most JS developers come from a DOM background, it works better than you might expect.
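The win of keeping traversal native is easy to illustrate. Here is a toy sketch (hypothetical types, not O3D’s actual rendergraph classes) of a retained graph that script code builds once and native code walks every frame:

```cpp
#include <vector>

// Hypothetical miniature of the retained-mode idea: script code builds
// a graph of nodes once; native code traverses it every frame, so the
// per-frame hot loop never touches the (slow) scripting language.
struct Node {
    bool visible = true;
    int drawCost = 0;                 // stand-in for "issues a draw call"
    std::vector<Node*> children;
};

// Per-frame traversal: skip invisible subtrees, tally the draw work.
int Traverse(const Node& n) {
    if (!n.visible) return 0;
    int total = n.drawCost;
    for (const Node* child : n.children) total += Traverse(*child);
    return total;
}
```

Toggling a single `visible` flag from script culls a whole subtree without the script ever touching the traversal loop, which is the retained-mode payoff the rendergraph is after.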

There are a lot of cool miscellaneous features, too. They embed the fast V8 JS engine right in the plugin so you can have consistently fast JS execution. They support falling back to an error texture (via Client.SetErrorTexture) when you fail to bind a sampler. You can group objects via Pack objects for easier management and stream things from archives, too. There are debugging aids and libraries for math and quaternions.

You should check out the samples. There’s a lot of impressive stuff. There’s the beach demo in the video at the top of this post, but they also do some cool stuff with animation, HUDs, text rendering, picking, and stenciled teapots. And everything is done in JavaScript right in the web page. Even the shaders are embedded in <script> tags!

O3D is a great piece of technology, and I hope it thrives. I’m excited to see what people build on this (and I’ve got a few ideas myself ;).

PlayStation Home: Serial Killer Edition

My Default Playstation Home Avatar

PlayStation Home came out not too long ago. Penny Arcade pretty much said what needs to be said about it, but I had to try it myself. Naturally, the first thing you do is create your character, starting from a randomized (I hope) default.

To the left is the default character that I got.

Sony appears to be targeting the coveted 18-25 male Caucasian skinhead/serial killer demographic. I’m kind of surprised that they didn’t filter the default characters to avoid characters that appeared obviously threatening/defective. If this guy showed up at my front door, I’d call the cops.

Naturally, I ended up tweaking him to try to make him freakier, but honestly, what I ended up with was less terrifying. He resembled a creepy real-life version of Robbie Rotten.

He stood out nicely compared to all the cookie cutter people on Home, but nobody really cared. In fact, it was hard to communicate with people or interact with the world at all. The loading times were ludicrous. The difference between last gen and this gen is that you can let things load in the background while you stare at the room you’re stuck in. Why would Sony spend millions of dollars and years of time on something like this?

Does anyone else get the feeling that big game projects don’t have fun on their agendas?

Web Service Explosion: Put Out The Fire With FriendFeed

How many web services do you use that have a section for linking into your other web services? I love having my internet presence linked together. But the number of links grows quadratically: 10 services means 45 pairwise connections to maintain.

Most of these integrations exist just to post status messages from one place to another or to provide a link to a profile. Why not use a central service like FriendFeed to handle the details? The FriendFeed API is almost there. If it had better options for pushing data outward, 90% of that service-configuration box could go away.


A bunch of great papers related to Larrabee came out of SIGGRAPH 2008. They offer great perspective on parallel computing in general.

My read is that the GPU guys don’t have a clue. A lot of their talent has moved on. They’re focusing on maximizing flops without realizing that it isn’t always the flops that matter – it’s how you spend them. There’s certainly a market for teraflops of raw power, but LRB’s feature set maps so beautifully to rendering that I think it will more than make up for the reduced raw performance.

The trend in rendering is towards branch-heavy shaders doing random access. GPUs are fast when you’re doing branchless shaders with linear access patterns – but the trend is forcing them into the realm that Intel has been dominating for thirty years.

I guess we’ll see where things are in five years. But you can see which way I want things to go. ;)