Ludum Dare 26 & Loom

Are you a fan of Ludum Dare? I’ve followed it for a long time. The huge community of excited developers is fantastic to watch, and some great games come out every time. More than that, LD is a great opportunity – such a good opportunity, in fact, that we’re giving LD participants a huge deal on Loom (but more on that later).

The incredible opportunity in an event like LD is that it gets you to finish something. It’s so common for projects to run on and on and on and on… Professionally, you could work in AAA games for a decade and only ship a few games. Imagine being a professional painter and only making 10 paintings in your whole career.

There are big lessons you only learn when you finish. Like – was the feature you spent 80% of your time working on what made the game fun, or was it the feature you added at the last minute on a lark that made the whole game work? Is your gameplay immediately understandable? How much is your fun driven by content vs. gameplay? What dumb things kept people from enjoying your game (like missing DLLs, unclear instructions, installer issues, and so on)? What REALLY goes into the last 20% it takes to ship?

You also get the big endorphin rush of release. It feels GOOD to ship. Even if you decide the project was a failure, completing it is good. You can put it on the shelf and refer to it later. And it’s motivating to know you’ve gotten something DONE and don’t have to think about it any longer.

It’s easy to get stuck in the doldrums of project creation. You end up going around and around creating new things on new tech. It’s shiny and in some ways fun, but you never experience the growth and maturation that comes from shipping and sharing your creation with the world. Shipping – even something small – gets you out of that rut.

Take some time and participate in Ludum Dare 26. Creating and finishing a small game project is one of the best investments you can make in yourself – not just as a game developer but as a professional. It’s easy to overlook how valuable this can be.

And of course – Loom is a great fit for making small games fast. Through LD26, use the code GO_LD26 to get 50% off all Loom subscriptions. Get Loom and go make something cool!

Loom is Launched!

Howdy!

You may have wondered what I’ve been up to since PushButton Labs and PushButton Engine. Nate Beck, Josh Engebretson, and I are proud to share our latest creation, the Loom Game Engine, with the world. It’s a native mobile game engine with live reloading of code and assets, a great command line workflow, and a powerful AS3-like scripting language.

Check out this sweet video demoing Loom:

We’re giving away Loom Indie Licenses (normally $500) for FREE until Mar 29, the last day of GDC. We’ve already given away almost $2,000,000 in licenses. Get yours now!

TCP is the worst abstraction.

You are hopefully familiar with Leaky Abstractions as described by Joel Spolsky. The idea is that when you add layers to hide messy details, you can mostly avoid having to know what exactly is going on – until something breaks. Think of it as putting a smooth plastic coating on your car. Everything is really simple and zero-maintenance until your engine breaks and now you’re peeling plastic back trying to figure out which part is on fire…

TCP makes some big promises. “Your data will magically arrive in order and on time!” “Don’t worry about it, I’ll retry for you.” “Sure – I can send any amount of data!” “Hahah, packet sizes? I’m sure we don’t have to worry about those.”

Let’s talk about springing leaks. Just like when your upstairs neighbor’s toilet springs a leak and you suddenly have to deal with the concrete realities of a high-flow water source above your bedroom ceiling, a leaking abstraction means you can’t use the abstraction anymore. You now have to work with the underlying system, often at one remove (or more!), because you’re working through the very abstraction you chose to shield you from it in the first place!

TCP is leaky as a sieve. TCP says “I’ll just act like a stream and send bytes to someone on the internet!” But here are just a few areas where TCP breaks:

  • If you send too much data at once (the OS buffer fills and the write fails; you then have to resend).
  • If you send too little data at a time (the OS will sometimes fix this for you, see Nagle’s Algorithm, which can be good or bad depending on when that data needs to go over the wire).
  • If you try to read too much data at once (again, the OS receive buffer has limited size – so you have to be able to read your data in chunks that fit inside that limit).
  • If you transfer data at the wrong rate (the TCP flow control rules can be a problem).
  • If you try to read too little data at a time (then OS call overhead dominates your transfer speeds).
  • If you want to assume data has arrived (it may not have; you have to peek and see how much data there is and only read if there is enough, which necessitates careful design of your protocol to accommodate this).
  • If you want to initialize a connection in a deterministic fashion. (You have to do a bunch of careful checks of domain/IP/etc. to make sure it will even go through, and once the connection is established you have to figure out whether it’s alive or not. It can also take quite a while to establish a connection and get data flowing; see efforts like SPDY.)
  • If you are on a lossy network (it will incur arbitrary overhead resending lost data).
  • If you want to manage latency (you have to take care to send data in correct packet boundaries).
  • If you want to connect through a firewall (good luck with that one).
  • If you want to use nonblocking IO. (You have to do a bunch of platform specific crud and even then not all actions are nonblocking; you have to live in a separate thread and block there.)
  • If you want to run a popular service. (There are a lot of ways the OS can be tricked by outside parties into mismanaging its resources leading to starvation/denial of service attacks.)

IMHO, TCP is an abstraction in name only. If you want to get any kind of decent results from it, you have to fully understand the entire stack. So now not only do you have to know everything about TCP, you have to know everything (or at least most of it) about IP, about how the OS runs its networking stack, about what tricks routers and the internet will play on you, about how your protocol’s data is set up, and so on.
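
To make the first of those leaks concrete: even a plain blocking send() is free to accept fewer bytes than you handed it, so real-world TCP code ends up looping until the whole buffer is gone. Here’s a minimal sketch, assuming POSIX-style sockets and a connected socket named sock – the send_all helper is my own illustration, not part of any library:

// A minimal "send everything" loop over a connected, blocking TCP socket.
#include <sys/types.h>
#include <sys/socket.h>
#include <errno.h>

// Returns 0 on success, -1 on error. A single send() may accept fewer
// bytes than requested, so we keep calling it until everything is queued.
static int send_all(int sock, const char *buf, size_t len)
{
    size_t sent = 0;
    while (sent < len)
    {
        ssize_t n = send(sock, buf + sent, len - sent, 0);
        if (n < 0)
        {
            if (errno == EINTR)
                continue; // Interrupted by a signal; just retry.
            return -1;    // Real failure (or EAGAIN/EWOULDBLOCK on a nonblocking socket).
        }
        sent += (size_t)n;
    }
    return 0;
}

The receive side has the mirror-image problem: recv() hands you whatever happens to be buffered, so your protocol needs its own framing (length prefixes, sentinels, or fixed-size records) to know when a complete message has actually arrived.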

I came to networking in a roundabout way. I did a couple of small TCP projects in my teens, but I spent most of my formative programming years (18-23 or so) working with Torque, which uses the User Datagram Protocol (UDP). Here’s what UDP code looks like:

// Send a packet.
sendto(mysocket, data, dataLen, 0, (struct sockaddr *)&destAddress, sizeof(destAddress));
// Receive a packet; recvfrom fills in the sender's address and its length.
socklen_t fromLen = sizeof(fromAddress);
recvfrom(mysocket, data, dataLen, 0, (struct sockaddr *)&fromAddress, &fromLen);

It’s very very simple and it maps almost directly to what the Internet actually gives you, which is the ability to send and receive routed packets from peers. These packets aren’t guaranteed to arrive in order nor are they guaranteed to arrive at all. In general they won’t be corrupted but it would behoove you to check that, too.

This is primitive, like banging two rocks together! Why do this to yourself? Well – it depends. If you just need to create some basic networking behavior and don’t care if it’s subpar, TCP works well enough, and if you have to, you can get it to sing for certain situations. And sometimes TCP is required because of firewalls or other technical issues. But if you want to build something that is native for the network, and really works well, go with UDP. UDP is a flat abstraction. You have to take responsibility for the network’s behavior and handle packet loss and misdelivery. But by doing so you can skip leaky abstractions and take full advantage of what the network can do for you.
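
As a tiny illustration of taking that responsibility, here’s roughly what a homemade per-packet header and freshness check might look like – a sequence number lets you detect drops, duplicates, and reordering, and a checksum field lets you verify payloads yourself. This is just a sketch with made-up names, not Torque’s actual wire format:

#include <stdint.h>
#include <stddef.h>

// Illustrative per-packet header; a real engine carries more (acks, flags, etc.).
typedef struct
{
    uint32_t sequence; // Increments with every packet sent.
    uint32_t checksum; // Covers the payload that follows (fill in with your favorite CRC).
} PacketHeader;

// Accept a packet only if it is newer than anything we have already seen.
// The signed subtraction handles sequence-number wraparound.
static int is_packet_fresh(uint32_t sequence, uint32_t *lastSeen)
{
    if ((int32_t)(sequence - *lastSeen) <= 0)
        return 0;         // Duplicate, out of order, or stale; drop it.
    *lastSeen = sequence; // Remember the newest packet we have accepted.
    return 1;
}

Dropped packets simply never show up, so anything that has to arrive gets resent by your own code, on whatever schedule makes sense for the game.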

Sometimes it’s better to solve hard problems up front, rather than ignoring them and hoping they go away.

Some Thoughts on Build Servers

Continuing from last week’s thoughts about build systems, let’s talk about build servers.

Say you’ve gotten a build system, like CMake, up and running in your project. Now the developers on your project are doing consistent builds across all your different platforms, and people are hopefully not missing important build steps anymore. However, there can still be differences between systems, and it’s hard for any one developer to try building on all platforms. Additionally, a developer’s machine can have leftover crud from old builds that throws things off.

The next step in sanity for your project workflow is to set up a build server. This is a box (or boxes) that sits there and pulls down clean copies of your codebase and does full builds, from scratch, all the time. It is kept in a pristine condition so that you don’t e.g. introduce a new dependency in your production binaries by updating Visual Studio. To do these builds, it runs some continuous integration package that lets developers trigger builds, check up on their status, and view build logs to find out why things broke. CI packages can also do fancier tricks like run unit tests, upload builds to QA or even the public, tag releases, and so on. (We’ll get into advantageous use of these in a later post.)

There are a lot of great build server options out there, and I’ve worked with or evaluated many of them. Let’s walk through them in order:

Tinderbox. Mozilla’s Tinderbox was one of the first public continuous integration packages. Much like Bugzilla, it was the first place many people looked – and most people moved on to look at something better suited to their needs right away. This is not necessarily a knock on Bugzilla or Tinderbox, as they and their derivatives have continued to serve Mozilla just fine over the years.

PMEase QuickBuild. QuickBuild was my first experience running a build server. We used QuickBuild 1.0 for C++ Torque builds on multiple platforms, which was weird and new – at the time, most build server packages – including this one – were very Java oriented. Luckily, we could call out to MSVC and XCode from Ant! PMEase has kept with it, and now they’re at version 4.0. I found them to have very responsive support, and QB itself was nice to configure and work with. It’s more expensive than some of the other options ($3k/site), but if you have the budget it’s worth a look.

Bamboo. I had high hopes for Atlassian’s Bamboo, as JIRA is a powerful and reliable tool. However, when I last evaluated it (in late 2010 or so), I found its paradigm hard to understand. I just couldn’t figure out how I was supposed to use it – it had a lot of proprietary terminology that confused me. Additionally, it did not have good support for building all topic branches, which was a big part of the workflow I wanted my team to use. Looking it over again, I’m not sure it has improved on either front. However, I hold Atlassian in fairly high regard, so I am hopeful that someone else has figured it out and can enlighten me. 🙂

Jenkins/Hudson. After Sun’s sale to Oracle, Jenkins was forked from Hudson to continue open source development. It runs on a wide variety of platforms, and it’s easy to set up. It has a large set of plugins of varying levels of maturity, and it’s not that hard to write your own. However, key plugins (for instance, the Git plugin) can have frustrating gaps and holes, and because it’s community driven, bugs can linger for a long time. The REST API is also weak in places, making it hard to extend with custom tools/scripts. My experience is that Jenkins is a solid choice for simpler projects, but if you want to push your build server hard it can fall apart on you. We used it for several smaller projects, where it worked great; then we took it into a project with a large, 50+ member team of artists and developers. In that scenario, we ended up having to extend it heavily with custom scripts, mostly to add functionality it should have had to begin with.

JetBrains TeamCity. TeamCity is free for lighter use, although heavier usage requires purchasing licenses from JetBrains. TeamCity has very solid Git/JIRA integration, and a well thought out UI. Setup isn’t hard and it has good support for adding distributed build agents. We’ve been very happy with it for our C++ and Ruby projects. It has good support for building topic branches, too.

In the end, what is crucial for continuous integration software? It should be reliable, and especially robust in the face of broken builds. It should be easy for the team to understand and use, especially when they are debugging build issues. It should be able to scale to build across all your platforms, quickly – it shouldn’t take more than 10 minutes or so for a full build across all platforms to complete.

Some Thoughts on Build Systems

Note: You might also want to read Some Thoughts on Build Servers, which discusses software packages for running automated builds on a shared server.

The hardest part of software development is often the road from code in a repo to an artifact in the user’s hands.

There are a million ways you can ruin yourself along that long and winding path. You miss a DLL or dependency. You mark a flag wrong and it won’t run on a non-developer system. A setting gets changed on the system doing the compile and builds mysteriously fail. And with multiple platforms (and who isn’t on at least a couple?), you forget to test on all of them and find out the build is broken – obviously or subtly – on one.

One building block that helps cut down on this pain is a build tool – a tool to manage what files are built, in what order, and with what settings. If you can fire off your build with a single command, it dramatically reduces the risk of breakage due to outside factors – and helps a lot with setting up build boxes. At this point I’ve worked with nearly every option: make, msbuild, xcodebuild, rake, Maven, Ant, CMake, premake, qmake, and even a couple of homebrew systems. Here are my thoughts on each of them:

GNU Make. The granddaddy of all build tools. Cryptic syntax, most widely used on POSIX-compatible environments like Mac or Linux. It can do anything you want it to, if you’re willing to dive deep enough into it. Provides very little hand holding. Tools like automake and autoconf expand its capabilities quite a bit, but they are anything but intuitive, and if your goal isn’t a UNIX command line tool, they may be frustrating to work with. Makefiles are generally shippable if you are willing to put enough smarts in them (since they are fundamentally built on top of the shell). Makefiles are also easy to generate, and many tools exist to do so programmatically (more on those later).

MSBuild. The successor to nmake (with a brief detour through devenv.exe), it owes a lot of its legacy to make. However, it’s fully integrated with Visual Studio, so if you have a Visual Studio project, it’s easy to drive. In general, vcprojs are pretty easy to programmatically generate, and also easy to ship to other Windows developers, which is a big bonus. There’s no real viability for sharing cross-platform, except possibly in the context of Mono development.

XCodeBuild. The command line tool for compiling XCode projects. It works just like XCode does, minus the goofy UI. Great for doing OSX/iOS builds, nothing doing for any other platforms. XCode project files are relatively easy to ship to people, although there can sometimes be little subtleties that screw you up. One nice thing about XCode’s build model is that it’s fairly easy to call your own scripts at various points in the build process. The downside is that xcodeprojs are finicky and hard to generate.

Rake. Ruby is pretty sweet, and Rake builds on it in the Ruby way – that is, with a domain specific language tailored to the task at hand. The downside is that the docs are inscrutable – you pretty much need to be prepared to look at a lot of examples and dive into the code to understand it well. But it responds well to hacking and generally gets the job done. Since Rake just sequences commands, it works great for non-Ruby projects – it’s basically a much better Make.

Maven. I have seen Maven used very well in real Java projects, and abused heavily in non-Java scenarios. If you grok the Maven way and are prepared to conform to its view of the world, you can get good results. But in general I think it is much more trouble than it’s worth in anything but enterprise Java contexts.

Ant. I’ve used Ant several times on non-Java projects, to good results. Ant is powerful and has some nice capabilities for filtering/selecting actions. However, it also has an obtuse XML syntax that becomes cumbersome in complex build scenarios, and it can be finicky to set up all your Ant tasks properly.

CMake. CMake is ugly, but it’s an effective kind of ugly. The CMake language is gross and its codebase is complex, with important features often being driven by subtle combinations of settings. But the docs are pretty decent, the community is large, and it has good update velocity. It also generates pretty good project files for most IDEs. And it is pretty easy to hook arbitrary commands into key points in the build process, which is a big win. CMake is bad if you want to do a lot of file processing or complex logic, but good for making and running project files that work across many platforms – including iOS and OSX.

premake. Of all these technologies, I most want premake to rock. It uses Lua, which is an easy and low-dependency language, and it has a pretty good set of modules for emitting different projects. Most of the time, projects can be shipped, which is big, too. However, the core generators are finicky, and we had compatibility issues. And development velocity isn’t as high as we’d like. So we ultimately had to drop it. However, I think it’s worth a look again in the future.

QMake. QMake is mostly associated with Qt development, and exists to facilitate the preprocessing that Qt requires to generate all of its binding + convenience features. It uses a simple configuration language, and can be effective. However, its support for mobile platforms appears to be rudimentary, and it does not produce project files – it just sequences build commands.

Homebrew. My main experience here was a custom project generation tool I developed at GarageGames. (Ultimately, many others have touched it, and I believe it is still in use as of this writing.) We decided to go the homebrew route because we needed to ship high quality project files to our customers. None of the existing tools could produce these (premake is now probably the closest), and our existing process of hand-tweaking projects resulted in a lot of broken releases. We ended up using PHP to process hand-written project file templates. It worked because we had a large enough team to spend a few man-months refining it until it was good enough. The main takeaway from that experience was that it’s not as hard to do as you’d think – it’s just a matter of patience and groveling through exemplar build files to learn the format. The real cost is maintaining ongoing compatibility with all the different versions of all the IDEs. I hope that someday GarageGames releases this tool as open source.

So, with all those out there to consider – what am I using today? Well, we are using a hybrid of Rake and CMake. We use CMake for all the project generation + compilation, while Rake deals with sequencing calls to CMake and make or xcodebuild or what have you – mostly for the build box’s benefit. Our project targets iOS, Android, Mac, and Windows, and so far this combination has worked out well.

Ultimately, you want a tool that builds from a single command and doesn’t require user intervention to produce final build artifacts. Otherwise, you will be constantly chasing your tail as you move from developer to developer or platform to platform. Any of these tools can achieve this, so it’s a question of choosing the tool or combination of tools that fit your situation the best. Good luck!

Flash Gaming Summit 2012 Slides

My FGS 2012 talk on the future of Flash and what developers can do about it is available to watch and read online!

I wanted to touch on a couple of things based on feedback from different people. First, and this should be pretty obvious, everything in it is my own opinion. I tried to find sources where I could, of course. Second, Flash is obviously not dead and not going to disappear in the immediate term. However, there is a real risk that it could face a long-term decline, for various reasons I explore in the talk. Like I say on slide 12, “declining asset” is a label used to make decisions, not a business reality.

Finally, although I’m building some native tech right now, it supports Alchemy as a primary platform – so Flash is still a big part of my business plan. My money is (literally) where my mouth is when I say that Flash is still a good platform.

View recording of “It’s the end of the world as we know it (and I feel fine)” online now!

View the slides for “It’s the end of the world as we know it (and I feel fine)” now on SlideShare!

Please feel free to post any questions about the talk here. Thanks for reading!


My talk led to some strong reactions!

“Ben Garney is a very engaging speaker… This was my favorite talk of the day!” — Summary post from Autodesk’s community.

Select quotes from the chat log in the video of the talk:

ClutterMedia: he doesn’t know what he is taking about
ClutterMedia: does he own a flash site or build a game
ClutterMedia: if not go fuck him self
PaulGene 2: this guy is depressing me
terrypaton1: Talking about excellent topics Ben
Dragonsnare: gets you depressed and shows you a drink lol
Hamed: what the helll
Hamed: why does he think adobe will ditch the future

(Iain later walked out of my talk in protest.)

Speaking At Flash Gaming Summit, Attending GDC 2012

Click here for slides and video.

I will be presenting “It’s The End of The World As We Know It (And I Feel Fine)” at the Flash Gaming Summit 2012. My session is at 3pm – be sure to come! I’ll be talking about the future of the Flash platform, how to future-proof yourself against upcoming technology sea changes, and sharing some steps I’ve personally taken in that direction.

GDC2012 is on my itinerary, too. I am getting some new, native, mobile-oriented game technology ready for launch. I’ll be doing some private showings at GDC, so if you want to get the skinny, track me down on Twitter or mail me at ben dot garney at gmail.

Have a great pre-GDC crunch! 😉