Category Archives: Technology

Best Practices: Java Function Signatures

[Note: most of these examples use the Guava open-source library; I highly recommend it.]

Say I have a Java function that takes a bag of objects, does some processing, and returns another bag of objects. I could naively write the function like this:

public Foo[] process(Foo[] inputs) { ... }

This is obviously terrible, but let's talk about why it's terrible. First, we've constrained the input to be in the form of an array, which is an inflexible data type used only rarely in Java. Most callers will have their data in a collection of some sort and will be forced to call .toArray(), which is a waste of CPU and memory. The return value is equally bad - arrays are not first-class collections in Java, so in order to do any interesting additional processing the caller will have to do an explicit conversion using, say, Arrays.asList() or Arrays.stream().
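To make the friction concrete, here's a hypothetical sketch of what the caller ends up doing (the Foo class and the process() body are stand-ins, of course):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

class Foo {}

public class ArraySignatureDemo {
    // The array-based signature from above (body is just a stand-in).
    static Foo[] process(Foo[] inputs) {
        return inputs.clone();
    }

    public static void main(String[] args) {
        List<Foo> foos = new ArrayList<>(List.of(new Foo(), new Foo()));
        // The caller is forced to copy into an array just to make the call...
        Foo[] results = process(foos.toArray(new Foo[0]));
        // ...and then convert back to keep working with collections.
        List<Foo> resultList = Arrays.asList(results);
        System.out.println(resultList.size()); // prints 2
    }
}
```

Two copies and a wrapper object, purely because of the signature.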

Let's look at a slightly less bad version of this function:

public List<Foo> process(Set<Foo> inputs) { ... }

Now at least we're using Java collections. List<> is a useful collection and might do all of the things the caller is interested in doing with the result (but we'll get to that later).

Let's consider the parameter. The function is asking for a Set<>. Why? Does it actually need a collection of unique, possibly unordered Foo objects? Or does it just stream() them or iterate through them without caring about order? If it's the latter, why require a Set<> at all? The answer is probably that we created this as a helper function and the caller happened to keep its data in a Set<>, so we just copied that. But there's no good reason to require a Set<> if a more general type would suffice.

Likewise, let's consider the return value. Our function probably generates a list of objects internally, so just returning a List<> is fine. But List<> doesn't really tell us anything about the kind of list we've generated. Is it mutable? Immutable? The caller might need to know whether it has to make a copy of the list in order to change it! In these cases, especially if we're returning an immutable collection, we might want to make that explicit in the signature to give our caller a heads-up!

One more time, this time even better!

public ImmutableList<Foo> process(
  Collection<Foo> inputs) { ... }

public ImmutableList<Foo> process(
  Iterable<Foo> inputs) { ... }

Both of these are perfectly fine. I tend to prefer passing in Iterable<> since it's more general and allows for lazily evaluated sources; it works fine with Java's for-each loop, but getting a Stream<> from an Iterable<> requires the more verbose call:

Streams.stream(inputs)

Rather than just:

inputs.stream()

So go with whatever you're more comfortable with.
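For the record, Guava's Streams.stream(Iterable) is (as far as I recall) just a thin convenience over the JDK's own spliterator machinery, so you can get the same thing without Guava. A hypothetical side-by-side (names invented):

```java
import java.util.Collection;
import java.util.List;
import java.util.stream.StreamSupport;

public class IterableStreamDemo {
    // A Collection parameter streams directly...
    static long countViaCollection(Collection<String> inputs) {
        return inputs.stream().count();
    }

    // ...while a plain Iterable needs a helper. Guava's Streams.stream(iterable)
    // essentially wraps this JDK call:
    static long countViaIterable(Iterable<String> inputs) {
        return StreamSupport.stream(inputs.spliterator(), false).count();
    }

    public static void main(String[] args) {
        List<String> data = List.of("a", "b", "c");
        System.out.println(countViaCollection(data)); // prints 3
        System.out.println(countViaIterable(data));   // prints 3
    }
}
```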

There's still one more thing we can do to improve this, however. Note that the above requires that the Collection<> or Iterable<> contain exactly Foo objects. But with polymorphic objects, there's no reason the collection couldn't hold a subtype of Foo - the function should be able to handle it all the same. So, let's finish with the best possible version of this function for an arbitrary class Foo (the wildcard buys us nothing if the element type is a final class like String, though, and primitives can't be generic type arguments at all):

public ImmutableList<Foo> process(
  Collection<? extends Foo> inputs) { ... }

public ImmutableList<Foo> process(
  Iterable<? extends Foo> inputs) { ... }
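Here's a quick hypothetical demonstration of what the wildcard buys you (using a plain List in place of Guava's ImmutableList so the sketch is dependency-free; Foo and SubFoo are invented):

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

class Foo {}
class SubFoo extends Foo {}

public class WildcardDemo {
    // With a plain Collection<Foo> parameter, a List<SubFoo> argument
    // would not compile; the wildcard makes it just work.
    static List<Foo> process(Collection<? extends Foo> inputs) {
        return new ArrayList<>(inputs); // safe: every element is-a Foo
    }

    public static void main(String[] args) {
        List<SubFoo> subs = List.of(new SubFoo(), new SubFoo());
        System.out.println(process(subs).size()); // prints 2
    }
}
```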

And there you have it! The perfect function signature.
Have fun and happy coding!

Goal-based AI in Red Faction: Guerrilla

I'll sometimes talk about the stretch of time between being in graduate school at Illinois and coming to Virginia. It was possibly the second most tumultuous segment of my life (after the past couple of years), when I dropped out of my PhD program, got married, and got a chance to work in electronic entertainment for the first (and probably the last) time.

I signed on with Volition, Inc. (now Deep Silver Volition) in fall of 2005, when the original Saints Row was deep into production and Red Faction: Guerrilla was struggling to make it out of its pre-production phase. I was one of two (and later, three) programmers assigned to create the AI for the latter game - AI that would be able to function in an open-world, destructible environment! It wasn't an easy task!

The senior engineer on the team had already come to the conclusion that we should use goal-based, backward-chaining classical planning for the AI, because FEAR had had so much success with it in the FPS genre already and we were doing a third-person shooter (albeit not on rails to the same degree as FEAR was). The precedent set by FEAR was simple: AIs had simple goals (find the player, kill the player) and sequences of actions that could achieve those goals (hide, step out of cover, snipe, ambush), each with its own animation or animation cycle. In the earlier parlance of game AI, each state became an action, and instead of having rules governing state transitions we had freeform action plans with the only requirement that they be the simplest way to achieve a goal.

But just having goals and actions available wasn't enough. We needed a more sophisticated system that modeled how people in a fight actually might behave.

The first piece: percepts

We realized early on that a lot of the goals a person might pursue in a combat situation are informational. What's going on? Is anyone over there? I hear gunfire, but where are my enemies? In order to choose whether to pursue an investigational goal or a combat goal, we needed AIs to have internal mental state about their beliefs. And the way AIs formed beliefs was through percepts.

Percepts were typically audio (hearing footsteps; hearing gunfire) or visual (seeing a civilian or enemy). Sounds were easy: we tagged every sound a PC or NPC could make with a radius; other NPCs would hear the sound if they were within that radius. Some sounds were obviously signs of combat (explosions, gunfire) while some weren't (footsteps). When an NPC heard a sound, it created one or more beliefs along the lines of "explosion over there!", "strange footsteps in my building", or "somebody's hurt" - and these beliefs would decay over time.
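The decaying-belief idea could be sketched like this (a purely hypothetical illustration - the real engine was C++ and none of these names come from the actual RF:G codebase):

```java
// Hypothetical sketch of a decaying belief formed from a percept.
public class Belief {
    final String description;     // e.g. "explosion over there"
    double strength;              // confidence in the belief
    final double decayPerSecond;  // how fast it fades without reinforcement

    Belief(String description, double strength, double decayPerSecond) {
        this.description = description;
        this.strength = strength;
        this.decayPerSecond = decayPerSecond;
    }

    // Called each AI tick; a fresh matching percept would instead reset strength.
    void tick(double dtSeconds) {
        strength = Math.max(0.0, strength - decayPerSecond * dtSeconds);
    }

    boolean expired() {
        return strength <= 0.0;
    }

    public static void main(String[] args) {
        Belief b = new Belief("gunfire to the north", 1.0, 0.25);
        b.tick(2.0);
        System.out.println(b.strength);  // prints 0.5
        b.tick(10.0);
        System.out.println(b.expired()); // prints true
    }
}
```

Expired beliefs drop out of consideration, so an NPC who heard footsteps a minute ago eventually stops caring.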

Percepts - especially those linked to line-of-sight - could be very expensive to calculate, so we put all requests to perform eye raycasts (how we had to determine line-of-sight, since our geometry was destructible and we couldn't have pre-baked LOS calculations) on a queue. The delay between any particular NPC requesting a raycast to the player (or anyone else) and actually getting it was usually only a few frames, but combined with the planning delay it nicely simulated the time a normal person would take to react to something new in their field of vision; the AI didn't have that weird property of reacting the instant it could see you.

The second piece: orders

For military units, orders were a second big part of the puzzle. Orders consisted of things like "guard this building", "guard that person", "patrol this route", "kill the enemy". These didn't normally affect which goals were available (though they could, for things like guarding and patrolling, which were also goals) but tended to limit what actions were available to NPCs.

For example, until a building was destroyed, NPCs assigned to guard it would almost never consider an action plan that required them to leave. This prevented the problem in older games of monsters hearing the player and then all streaming out the door so the player could pick them off one-by-one. Instead, guarding NPCs would pick actions like "go to sniper point" or "go to a window that provides cover" or "search the building".
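In effect, an order acted as a filter over the candidate actions. A hypothetical sketch (again, invented names - the real engine was C++ and looked nothing like this):

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch: a "guard this building" order vetoes any
// candidate action that would leave the building, until it's destroyed.
public class GuardOrderDemo {
    static class Action {
        final String name;
        final boolean leavesBuilding;
        Action(String name, boolean leavesBuilding) {
            this.name = name;
            this.leavesBuilding = leavesBuilding;
        }
    }

    static List<Action> allowed(List<Action> candidates, boolean buildingDestroyed) {
        return candidates.stream()
            .filter(a -> buildingDestroyed || !a.leavesBuilding)
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Action> candidates = List.of(
            new Action("go to sniper point", false),
            new Action("go to covered window", false),
            new Action("chase enemy outside", true));
        // While guarding an intact building, only the indoor options survive.
        System.out.println(allowed(candidates, false).size()); // prints 2
        // Once the building is rubble, everything is back on the table.
        System.out.println(allowed(candidates, true).size());  // prints 3
    }
}
```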

The third piece: goals

Goals tended to be simple, and fell into a few categories. Examples were things like "guard this building", "patrol this route", "get to safety" (for civilians), "investigate a disturbance", "dodge enemy fire", "find the enemy", and "kill the enemy". AIs would attempt to form a plan to achieve their highest-ranked goal, then if that failed, they'd drop down to their next-highest priority.

Since only a limited number of planning passes were allowed each frame, sometimes AIs would spend a small amount of time idling before they could generate a new plan. To smooth this over, we baked in some reaction animations so that it looked like they were thinking/looking around before they started running off to the next objective.

The ability to fall back to a lower-priority goal also meant that if we were actively preventing AIs from achieving their goals, they still did something sensible. For example, we limited the number of enemy NPCs who could engage the player at once on all but the highest alert levels; more distant NPCs would fall back to goals of observing the enemy or guarding. Also, it was possible that it might not be possible to fulfill an investigation or attack goal without violating the NPC's orders, in which case being able to fall back to "guard" or "escort" was important.
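The goal-selection loop amounts to "walk the priority list, take the first goal you can actually plan for." A hypothetical Java sketch of that shape (the real code was C++ and entirely different; these names are invented):

```java
import java.util.List;
import java.util.Optional;
import java.util.function.Supplier;

// Hypothetical sketch of priority-ordered goal selection with fallback.
public class GoalSelector {
    static class Goal {
        final String name;
        final Supplier<Optional<String>> planner; // empty if no plan is possible
        Goal(String name, Supplier<Optional<String>> planner) {
            this.name = name;
            this.planner = planner;
        }
    }

    // Try goals highest-priority first; the first one we can plan for wins.
    static Optional<String> selectPlan(List<Goal> goalsByPriority) {
        for (Goal goal : goalsByPriority) {
            Optional<String> plan = goal.planner.get();
            if (plan.isPresent()) {
                return plan;
            }
        }
        return Optional.empty(); // nothing plannable: idle / reaction animation
    }

    public static void main(String[] args) {
        // "Kill the enemy" can't be planned (too many attackers already),
        // so this NPC falls back to its guard goal.
        List<Goal> goals = List.of(
            new Goal("kill the enemy", () -> Optional.empty()),
            new Goal("guard the building", () -> Optional.of("hold position")));
        System.out.println(selectPlan(goals).get()); // prints hold position
    }
}
```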

The final (and most fun) piece: actions

Once a goal has been chosen, an AI will try to string together a sequence of actions that takes the NPC from their current condition to the goal state. For example: if an NPC wants to kill the player, an available action might be to shoot the player from a vehicle turret. If the NPC is not on a vehicle turret, however, the NPC must first man the turret. In order to man a vehicle turret, the NPC must be in the vehicle; if they are not, they must enter the vehicle. And they cannot enter the vehicle unless they are adjacent to a door, which might require going to the vehicle. (You'll also notice that each of these is only a single animation sequence or cycle; that was by design as it gave us both a good action granularity and obvious points to blend between animation states.)
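The turret chain above is classic backward chaining: start from the goal condition and keep prepending whichever action satisfies the current unmet precondition. A hypothetical sketch (invented names and a deliberately simplified one-precondition-per-action model; the real system was richer and in C++):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of backward-chaining from a goal condition to a plan.
public class BackwardChainer {
    static class Action {
        final String name;
        final String satisfies; // condition this action achieves
        final String requires;  // precondition, or null if none
        Action(String name, String satisfies, String requires) {
            this.name = name;
            this.satisfies = satisfies;
            this.requires = requires;
        }
    }

    static List<String> plan(String goal, List<Action> actions, Set<String> state) {
        List<String> steps = new ArrayList<>();
        String need = goal;
        while (need != null && !state.contains(need)) {
            Action next = null;
            for (Action a : actions) {
                if (a.satisfies.equals(need)) { next = a; break; }
            }
            if (next == null) return List.of(); // no plan: fall back to next goal
            steps.add(0, next.name);            // prepend: we're chaining backward
            need = next.requires;
        }
        return steps;
    }

    public static void main(String[] args) {
        List<Action> actions = List.of(
            new Action("go to vehicle", "AT_DOOR", null),
            new Action("enter vehicle", "IN_VEHICLE", "AT_DOOR"),
            new Action("man turret", "ON_TURRET", "IN_VEHICLE"),
            new Action("fire turret", "ENEMY_DEAD", "ON_TURRET"));
        System.out.println(plan("ENEMY_DEAD", actions, Set.of()));
        // prints [go to vehicle, enter vehicle, man turret, fire turret]
    }
}
```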

Of course, a much simpler plan of action is just "shoot the player from where you're standing". Each action has a "cost" associated with it, which may be variable (traveling further costs more). The plan the NPC can find with the lowest cost is the one they try to perform. For example, an NPC standing out in the open may dive out of the way, do a dodge roll, or duck into cover in order to avoid incoming fire; ducking into cover is the least expensive so if the NPC is already in cover they will almost always do it - unless the incoming projectile is explosive and they'll be caught in the blast, in which case they'll pick one of the more expensive options (typically diving, since it gets them out of the way the best). Likewise, an NPC in the open can run to cover, but it's usually cheaper to just dodge.
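Once you have multiple viable plans, selection is just "cheapest wins," with costs recomputed per situation. A hypothetical sketch with made-up costs (not actual game values):

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Hypothetical sketch of cost-based selection among viable plans.
public class CheapestPlanDemo {
    static class Candidate {
        final String action;
        final double cost;
        Candidate(String action, double cost) {
            this.action = action;
            this.cost = cost;
        }
    }

    // The cheapest viable candidate wins; costs are situational
    // (e.g. an incoming explosive would inflate "duck into cover").
    static Optional<Candidate> pick(List<Candidate> viable) {
        return viable.stream().min(Comparator.comparingDouble(c -> c.cost));
    }

    public static void main(String[] args) {
        List<Candidate> options = List.of(
            new Candidate("duck into cover", 1.0),
            new Candidate("dodge roll", 2.0),
            new Candidate("dive away", 3.0));
        System.out.println(pick(options).get().action); // prints duck into cover
    }
}
```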

Evaluating whether an action is possible may require some computation in itself; for example, in order to perform a melee action, the NPC must be able to pathfind to a location adjacent to their target, within 2-3 feet of the same height, and have a clear line-of-action to the target from that location. That's a pathfind and a short raycast, which is non-trivial in cost. For these actions, we may delay planning a frame if our AI budget has been used up, and if it turns out that the action is untenable, we may prevent evaluating it again for a period of time (usually at least several seconds). That way we don't waste compute cycles evaluating actions we know aren't going to apply to the current situation.

Multiple action plans = emergent behavior

That kind of heuristic logic actually led to one of my favorite bugs during development. There are wild people living out in the Martian desert in RF:G called "Marauders" - kinda like the sand people on Tatooine in Star Wars - who use a lot of melee weapons and have Mad Max-like vehicles. I had a test level set up with a crowd of Marauders, one of their vehicles, and a couple of structures including a ramp. I ran my PC over a ridge at the top of the ramp, expecting a mob of Marauders to swarm me with their melee weapons, but instead, a couple of guys jumped in the vehicle, drove up the ramp, and ran me down!

The reason was that there was a bug in the melee heuristic that was comparing the difference in height not at the endpoint of the move-to-melee action, but at the start, and since the Marauders were all at the bottom of the ramp, they immediately discounted melee as a possibility and fell back to the much more complex plan of "go to car, get in car, drive to the enemy, run over enemy" to satisfy their "kill the bad guy" goal. That's exactly the kind of emergent behavior we wanted from AI in the game, and despite the fact that it only showed up in that case due to a bug, it was still an amazing proof of concept.

Anyway, it's hard to explain how the thing worked in any more depth without actually experiencing the game, so why don't you go do that? I'm sure it's cheap on Steam...

On the myth of the "10x" engineer

A lot of people have been talking about "10x engineers" and how "important" they are to things like software development. People want to put forward the idea that hiring the "right" developers (almost always white or Asian, male, from specific colleges or with specific pedigrees) is the key to making your company successful. And yes, hiring good people is important, but this "10x" stuff is complete bullshit. Let me tell you why:

Teaching + Teamwork > Raw Productivity

Assume for a moment I can get 2 or 3 times the work done that an average employee can. That's great! Now what do you do with me? If you put me nose-down in code for 40 hours a week, it's like hiring an extra person. Still great! Except that I cost the company almost as much as two junior engineers - so... not so great.

Now, what if I spend half my time teaching, mentoring, reviewing code, and working with the members of my team to make them better? Say that after a year I've boosted each of their output by a mere 25%. I've only put in my 100% (instead of 200%), but on a team of five the other four members have contributed an extra +100% between them - so we break even for the year. And if they've actually learned something, that's a permanent upgrade that doesn't go away if I stop mentoring! Next year the team is still +100% even if I do nothing but write code; if I keep helping them improve, it's +200% the year after, and so on - and the company still reaps the benefits if I leave!

Combine that with the dearth of minority mentors and role models in the industry and I am far more valuable as a force multiplier than I am as an extra-productive engineer. I say that the same goes for all of these "10x" or whatever people - the idea that people can be judged only on the volume of their work output is silly and counterproductive in the long term, both for the companies they work for and the industry as a whole. Better to judge them on how much more productive they can make the people around them.

Or to put it another way, a "1x" engineer is only a liability if you're not willing to help them grow in their career. I'll take a "1x" who's willing to learn and works well with others long before I'll hire an antisocial "pro" (and have, by the way - it was a good choice).

And now, a sportsball analogy:

Consider the Boston Red Sox and the New York Yankees. The Yankees have by far the highest budget in MLB. They can (and do) buy all of the "10x" players they want - guys like A-Rod and Clemens and Ichiro. The Red Sox have a pretty big budget, but not as high. When they were very successful in the '00s, they had maybe half what the Yankees had. They also routinely lost good players to teams who could pay more. How did they win? They had amazing farm teams. They bought up promising young players and taught them how to be major-leaguers - how to win. And you know what? It worked.

Companies awash in cash can afford to go out and head-hunt the big guns. The rest of us have to develop talent. In that atmosphere, a teacher is far more valuable than a big gun. And a team player who knows how to communicate and make a whole team more productive (even if they're not teaching) is still more valuable than a big gun who only works alone.

What the hell, Java.

Here's how you instantiate a specific template or generic version of an object in C++, C#, and Java:

// C++
MyClass<TemplateType>* myObject = new MyClass<TemplateType>(params);
// C#
MyClass<GenericType> myObject = new MyClass<GenericType>(params);
// Java
MyClass<GenericType> myObject = new MyClass<GenericType>(params);

So far, so good. You can tell C# and Java inherited the syntax directly from C++.

Here's how you call a specific template or generic version of a method in each of the languages:

// C++
myObject->method<TemplateType>(params);
// C#
myObject.method<GenericType>(params);
// Java
myObject.<GenericType>method(params); // ???

How does that make any sense?
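To be fair, you mostly only run into this when type inference fails and you need an explicit "type witness". A quick JDK-only illustration:

```java
import java.util.Collections;
import java.util.List;

public class TypeWitnessDemo {
    public static void main(String[] args) {
        // Usually inference handles it:
        List<String> inferred = Collections.emptyList();

        // But the explicit form puts the type argument BEFORE the method
        // name - the syntax the post is complaining about:
        List<String> explicit = Collections.<String>emptyList();

        System.out.println(inferred.equals(explicit)); // prints true
    }
}
```

(Presumably the type argument can't follow the method name in Java because `method<GenericType>(params)` would be ambiguous with less-than/greater-than comparisons - but that doesn't make it read any better.)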

Java 8, or Oracle finally catches up to .NET Framework 3.0

So Oracle is finally releasing Java 8... sometime. Since my work is pretty aggressive about upgrading Java versions, I might just get a chance to use it, and I'm actually pretty psyched because there are some cool new features.

In this post, I'm going to cover the new stuff in both Java 7 and 8 that you'll actually want to use on a regular basis.


Comparing apples to orange trees

So there's this bit of stupid, arguing that (at least in Great Britain) electric cars do not have smaller carbon footprints than gas- (er, petrol-) powered ones.  The author does the following mental gymnastics:

[Image: 2011 Chevy Volt - "The electric car - hero or heel?"]

First he takes the energy consumption of a compact petrol-burning car (55/43 kWh per 100km city/highway).  Then he takes estimates of electric vehicle performance from two other studies, which put them at ~16 kWh/100km and ~20 kWh/100km.  Finally he claims that since fossil fuel power production is only about 36% efficient, those numbers are really 48 and 60 kWh/100km - worse than the compact car!

This is purported to show that there is no real benefit at all to driving the electric vehicle. Good thing there are holes in his analysis large enough to drive a Chevy Volt through.
