The Inner Game of Concurrency Programming: Optimizing for Intel’s Dual Cores
While much has been written about the whys, hows, and whether-or-not-to’s of threading, there’s been little focus on the most productive way to pound a paradigm shift like this one into place. Here’s a peek into the Zen of threading for game developers.
By Alexandra Weber Morales
This excerpt is from the article originally published on Intel.com
Patterns for Performance
The paradigm may be novel, but the multi-core optimization payoff is oh-so-sweet, game developers are finding. And as more studios find ways to send physics engines or AI to separate threads, producing spine-chilling visual effects and behaviors, knowledge about how concurrency works best is spreading among developers.
“It’s funny you should call,” says Douglas C. Schmidt, author of “Pattern-Oriented Software Architecture, Patterns for Concurrent and Networked Objects, Vol. 2” and associate chair of computer science at Vanderbilt University in Nashville. “Just last week I was giving a tutorial for a massive multiplayer gaming company. They’re really interested in patterns for concurrency. It turns out that the Proactor pattern is a nice model or design template of how to go about building high performance for gaming on Windows platforms. It’s a thread pool concurrency model with overlapped I/O on Windows. Listening to I/O completion ports using the Proactor pattern can be a very effective design.”
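To make the shape of what Schmidt describes concrete: in the Proactor pattern, the OS completes I/O asynchronously and posts a completion event, and a pool of threads dequeues those events and invokes the associated completion handlers. A real Windows implementation would listen on an I/O completion port (GetQueuedCompletionStatus); the sketch below is a portable stand-in using standard C++ threads and a hand-rolled completion queue, with all names hypothetical.

```cpp
#include <condition_variable>
#include <functional>
#include <initializer_list>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Hypothetical completion event: in a real Proactor on Windows this would
// arrive from an I/O completion port after an overlapped read/write finishes.
struct Completion {
    int bytes_transferred = 0;
    std::function<void(int)> handler;  // completion handler to invoke
};

// Stand-in for the OS completion port: a thread-safe queue of finished ops.
class CompletionQueue {
public:
    void post(Completion c) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(c)); }
        cv_.notify_one();
    }
    // Blocks until an event arrives; returns false once shut down and drained.
    bool pop(Completion& out) {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [&] { return done_ || !q_.empty(); });
        if (q_.empty()) return false;
        out = std::move(q_.front());
        q_.pop();
        return true;
    }
    void shutdown() {
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_all();
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<Completion> q_;
    bool done_ = false;
};

// A thread pool dispatches completions to their handlers -- the
// "proactive" half of the pattern. Returns the total bytes handled.
int run_demo() {
    CompletionQueue cq;
    std::mutex sum_m;
    int total = 0;

    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i)
        pool.emplace_back([&] {
            Completion c;
            while (cq.pop(c)) c.handler(c.bytes_transferred);
        });

    // Simulate the OS posting three completed I/O operations.
    for (int bytes : {100, 200, 300})
        cq.post({bytes, [&](int n) {
                     std::lock_guard<std::mutex> lk(sum_m);
                     total += n;
                 }});

    cq.shutdown();  // workers drain remaining events, then exit
    for (auto& t : pool) t.join();
    return total;
}
```

The key property Schmidt alludes to: game code never blocks waiting for individual I/O operations; the pool scales handler dispatch across cores while the OS does the I/O.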
If patterns are useful, could UML diagrams be far behind? I ask modeling expert and author Scott Ambler if state charts could be the key to simplifying concurrency concepts. The intrepid globe-trotter e-mails me back instantly from his current position (Siberia):
“The answer is, it depends on the developer. If they are visual thinkers, and if they understand state charts, then there is a good chance that they can help. The challenge is that everyone thinks differently and has different backgrounds, so there’s no one right answer. This is a fundamental concept that I focus on in Agile Modeling, but many traditionalists keep striving for the ‘one right methodology to rule them all.’ Good luck with that.”
Are Languages the Answer?
Is Fowler’s first law of distribution contradicted by Sutter’s “concurrency revolution”? I ask Ambler. “I think that his first law should be modified for concurrency: do it only if you absolutely have to. Remember when Java introduced threading in the mid-1990s? A lot of people thrashed on it and came to the conclusion that they should use it only when it’s absolutely needed. However, game programming might be one of those few situations where it is absolutely needed.”
But the usability, type safety and garbage collection inherent in Java and C# have not played a role in game concurrency. “We haven’t seen a lot of signs that people are using interpreted languages for game programming,” Lindberg says. “It’s mostly C++. There’s Managed DirectX, but we haven’t seen mainstream games built on it yet. Managed runtimes don’t solve any threading problems. Say I asked a game team to rewrite a game in C#. This doesn’t change the nature of the problem in any way. Threading done well is all the way at the other level of abstraction. We’ll see in five years if there will be some .NET games.”
There are movements afoot to raise the abstraction level and reduce the complexity of concurrency, however. “Transactional memory is by far the most interesting and practical abstraction here. Locks and mutexes are useful for very low-level programming, but don’t scale to high-level. Java’s synchronized methods were a horrible mistake,” Epic’s Sweeney argues. On the other hand, he claims, is the move toward “a more concurrency-friendly set of building blocks for programs.” The language he’s partial to here is the aforementioned Haskell.
“This approach looks less attractive on the surface because they take away significant features you’re accustomed to and make up for the lost power with additional features (such as far more versatile recursion capabilities). So this requires a complete change rather than an incremental refinement. But, ultimately, this is the only way software will scale up to the hundreds of cores of eventual future CPUs.”
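The flavor of the transactional approach Sweeney favors can be hinted at even in C++: a compare-and-swap retry loop has the same optimistic read-compute-commit-or-retry shape. This is only a single-word illustration, not real transactional memory; genuine TM (such as Haskell's STM) extends the idea to arbitrary groups of reads and writes and, crucially, composes. Names below are hypothetical.

```cpp
#include <atomic>
#include <thread>

// Optimistic update in the transactional style: read a snapshot, compute the
// new value, then commit only if no other thread changed the variable in the
// meantime; otherwise the loop refreshes the snapshot and retries. No lock is
// ever held, so threads cannot deadlock on this variable.
int apply_delta(std::atomic<int>& balance, int delta) {
    int seen = balance.load();
    // On failure, compare_exchange_weak reloads 'seen' with the current value.
    while (!balance.compare_exchange_weak(seen, seen + delta)) {
        // retry with the refreshed snapshot
    }
    return seen + delta;  // value as of the successful commit
}

// Two threads hammer the same balance with opposite deltas; every update
// either commits atomically or retries, so nothing is lost.
int demo() {
    std::atomic<int> balance{0};
    std::thread a([&] { for (int i = 0; i < 1000; ++i) apply_delta(balance, +1); });
    std::thread b([&] { for (int i = 0; i < 1000; ++i) apply_delta(balance, -1); });
    a.join();
    b.join();
    return balance.load();
}
```

Sweeney's point is that this retry discipline, generalized by the language runtime to whole blocks of memory operations, replaces the lock-ordering reasoning that makes mutex-based code so hard to scale.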