Halo: Reach

"I Shot You First: Networking the Gameplay of Halo: Reach" by David Aldridge at GDC 2011

The talk is fascinating and has several videos to illustrate situations and tools (it is thus a very large download at 542MB!). David was the lead network engineer on this project. There are lots of interesting insights on the practical use of scalability techniques. As is common for modern FPS games, they used a hosted client model, so that one of the 16 players acts as the server. This server is authoritative for certain key events.

One key discussion concerns reliability and unreliability. We point this out in the book, and David makes an important point: not only does the difference between the two affect gameplay, but gameplay may be changed to support the network situation. The decision about reliability depends on the event or state type, but also on the latency at which the state change can be communicated. There are good examples of changing the event sequencing for particular game actions, so that the necessary plausibility is preserved while the lag is hidden somewhere in the message sequence.

The fact that messages might be reliable or unreliable means that there is a need to prioritise traffic. Some mention is made of a message prioritisation system that looks at outgoing packet sizes and rates and then schedules particular messages into them.
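As a rough illustration of the idea (the names, the byte budget, and the drop/requeue policy here are our own assumptions, not Bungie's actual system), such a scheduler might greedily pack prioritised messages into a fixed packet budget, requeueing reliable messages that don't fit and dropping stale unreliable ones:

```python
import heapq
import itertools

MTU_BUDGET = 1200         # assumed per-packet payload budget, in bytes
_seq = itertools.count()  # tie-breaker so equal priorities stay FIFO

def enqueue(queue, priority, size, payload, reliable=False):
    """Queue a message; lower priority value means more urgent."""
    heapq.heappush(queue, (priority, next(_seq), size, payload, reliable))

def build_packet(queue):
    """Greedily pack the most urgent messages that fit the byte budget.
    Reliable messages that don't fit are requeued for the next packet;
    unreliable ones that don't fit are simply dropped."""
    packet, used, requeue = [], 0, []
    while queue:
        entry = heapq.heappop(queue)
        priority, _, size, payload, reliable = entry
        if used + size <= MTU_BUDGET:
            packet.append(payload)
            used += size
        elif reliable:
            requeue.append(entry)
        # else: a stale unreliable update -- drop it, a newer one will follow
    for entry in requeue:
        heapq.heappush(queue, entry)
    return packet
```

The key property is that unreliable state updates can be sacrificed under pressure, because a fresher update supersedes them anyway, while reliable events are merely delayed.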

Bungie were already well known for the sophistication of their telemetry and gameplay diagnostic tools. The networking layer is no exception with real-time diagnostic tools to watch traffic to and from the client host.

The following networking stats were observed in the game:

250 kbit/s: minimum total upstream for the host of a solid 16-player game
675 kbit/s: maximum total upstream bandwidth use from a single peer
45 kbit/s: maximum bandwidth sent to one client from a host
1 kbit/s: host upstream required to replicate one biped to one client at combat
10 Hz: minimum packet rate for solid gameplay
100 ms / 200 ms: maximum latency for close-quarters gameplay (tournament / casual)
133 ms / 300 ms: maximum latency for ranged gameplay (tournament / casual)
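These figures are roughly self-consistent. As a back-of-envelope check (our own assumptions: the host sends nothing to itself, and every client is replicated all 16 bipeds at the quoted per-biped cost):

```python
players = 16
remote_clients = players - 1   # the host needs no packets for itself
bipeds_per_client = players    # assume every client sees every biped
kbits_per_biped = 1            # quoted cost per biped, per client, at combat

host_upstream = remote_clients * bipeds_per_client * kbits_per_biped
print(host_upstream)  # 240 kbit/s, close to the quoted 250 kbit/s minimum
```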

Other interesting observations: ragdolls were synchronised continuously in previous games, but only their initial state is sent in Halo: Reach. The bandwidth use of Halo: Reach is lower than that of Halo 3. The networking design was based on the TRIBES Engine Networking Model (Frohnmayer and Gift, GDC 1999). We discuss that model in the book.

How to Run at 60Hz

 

We recently came across an interesting article on IMVU's engineering blog. That blog is well worth following if you are interested in collaborative virtual environments: IMVU is one of the better social platforms and it constantly evolves. They also give back to the open source community (notably through their contributions to Cal3D).

 

Anyway, the recent article "How to Write an Interactive, 60 Hz Desktop Application" struck a chord. One topic we wanted to spend more time on in the book is how real-time applications are actually structured with respect to operating system facilities. In Chapter 5 we talked about the structure of real-time graphics-heavy applications, and that part of the book talks at several points about abstractions and middleware. There wasn't time or space to talk about specific control architectures.

The article on IMVU's blog does a good job of studying one particular programming paradigm: the Windows event loop. How should you structure the event loop so that it doesn't block or wait on the simulation, and thus maintains a high frame rate? Multithreading might be one answer. You could use higher-level APIs that hide the event loop, but it is important to understand the underlying event processing.
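The core shape of such a loop can be sketched as follows (in Python rather than Win32 C; `poll_events` here is our stand-in for a non-blocking `PeekMessage`-style drain of the OS queue, not IMVU's actual code): drain input without blocking, step the simulation at a fixed rate, render, then sleep only for whatever slack remains in the frame.

```python
import time

TICK = 1.0 / 60.0   # target 60 Hz frame period

def poll_events():
    """Stand-in for a non-blocking event poll; a real Windows loop would
    drain the message queue here instead of blocking in GetMessage."""
    return []

def run(frames, simulate, render):
    next_tick = time.monotonic()
    for _ in range(frames):
        for event in poll_events():   # drain input without blocking
            pass
        simulate(TICK)                # advance the world one fixed step
        render()
        next_tick += TICK
        delay = next_tick - time.monotonic()
        if delay > 0:                 # sleep only for the slack left in the frame
            time.sleep(delay)
```

Scheduling against `next_tick` rather than sleeping a fixed amount keeps the average rate at 60 Hz even when a frame occasionally runs long.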

Anyway, the article is well worth a look. If we get time, we'll encode their final suggestion into one of our boid test programs.

Unity & Middleware

 

Prompted by an earlier post about FPS size, we took a brief look at the state of networking in Unity (Pikkotech's 1000-player FPS demo uses Unity). Unity has gathered a lot of momentum recently; it is the game engine of choice for our students at the moment. Unity has used RakNet for a while for its own basic networking. Several other networking middleware providers have been supporting it as well. A very helpful post on their forum by zumwalt lists several middleware packages that could be used with Unity. Another option that seems popular is SmartFoxServer; they have a demonstration within Unity. We've added some basic public information about the systems below: for most, though, you need to register to get details, which would probably mean signing away some rights to write comparative blog articles!

 

Quazal

http://www.quazal.com/

Net-Z (real-time), Rendez-Vous (lobby). Net-Z has a shared C++ object/attribute approach. No public pricing. No public download.


Photon

http://www.exitgames.com/Photon

Reliable UDP. Unity integration. Need to register for information.


Hero Engine

http://www.heroengine.com/

Full game engine. No public pricing for HeroEngine. HeroCloud academic license is $4995 for 25 seats, then $995 p.a.


Icarus Studios

http://www.icarusstudios.com/

Full game engine; you need to become a certified developer.


Monumental Games

http://www.monumentalgames.com/

Prime Engine; no public download.


MultiVerse

http://multiverse.net/index.html

Multiverse Platform, need to register.


BigWorld Technology

http://www.bigworldtech.com/

Dynamic shared space subdivision. Indie/Academic license $299 p.a. for 10-25 developer seats, with some restrictions on user accounts.


SmartFox Server

http://www.smartfoxserver.com/

100 concurrent users free. Public download. Unity integration.

 

 

Who has the biggest FPS?

 

 

FPS games involve frenetic real-time interaction, which is why the games are often limited in the number of players that they can support. For peer-to-peer games, we are (probably) limited by the bandwidth into the peers. For client-server games, we are limited by the bandwidth into the server. In both cases, there is a potential O(N²) interaction between the players. In the peer-to-peer case you can spread this cost over the peers themselves; in the client-server case, the server is responsible for it.
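A quick sketch of the asymmetry, counting state updates per tick under the naive assumption that every player must hear about every other player (no interest management):

```python
def peer_upstream(n):
    """Each peer sends its own state to the n-1 others: O(n) per peer."""
    return n - 1

def server_upstream(n):
    """The server sends each of n clients updates about the other n-1
    players: O(n^2) in total, all on one link."""
    return n * (n - 1)

for n in (16, 64, 256):
    print(n, peer_upstream(n), server_upstream(n))
```

This is why techniques such as interest management and area-of-interest filtering matter so much at larger player counts: they attack the quadratic term directly.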

 

All of this raises the question: who has been able to push the FPS genre to the maximum number of players? 32 or 64 players are common limits. Since early 2010, to our knowledge, the FPS with the largest claimed player count has been MAG (Massive Action Game), developed by Zipper Interactive and published by SCEA. It claimed 256 players in two large teams, each of which is structured hierarchically. Players could command large squads or platoons once they had risen up the ranks.

How this is achieved at a network level is not known; the server architecture is proprietary and subject to a patent. The maps of the game are quite large, so we would expect the players to be quite spread out. We haven't played the game (it is on the pile to play), but we expect that you are free to explore the map at will, and could, in principle, achieve primary awareness of any other player simply by moving. The fact that we can expect players to be spread out suggests that the calculations could be distributed over a small number of host processes. In fact, it doesn't seem unreasonable to suggest that today, with 4-core servers being very cheap, you could run 256 players on a single blade.

Now, though, via the Rock Paper Shotgun blog, we have learnt about a new contender: a technology that promises to bring 1000-player FPS encounters. Pikkotech is a middleware company that is explicitly advertising a multi-server solution for 1000 players. Their technology white paper talks about the problems of handing over object state between different servers, and about server interaction regions. Their architecture promises to deal with many of the issues that we raised in Chapter 12. We'll be interested to see how well such architectures can be made to scale.