Latency estimation in Phaser Quest

In this article, I’ll explain how the Phaser Quest server keeps track of the latency of all connected players throughout the game.

The code presented here is part of Phaser Quest’s source code.

What is latency and why measure it

Latency is the time it takes for a data packet to travel between the game server and a client. Each client connected to a server may have a different latency, based on multiple factors. Information needs to travel through physical wires to go from the server machine to the client’s computer, or vice versa. Although information on the internet can travel at a speed close to the speed of light, it still takes time. A typical latency is about 20ms when interacting with a machine that is not too far away geographically. But if one of your players has a bad or congested connection, the latency may increase a lot, leading to the familiar phenomenon of lag.

Having good estimates of the latency of your players can be useful for multiple reasons: to attempt to compensate for it (as is done in Phaser Quest and explained here), to record it as part of your game statistics, or to spot problems with your server.

How to estimate latency

In Phaser Quest, latency is estimated using a ping-pong scheme that takes place each time the server sends an update to the client. Here is the process:
– The server sends an update packet to all clients. An additional field called stamp is added to the packet. Its value is the timestamp of the server at the time of sending the update. This is the “ping” step.
– Each client receives the update, some sooner than others, depending on each client’s latency. Before processing the update, each client sends back a small packet, the “pong”. It contains the stamp field sent by the server, unmodified.
– Whenever the server receives the pong packet of a client, it compares the stamp in the packet (which is the server timestamp of the moment when the update was sent) to the current server timestamp. The difference is the time, in milliseconds, that it took for the update to reach the client and for the client’s response to reach the server. If you divide it by two, you get an estimate of the latency of the client.
– This interaction is repeated each time that the server sends an update to a client, therefore sampling the latency throughout the entire game. The server keeps, for each client, a list of the last 20 latency estimates. The running estimate of the latency of a client corresponds to the median of these 20 last estimates (the median is preferred in this case as it is robust to outliers, contrary to the mean).
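The server-side bookkeeping described in these steps can be sketched as a small self-contained function. The names (`recordLatencySample`, `median`) are illustrative, not from the Phaser Quest source, and a plain sort-based median stands in for the quickselect version used in the actual game:

```javascript
// Hypothetical sketch of the sampling logic described above.
// `pings` holds the last 20 one-way latency estimates for one client.
function recordLatencySample(pings, sentStamp, nowStamp) {
    var delta = (nowStamp - sentStamp) / 2; // round-trip time halved = one-way estimate
    if (delta < 0) delta = 0;               // guard against clock glitches
    pings.push(delta);
    if (pings.length > 20) pings.shift();   // keep only the 20 most recent samples
    return median(pings.slice(0));          // running estimate = median of the window
}

// Plain sort-based median, for illustration only.
function median(values) {
    values.sort(function(a, b) { return a - b; });
    var mid = Math.floor(values.length / 2);
    return values.length % 2 ? values[mid] : (values[mid - 1] + values[mid]) / 2;
}
```

For example, if the server stamped an update at 1000ms and the pong arrives at 1040ms, the round trip took 40ms, so `recordLatencySample` records a one-way estimate of 20ms.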

Here is the code for the processing of the pong response:

server.js

socket.on('ponq',function(sentStamp){ // sentStamp is the stamp sent back by the client
    // Compute a running estimate of the latency of a client each time an interaction takes place between client and server
    // The running estimate is the median of the last 20 sampled values
    var ss = server.getStamp();
    var delta = (ss - sentStamp)/2;
    if(delta < 0) delta = 0;
    socket.pings.push(delta); // socket.pings is the list of the 20 last latencies
    if(socket.pings.length > 20) socket.pings.shift(); // keep the size down to 20
    socket.latency = server.quickMedian(socket.pings.slice(0)); // quickMedian uses the quickselect algorithm to compute the median of a list of values
});
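The `quickMedian` helper itself is not shown above. A possible implementation based on quickselect (my own sketch, not the actual Phaser Quest code) could look like this; quickselect finds the k-th smallest element in O(n) average time without fully sorting the array:

```javascript
// Hypothetical quickselect-based median, standing in for server.quickMedian.
// Note: it reorders the array in place, which is why the caller passes a copy
// (socket.pings.slice(0) in the snippet above).
function quickSelect(arr, k) {
    var from = 0, to = arr.length - 1;
    while (from < to) {
        var pivot = arr[Math.floor((from + to) / 2)];
        var i = from, j = to;
        // Hoare-style partition around the pivot
        while (i <= j) {
            while (arr[i] < pivot) i++;
            while (arr[j] > pivot) j--;
            if (i <= j) {
                var tmp = arr[i]; arr[i] = arr[j]; arr[j] = tmp;
                i++; j--;
            }
        }
        // Recurse (iteratively) into the side containing index k
        if (k <= j) to = j;
        else if (k >= i) from = i;
        else break; // k lies between j and i: arr[k] is in place
    }
    return arr[k];
}

function quickMedian(arr) {
    var mid = Math.floor(arr.length / 2);
    if (arr.length % 2) return quickSelect(arr, mid);
    // Even count: average the two middle values
    return (quickSelect(arr, mid - 1) + quickSelect(arr, mid)) / 2;
}
```

With a window like [20, 50, 500], the median is 50: the 500ms outlier (a single congested round trip) barely moves the estimate, whereas the mean would jump to 190ms.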

This scheme gives the server a rough yet robust estimate of each client’s latency at any time in the game. Note that combining the ping-pong interactions with the updates is intentional. Such measurements are usually made without any extra data, ideally in a context of low traffic. However, it seems more interesting to me to estimate the “real-life” latency, when data is actively being exchanged with the client. It might be a bit higher, but it is more representative of the update delays that the clients experience. Moreover, bundling it with the updates avoids the (admittedly small) overhead of sending extra packets separately just to sample the latency.

Conclusion

This article described a very simple, yet robust way to estimate the latency of your clients. All comments are welcome, don’t hesitate to let me know if you have implemented or used different approaches in the past for your own project!

Featured image : credit to David Cross at webhostingmedia.net for the server and computer icons.

Jerome Renaux

I'm an independent game developer, working mainly on browser-based multiplayer online games (HTML5 & Javascript). I love to discuss all aspects of game development, so don't hesitate to get in touch!
