Playing with the StarCraft 2 Editor to understand how a good RTS is made

While working on the Iron Marines engine at work, we did some research on other RTS games to learn how they solved certain problems and why. In this post, in particular, I want to share a bit of my research on the SC2 Editor, which helped a lot when we made our own editor.

The objective was to see what a Game Designer could and couldn't do with the SC2 Editor, in order to understand some of the decisions behind the editor and the engine itself.

Obviously, a quick look at the available mods and maps makes it clear that you can build entire games on top of the SC2 engine, but I wanted to see the basics: how to define and control the game logic.

As a side note, I have loved RTS games since I was a child; I played a lot of Dune 2 and Warcraft 1. I also remember playing with the editors of Command & Conquer and Warcraft 2, it was really cool, so much power 😉 and fun. With one of my brothers, each of us had to make a map and the other had to play it and beat it (we did the same with the Doom and Duke Nukem 3D editors).

SC2 Editor

SC2 maps are built with Triggers, which are composed of Events, Conditions and Actions that define parts of the game logic. There are a lot of other elements as well, which I will talk about a bit after explaining the basics.

Here is an image of the SC2 Editor with an advanced map:

Trigger logic

The Triggers are where the general map logic is defined. They are fired by Events and specify which Actions should be performed if the given Conditions are met. Even though behind the scenes the logic is C/C++ code calling functions with similar names, the Editor shows it in natural language, like “Is Any Unit of Player1 Alive?”, which helps with quick reading and understanding.

This is an example of the Trigger logic of an SC2 campaign map:

Events

Events are what fire the Trigger logic; in other words, when an event happens, the logic is executed. Here is an example of an event triggered when the unit "SpecialMarine" enters the region "Region 001":

Conditions

Conditions are evaluated to decide whether or not the actions are executed. Here is an example of a condition checking whether the unit "BadGuy" is alive:

Actions

Actions are executed when the event happens and the conditions are met. They can be anything supported by the editor, from ordering a structure to build a unit to showing a mission objective update on screen, among other things.

This example shows an action that enqueues an attack order on unit "BadGuy" with unit "SpecialMarine" as the target, replacing any orders already enqueued on that unit. Another action after it turns off the Trigger to avoid processing its logic again.

The idea with this approach is to build the logic in a descriptive way: the Game Designer has the tools to create the game experience they need. For example, if they want to make it hard to save a special unit once you reach its location, they send a wave of enemies to that point.

I said before that the editor generates C/C++ code behind the scenes. For my example:

The code generated behind the scenes is this one:

Here is a screenshot of my example: the red guy is SpecialMarine (controlled by the player) and the blue one is BadGuy (controlled by the map logic). If you move your unit inside the blue region, BadGuy comes in and attacks SpecialMarine:

Even though it is really basic, download my example if you want to test it 😛 .

Parameters

In order to make the Triggers work, they need values to check against, for example Region1, a region previously defined, or “Any Unit of Player1”. Most of the functions for Events, Conditions and Actions have parameters of a given Type, and the Editor allows the user to pick an object of that Type from different sources: a function, a preset, a variable, a value or even custom code:

It shows picking a Unit from units in the map (created instances).

It shows picking a Unit from different functions that return a Unit.

This allows the Game Designer to adapt the logic, in part, to what is happening in the game while keeping its main structure. For example, I need the structures of Player2 to explode when any Unit of Player1 is in Region1; I don't care which unit, only that it belongs to Player1.

Game design helper elements

There are different elements that help the Game Designer when creating a map: Regions, Points, Paths and Unit Groups, among others. These elements are normally not visible to the Player but are really useful for the Game Designer to have more control over the logic.

As said before, the SC2 Editor is pretty complete; it lets you do a lot of stuff, from creating custom cutscenes to overriding game data to create new units, abilities and more, but that's food for another post.

Our Editor v0.1

Our first try at creating some kind of editor for our game wasn't so successful. Without the core of the game clearly defined, we tried to create an editor with a lot of the SC2 Editor's features. We spent some days defining a lot of stuff in the abstract, but in the end we aimed too far for a first iteration.

So, after that, we decided to start small. We started by making a way to detect events over the core that was still being defined at that point. An event could be, for example, "when units enter an area" or "when a resource spot is captured by a player".

Here are some of the events of one of our maps:

Note: even though they are Events, we named them Triggers (I don't know why), so an AreaTrigger is, in SC2 Editor terms, an empty Trigger with just an Event.

Events were the only thing in the editor; all the corresponding logic was done in code, in one class per map, which captured all the events and checked conditions before taking some action, normally sending enemies to attack some area.

Here is some example code for some of the previously defined events:
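To give an idea of what that looked like, here is a rough sketch of that kind of map class. The AreaTrigger name comes from the note above; the event signature, EnemySpawner and Team are hypothetical names, not our real API:

using UnityEngine;

// Rough sketch of the v0.1 approach, with hypothetical names (UnitsEntered,
// EnemySpawner, Team): one class per map subscribes to the events defined in
// the editor and hard-codes the conditions and actions in C#.
public class Map03Logic : MonoBehaviour
{
    public AreaTrigger landingZone;     // event element placed in the editor
    public EnemySpawner enemySpawner;

    bool waveSent;

    void OnEnable()
    {
        landingZone.UnitsEntered += OnUnitsEnteredLandingZone;
    }

    void OnDisable()
    {
        landingZone.UnitsEntered -= OnUnitsEnteredLandingZone;
    }

    void OnUnitsEnteredLandingZone(Team team)
    {
        // condition and action live in code, so any tweak needs a recompile
        if (waveSent || team != Team.Player)
            return;

        waveSent = true;
        enemySpawner.SendWave("LandingZoneAmbush", landingZone.Area);
    }
}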

It wasn't a bad solution but had some problems:

  • The actions were separated from the level design, which hurt the iteration cycle (at some point our project needed between 10 and 15 seconds to compile in the Unity Editor).
  • Since it needs code to work, it requires programming knowledge, and the Game Designers on our team aren't that comfortable with code.

Our Editor v0.2

The second (and current) version is more Game Designer friendly and closer to the SC2 Editor. Most of the logic is defined in the editor across multiple triggers. Each Trigger is defined as a hierarchy of GameObjects with specific components that define the Events, Conditions and Actions.

Here is an example of a map using the new system:

This declares, for example, trigger logic that is activated by time; it has no conditions (so it always executes when the event fires), sends some enemies in sequence and deactivates itself at the end.

We also created a custom Editor window to help create the trigger hierarchy and to simplify looking up the engine's Events, Conditions and Actions. Here is part of that editor showing some of the elements we have:

All those buttons automatically create the corresponding GameObject hierarchy with the proper Components so everything works according to plan. Since most of them need parameters, we use Unity's built-in feature of linking elements of a given type (a Component), so, for example, for the action of forcing the capture of a Capturable element by the Soldiers team, we have:

Unity allows us to pick a Capturable element (CapturableScript in this case) from the scene. This greatly simplifies the job of configuring the map logic.
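As a rough illustration, such an action component could be as simple as the following sketch (ForceCaptureAction and the Team enum are hypothetical names; CapturableScript is the component mentioned above):

using UnityEngine;

// Hypothetical sketch of an Action component: the editor links the scene's
// CapturableScript instance through the serialized field, and the owning
// Trigger calls Execute() when its event fired and its conditions passed.
public class ForceCaptureAction : MonoBehaviour
{
    [SerializeField] CapturableScript target;   // picked from the scene in the Inspector
    [SerializeField] Team team = Team.Soldiers; // hypothetical team enum

    public void Execute()
    {
        target.ForceCapture(team);
    }
}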

Some common conditions could be checking if a resource spot is controlled by a given player or if a structure is alive. Common actions could be sending a wave of enemy units to a given area or deactivating a trigger.

The base code is pretty simple: it mainly defines the API, while the real value of this solution is in the custom Events, Conditions and Actions.
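Just to make the idea concrete, here is a minimal sketch of what an API like that could look like; the class names and the way components are gathered from the hierarchy are assumptions, not our actual implementation:

using System.Linq;
using UnityEngine;

// Hypothetical sketch of the base API: a Trigger collects the Event, Condition
// and Action components found in its GameObject hierarchy and runs the actions
// when an event fires and all conditions are met.
public abstract class TriggerEvent : MonoBehaviour
{
    public event System.Action Fired;
    protected void Fire() { if (Fired != null) Fired(); }
}

public abstract class TriggerCondition : MonoBehaviour
{
    public abstract bool IsMet();
}

public abstract class TriggerAction : MonoBehaviour
{
    public abstract void Execute();
}

public class Trigger : MonoBehaviour
{
    void OnEnable()
    {
        foreach (var triggerEvent in GetComponentsInChildren<TriggerEvent>())
            triggerEvent.Fired += OnEventFired;
    }

    void OnDisable()
    {
        foreach (var triggerEvent in GetComponentsInChildren<TriggerEvent>())
            triggerEvent.Fired -= OnEventFired;
    }

    void OnEventFired()
    {
        // run the actions only if every condition in the hierarchy is met
        if (GetComponentsInChildren<TriggerCondition>().All(c => c.IsMet()))
            foreach (var action in GetComponentsInChildren<TriggerAction>())
                action.Execute();
    }
}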

Pros

  • Visual, and more Game Designer friendly (it is easier for Programmers too).
  • Faster iteration speed: now we can change things directly in the Editor, even at runtime!
  • Easily extensible by adding more Events, Conditions and Actions, and transparent to the Game Designers since new ones automatically show up in our custom Editor.
  • Takes advantage of the Unity Editor for configuring stuff.
  • Easy to enable/disable some logic by turning the corresponding GameObject on or off, which is good for testing something or disabling one piece of logic for a while (for example, during in-game cinematics).
  • More control for the Game Designers: they can test and prototype stuff without asking the programming team.
  • Simplified workflow for our in-game cinematics.
  • Compatible with our first version, both can run at the same time.

Cons

  • Merging the stage is harder now that it is serialized with the Unity scene; with code we didn't have merge problems, or at least they were easier to fix. One idea to simplify this is to break the logic into parts and use prefabs for those parts, but that breaks when there are links to scene instances (which is a common case).
  • A lot of programming responsibility is transferred to the scripting team, which in this case is the Game Design team. That means possibly bad code (for example, duplicated logic), bugs (forgetting to turn off a trigger after processing its actions) and even performance issues.

Conclusion

When designing (and coding) a game, it is really important to have a good iteration cycle in every aspect of the game. Switching to a more visual solution, with all the elements at hand and avoiding code as much as we could, helped a lot with that goal.

Since what we ended up doing looks similar to a scripting engine, why didn't we go with a solution like uScript or similar in the first place? The real answer is that I didn't try other Unity scripting solutions in depth (I'm not so happy about that); each time I tried one a bit, it gave me the feeling it was too much for what we needed, and I was unsure how they perform on mobile devices (I never tested that). Also, I wasn't aware we would end up needing a scripting layer, so I preferred to evolve with our needs, from small to big.

Taking some time to research other games and play with the SC2 Editor helped me a lot when defining how our engine should work and why we should go in a given direction. There are more aspects of our game that were influenced in some way by how other RTS games do things, which I may or may not share in the future, who knows.

I love RTS games, did I mention that before?

Returning to Global Game Jam after 5 years

After five years of Global Game Jam vacations, we joined this year's edition last weekend.

It was a really enjoyable time and we had a lot of fun. In this blog post I would love to share my experience as a kind of postmortem article with animated gifs because, you know, animated gifs rule.

The theme of this jam was “Waves”, and together with my team I made a 1v1 fighting game where players attack each other by sending waves through the floor tiles to hit their adversary.

Here is a video of our creation:

My team was composed of:

Here are links to download the game and to its source code:

Postmortem

Developing a game in two days is a crazy experience and so much can happen; it is an incredible survival test where you try not to kill each other (joking) while making the best game ever (which doesn't happen, of course). Here is my point of view of what went right and what could have been better.

What went right

From the beginning we knew we wanted to finish the jam with something done, and by that we mean nothing incomplete goes into the game. That helped us keep a small and achievable scope for the time we had.

When brainstorming the game we got a bit lucky and reached a solid idea twenty minutes before pitching to the rest of the jammers, after discussing for a couple of hours. The good part was not only that we were happy with it, but also that it was related to the theme (we wanted to do a beat'em up but couldn't figure out how to link it to the theme).

Deciding to use familiar tools like GitHub, Unity and Adobe Animate, and doing something we all had some experience with, helped us a lot to avoid unnecessary complications. I should also mention that half of the team works together at Ironhide, so we know each other well.

Our first prototypes, both visual and mechanical, were completed soon enough to validate that we were right about the game, and they were also a great guide during development. Here is a gif of each one.

Visual prototype made in Adobe Animate.

Game mechanics prototype made in Unity

Our artists did a great job with the visual style direction, which was really aligned with the setting and background story we came up with while brainstorming. If you are interested, here is the story behind the game.

The awesome visual style of the game.

Using a physics engine can seem like overkill, mainly for small games, but the truth is that, when used right, it can enable unexpected mechanics that add value to the game while simplifying things. In our case we used it not only to detect when a wave hits a player but also to detect when to animate the tiles, and it was not only a good decision but also really fun.

Animated tiles with hitbox using physics.

Using good practices like prototyping small stuff in separate Unity scenes, as we did with the tile animations, allowed us to quickly iterate on how things should behave so they became solid before integrating them with the rest of the game.

Knowing when to say no to new stuff helped us keep the focus on small, achievable things, even though, from time to time, we took some time to think up new and crazy ideas.

Having fun was a key pillar to developing the game in time while keeping a good mood during the jam.

What could have been better

We left the sound effects to the end and couldn't iterate enough, so we aren't so happy with the results. However, it was our fault; the guy who did the sounds was great and helped us a lot (if you are reading this, thank you!). In the future, a better approach would be to quickly test with placeholder sounds, made with our voices or downloaded from other games, to give him better references to work with.

We didn't give ourselves room to learn and integrate things like visual effects and particles, which would have improved the game experience a lot. Next time we should spend some time researching what we want to do and how it could be done before discarding it; I am sure some lighting effects could have improved the visual experience a lot.

We weren't fast enough at adapting part of the art we were doing to where the game mechanics were leading us, which translates into art we couldn't implement, or art that isn't appreciated enough because it doesn't show up much during the game. If we had reacted faster to that we could have dedicated more time to other things we wanted to do.

Too many programmers for a small game didn't help when dividing tasks in order to do more in less time. In some way this forced us to work together on one machine, and that wasn't bad at all. It is not easy to know how many programmers you need before forming the team, nor to adapt the game to the number of programmers. In the end I felt it was better to keep a small scope instead of trying to do more just because a programmer was idle. In our case our third programmer helped with game design, testing and interacting with the sound guy.

Since the art precedes the code, it should be completed well before integrating it, and since we couldn't anticipate the assets, we could barely put them into the game. In fact, we couldn't integrate some of them. The lesson is: test the pipeline, and know that the art deadline is roughly two hours before the jam's end in order to give the programmers time.

Not using Google Drive in the best way complicated sharing assets from artists to developers, since we had to upload and download manually. A better option would have been Dropbox, since it syncs automatically. Another would have been forcing the artists to integrate the assets into the game themselves using Unity and Git, but I didn't want them to hate me so quickly, so they avoided this growth opportunity.

The game wasn't easy to balance since it needed two players to test and improve it. Our game designer tried the game each time a new player came by to see what we were doing, but those were isolated cases. I believe we were also affected a bit by the target resolution of the first visual prototype in terms of game space, which left us only with the wave speed factor to tune, and if we made it too slow the experience suffered.

We didn't start brainstorming fast enough; it was a bit risky and we almost didn't find a game in time. Next time we should start as soon as possible.

Some of the ideas we said no to because of scope were really fun, and in some way we may have lost some opportunities there.

Drinking too much beer on Saturday could have reduced our capacity, but since we can't prove it, we will not stop drinking beer during Global Game Jams.

Conclusion

Participating in the Global Game Jam was great, but you need a free weekend to do it; for some of us it is not that simple to make time to join the event, but I'm really sure it is worth it.

I hope to have time for the next one and see our potential unleashed again. Until then, thanks to my team for making a good game with me in only one weekend.

Cheers.

 

Making mockups and prototypes to minimize problems

I'm not inventing anything new here; I just want to share how making mockups and prototypes helped me clarify and minimize some problems, and in some cases even solve them at almost no cost.

For prototypes and mockups I'm using the Superpower Assets Pack from Sparklin Labs, which gave me a great way to start visualizing a possible game. Thank you for that, guys.

I will start by talking about how I used visual mockups to quickly iterate multiple times over the layout of my game's user interface, to remove or reduce a lot of unknowns and possible problems.

After that, I will talk about making small, quick prototypes to validate ideas. One of them is about performing player actions with a small delay (to simulate network latency) and the other is about how to give each player a different view of the same game world.

UI mockups

For the game I'm making, the player's actions were basically clear, but I didn't know exactly how the UI was going to be, and considering I have little experience making UIs, having a good UI solution is a big challenge.

In the current game prototype iteration, the players only have four actions: build unit, build barracks, build house and send all units to attack the other player. At the same time, to perform those actions, they need to know how much money they have, the available unit slots and how much each action costs.

To start solving this problem, I quickly iterated through several mockups made directly in a Unity scene, using a game scene as background, to test each possible UI problem. For each iteration I deployed it to the phone and "tested it", detecting problems early, like "the buttons are too small" or "I can't see the money because I'm covering it with my fingers", etc.

Why did I use Unity when I could do this with any image editing application and just upload the image to the phone? Well, that's a good question. One answer is that I'm more used to doing all this stuff in Unity and I already had the template scenes. The other is that I was testing, at the same time, whether Unity's UI solution supported what I was looking for, and I could even start testing interaction feedback, like how a button reacts when touched, whether the money turns red when there isn't enough, etc., something I couldn't test with images alone.

The following gallery shows screenshots of different iterations where I tested button positions, sizes, information and support for possible future player actions. I won't go into detail here because I don't remember the exact order of the tests, but you can get an idea by looking at the images.

It took me less than two hours to go through more than ten iterations, even testing visual feedback after discovering that the player should quickly know when an action is disabled because of money restrictions or a lack of available unit slots, etc. I even had to consider changing the scale of the game world to reserve more empty space for the UI.

Player actions through delayed network

When playing networked games, one possible issue in my mind was that the player should receive feedback instantly even though the real action could be delayed a bit to be processed on the server. In the case of a move-unit action in an RTS, the feedback could just be an animation showing the move destination, with the action processed later, but when the action involves consuming a resource it could be a little tricky, or at least I wasn't sure, so I decided to make a quick test for that.

Similar to the previous case, I created a Unity scene in a separate project; I wanted to iterate really fast on this one. The idea to test was to have a way of processing part of the action on the client side, to validate the preconditions (enough money) and give the player instant feedback, and then process the action when it should be processed.

After analyzing it a bit, my main concern was the player's experience of executing an action, receiving instant feedback and then watching the action be processed later, so I didn't need any networking code; I could test everything locally.

The test consisted of building white boxes with the right mouse button; each box costs $20 and you start with $100. The idea is that the moment the button is pressed, a white box with half opacity appears, giving the idea that the action was processed, and $20 is consumed, so you can't perform another action that needs more money than you have left. After a while, the white box is actually built and the preview disappears.

Here is a video showing it in action:

In the case of a server validating the action, it would work similarly; the only difference is that the server could fail to validate the action (for example, the other player stole your money first), in which case the game has to cancel it. So the next test was to handle that case (visually) to see how it looks and feels. The idea was similar to the previous case, but after a while the game returns the money and the box preview disappears.

Here is a video showing this case in action:

It shouldn't be a common case, but this is one idea of how it could be solved, and I don't think it is a bad solution.
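A minimal sketch of the idea, done purely locally with a coroutine standing in for the server round trip (the class name and the preview/build placeholders are hypothetical, not the prototype's actual code), could look like this:

using System.Collections;
using UnityEngine;

// Sketch of the optimistic-action test: money is reserved and a half-opacity
// preview shown immediately, then the action is either confirmed (box built)
// or rejected (money refunded) once the simulated server answer arrives.
public class OptimisticBuilder : MonoBehaviour
{
    [SerializeField] int money = 100;
    [SerializeField] int boxCost = 20;
    [SerializeField] float serverDelay = 1.5f;   // simulated round trip

    public void TryBuildBox(Vector3 position)
    {
        if (money < boxCost)
            return;                       // instant local validation

        money -= boxCost;                 // reserve the resource right away
        var preview = ShowPreview(position);
        StartCoroutine(ResolveLater(preview, position));
    }

    IEnumerator ResolveLater(GameObject preview, Vector3 position)
    {
        yield return new WaitForSeconds(serverDelay);

        bool accepted = SimulateServerValidation();
        Destroy(preview);

        if (accepted)
            BuildBox(position);           // the real box replaces the preview
        else
            money += boxCost;             // rollback: refund the reserved money
    }

    // Placeholders for the visual side of the prototype.
    GameObject ShowPreview(Vector3 position) { return new GameObject("preview"); } // half-opacity box
    void BuildBox(Vector3 position) { }                                            // full-opacity box
    bool SimulateServerValidation() { return Random.value > 0.2f; }                // occasionally reject
}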

Different views of the same game world

The problem I want to solve here is how each player will see the world. Since I can't have a different world for each player, the idea is to have different views of the same world. In 3D games, having different cameras should do the trick (I suppose), but I wasn't sure if that worked the same way for a 2D game, so I had to make sure with a prototype.

One thing to consider is that, in the case of the UI, each player should see their own actions in the same position.

For this prototype I used the same scene background as for the mockups, but this time I created two cameras, one rotated 180 degrees to show the opposite view:

player0(player 1 view)

player1(player 2 view)

Since the UI should be unique for each player, I configured one canvas per camera and used the culling mask to show one canvas or the other.
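A minimal sketch of that setup, assuming two project layers named UIPlayer1 and UIPlayer2 and one Screen Space - Camera canvas assigned to each camera, could be:

using UnityEngine;

// Sketch of the two-view test: the second camera is rotated 180 degrees so each
// player sees the shared 2D world from their own side, and each UI canvas lives
// on its own layer so a camera's culling mask only renders its owner's UI.
public class SplitViewSetup : MonoBehaviour
{
    [SerializeField] Camera player1Camera;
    [SerializeField] Camera player2Camera;

    void Start()
    {
        // same world, opposite orientation for the second player
        player2Camera.transform.rotation = Quaternion.Euler(0f, 0f, 180f);

        // each camera renders the world plus only its own UI layer
        int world = LayerMask.GetMask("Default");
        player1Camera.cullingMask = world | LayerMask.GetMask("UIPlayer1");
        player2Camera.cullingMask = world | LayerMask.GetMask("UIPlayer2");
    }
}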

Again, this test was really simple and quick, I believe I spent about 30 minutes on it. The important thing is that I now know this is a possible (and probably the) solution for this problem; before the test I wasn't even sure whether it was a hard problem or not.

Conclusions

One good thing about making prototypes is that you can take a lot of shortcuts and assume stuff, since the code and assets are not going to end up in the game; that gives you really fast iteration times and lets you focus on one problem at a time. For example, for the mockups I put all the assets in one folder and used the Unity sprite packer without spending time on which texture format to use for each mobile platform, whether all the assets fit in one texture, or stuff like that.

Making quick prototypes, early, of things you don't know how to solve gives you a better view of the scope of the problems, and it is better to know that as soon as possible. Sometimes you get a lead on how to solve them and how expensive the solution could be, and that gives you a good idea of when you have to attack that problem, or whether you want to at all (for example, you might drop a specific feature). If you can't figure out possible solutions even after making prototypes, then that feature is even harder than you thought.

Prototyping is cheap and fun because it provides a creative platform where the focus is on solving one problem without a lot of restrictions, which allows developing multiple solutions, and since it is a creative process, everyone in game development (game designers, programmers, artists, etc.) can participate.


Delegating responsibilities from the engine to the game

When building a platform for a game I tend to spend too much time thinking about solutions for every possible problem, partly because it is fun and a good exercise, and partly because I'm not sure which of those problems will actually affect the game. The idea of this post is to share why I believe it is sometimes better to delegate problems and their solutions to the game instead of solving them in the engine.

In the case of the game state, I was trying to create an API on the platform side that the game could use to easily represent it. I started with a way to collaborate with the game state by storing values in it, like this:

public interface GameState {
    void storeInt(string name, int number);
    void storeFloat(string name, float number);
}

In order to use it, a class on the game side has to implement an interface which allows it to collaborate in part of the game state data:

public class MyCustomObject : GameStateCollaborator
{
    public void Collaborate(GameState gameState){
        gameState.storeFloat("myHealth", 100.0f);
        gameState.storeInt("mySpeed", 5);
    }
}

It wasn't bad in the first tests, but when I tried to use it in a more complex situation the experience was a bit cumbersome, since I had more data to store. I even felt like I was trying to recreate a serialization system, and that wasn't the idea of this API.

Since I have no idea what the game wants to save or even how it wants to save it, I changed the paradigm a bit. The GameState is now more of a concept without an implementation; that part is decided on the game side.

public interface GameState {
     
}

So after that change, the game has to implement the GameState and each game state collaborator will have to depend on that custom implementation, like this:

public class MyCustomGameState : GameState  
{
    public int superImportantValueForTheGame;
    public float anotherImportantValueForTheGame;
}

public class MyCustomObject : GameStateCollaborator
{
    public void Collaborate(GameState gameState)
    {
        var myCustomGameState = gameState as MyCustomGameState;
        myCustomGameState.anotherImportantValueForTheGame = 100.0f;
        myCustomGameState.superImportantValueForTheGame = 5;
    }
}

This way, I don't know or care how the game stores its game state; what I know is that there is a concept of GameState responsible for providing the information the platform needs to make features like replays or savegames work. For example, at some point I could have something like:

public interface GameStateSave 
{
    void Save(GameState gameState);
}

And let the game decide how to save its own game state, even though the engine is responsible for how and when that interface is used to perform the task.
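For example, a game-side implementation could be as simple as the following sketch (the serialization format and the PlayerPrefs destination are just illustrative choices, not the actual implementation):

// Illustrative game-side implementation of GameStateSave: the game knows its own
// concrete GameState type and picks the format and destination it wants (a plain
// string in PlayerPrefs here), while the engine only decides when Save() is called.
public class SimpleGameStateSave : GameStateSave
{
    public void Save(GameState gameState)
    {
        var state = (MyCustomGameState)gameState;
        string serialized = state.superImportantValueForTheGame + ";" +
                            state.anotherImportantValueForTheGame;
        UnityEngine.PlayerPrefs.SetString("savegame", serialized);
    }
}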

In the end, the platform/engine ends up being more like a framework, providing tools and life cycles so the user (the game) can easily do some work in a common way.


The story of the non deterministic Replay

This is the story of how I discovered that my simplified replay system wasn't as deterministic as I believed, because I had an ugly bug; read the post if you want to know where exactly.

While integrating the lockstep engine into the game I'm working on, I decided to add a way to save and load replays, to be able to easily reproduce some bugs I was running into. After I had that done and working, it was pretty awesome to see that I could replay the same game multiple times (food for another blog post, by the way). Then I thought it could be fun, and easy, to play the replays faster, why not.

Since I have fixed timestep logic, it should be pretty straightforward: simply multiply the time, and the fixed timestep logic would do all the work, with the game logic not noticing the change. I decided to give it a try and it worked… almost. When playing replays at higher speeds I noticed some visual differences, but I wasn't totally sure (it could have been the interpolation code).

To verify, I went back to the test project, where I had the moving box, and tested it there, but I needed some way to be sure. Since I already have a way to calculate checksums of the game state, I used that to validate the game states when playing replays at different speeds (from 2x to 16x).

It failed, though only on some frames, and the frames following an invalid one were not necessarily invalid themselves (this is something important to consider).

invalid-state Image 1: It shows one of the best tools in the world to check game states when replaying a game.

So, I was right; I had seen some differences and could be sure that something was happening. The thing was, with only the checksums I couldn't know what the real difference was. Next step: build something to detect it.

To do that, I had to start saving (at least for debugging) the full game state, not only the checksum, and to diff the game state stored in the replay against the current game state whenever a checksum validation failed while replaying. That worked too; now I had the exact place where the differences were.

invalidstate_realdiff

Image 2: It shows why serializing all the game state in one string is the best thing to do in your life.

After testing it a bit, I noticed another curious thing: the validation wasn't always failing given the same replay and the same speed. That gave me a hint that the problem was probably not related to the game code itself (the moving box).

So, if I played the replay at 1x, it validated properly. If I played it at 8x, it failed most of the time, but not always. It seemed there was something related to speed that I didn't understand yet.

I decided to test the same replay with Unity's timescale modified. My first test used 1x for the replay but 5x for the timescale: validation failed. Then the opposite, 10x for the replay but 0.1x for the timescale, and it worked fine. So was the problem related to my accumulator logic inside the fixed timestep logic?

Some test cycles later, it turned out to be a bug in one of the core classes of the engine!

The problem was in my LockstepFixedUpdate class: the first version overrode the Update() method and performed the lockstep logic there. That worked fine as long as at most one fixed update was processed per frame, but when a big delta time arrived, it processed the lockstep logic only once, for the first fixed update, and never again.

That means that if replay commands were to be processed in frame 3, we were in frame 1 and a big dt of 10 frames arrived, the lockstep logic was checked only in frame 1 and never again until all 10 frames were processed. This bug even bypassed the lockstep!

Since I had made a test that replicated the bug, it was really easy to fix: I changed it to process the lockstep logic on each fixed step update and it works fine now. I have high speed replays!! YEAH!!
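Here is a sketch of the fixed version; the names approximate the classes mentioned in this post but are not the actual code. The key point is that the lockstep check now runs inside the accumulator loop, once per fixed step, so a big delta time can no longer skip over a lockstep frame:

// Hypothetical interface matching the pseudocode from the lockstep post.
public interface ILockstepLogic
{
    bool IsLockstepTurn(int frame);
    bool IsReady(int frame);
    void Process(int frame);
}

public class LockstepFixedUpdate
{
    readonly float fixedStep;
    readonly ILockstepLogic lockstep;
    float accumulator;
    int frame;

    public LockstepFixedUpdate(float fixedStep, ILockstepLogic lockstep)
    {
        this.fixedStep = fixedStep;
        this.lockstep = lockstep;
    }

    public void Update(float deltaTime)
    {
        accumulator += deltaTime;

        // process every fixed step contained in this delta time, checking the
        // lockstep condition for each one instead of only for the first
        while (accumulator >= fixedStep)
        {
            if (lockstep.IsLockstepTurn(frame))
            {
                if (!lockstep.IsReady(frame))
                    return;                 // wait: keep the remaining time accumulated
                lockstep.Process(frame);
            }

            GameStep(frame, fixedStep);     // advance the simulation one fixed step
            frame++;
            accumulator -= fixedStep;
        }
    }

    void GameStep(int frame, float dt)
    {
        // fixed game/physics update goes here
    }
}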

Conclusion

In the process of finding this bug I started to expand the engine's support code and create better tools; this is really important if I want to build something solid on top of it.

The only way to detect issues as soon as possible is to iterate over the engine as soon as possible, and to do that, use cases are needed, and games provide the best use cases. In my case, I'm using not only the game I'm trying to make but also other, similar games as references when deciding what I want and how I want to test it, for example having replays, being able to play a replay at different speeds, being able to save replays, etc. Also, being able to replicate a bug in a small test case where you can iterate quickly to fix it is super useful.

Detecting (and having) problems like this in a small and simple game gives an idea of the complexity of a medium to big game, with all its variables and difficulties. It is not something to underestimate, so when developers say they couldn't add multiplayer features to their game because it was really hard, it is not a lie.

I love all of this stuff, even though I understand it is not an easy path.

To complete this post, here is a video showing a prototype of how I load and play a replay that was created by playing with two players over LAN, one on my computer and the other on my phone:

The quote of the day is: 'Fail as much as possible, as soon as possible, to avoid failing when it is too late.'

Hope you enjoyed the journey.


Lockstep multiplayer first steps

In this Gamasutra article, the Tundra developers explain their approach to minimizing problems when developing the lockstep multiplayer platform used for their game Rapture - World Conquest. Based on that, my first step was to create and understand deterministic lockstep logic without even considering networking yet, to focus on one problem at a time.

My objective is to have a reproducible game: given the start state and all the players' actions over time, the game can be played again and the final state will be the same.

Lockstep logic

The first step was to create logic that encapsulates the fixed gamestep for physics and game logic, and a lockstep to perform player actions. Since I started without considering networking, all the player actions are enqueued locally, either by user input or by a saved replay.

Lockstep logic means there are conditions that, if not met, make the game pause the simulation and wait for them. In multiplayer games this is when the game has to wait for other players' actions that didn't arrive in time (and when the waiting-for-other-players dialog is shown in some games).

My code looks something like this pseudo code:

update(dt) {
    if (lockstepLogic.IsLockstepTurn()) {
        if (!lockstepLogic.IsReady())
            return;
        lockstepLogic.Process();
    }
    // normal accumulator logic for the fixed gamestep
}

Since I create all the player actions locally, the lockstep wait never happens right now, but it is good practice to simulate and test it anyway.

Testing it

My first prototype using this logic is a box that moves around the screen. When the right mouse button is pressed, the game enqueues a player action to move the box to the specified position; then, when the lockstep logic is processed, the box receives the command and starts moving to that position.

Interpolation

The movement logic was pretty simple: based on a start position, a destination and a speed, the box moves itself to the destination in a straight line. Since that logic is performed on each fixed gamestep, the player sees the box jumping between positions; it works, but it doesn't look so good. To improve that and make the movement smoother, I created some simple interpolation code.

Interpolation depends a lot on what you are trying to interpolate; in some cases it can be as simple as the code I used there, but there are other cases, like a bouncing ball, that need more data to create a better interpolation.

Note: adding interpolation means we now have a delay of one fixed gamestep between the box position being rendered and its real position in the game, since we use the previous and current positions for the interpolation.
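A minimal sketch of that interpolation, assuming the view is fed the simulated position each fixed gamestep and an alpha value (accumulator divided by the fixed step) each render frame, could be:

using UnityEngine;

// Sketch of the interpolation idea: the view renders between the previous and
// current fixed-step positions using how far we are into the next step (alpha),
// which is why the rendered box lags the simulation by one fixed gamestep.
public class InterpolatedBoxView : MonoBehaviour
{
    Vector2 previousPosition;
    Vector2 currentPosition;

    // called once per fixed gamestep with the newly simulated position
    public void OnFixedStep(Vector2 simulatedPosition)
    {
        previousPosition = currentPosition;
        currentPosition = simulatedPosition;
    }

    // alpha = accumulator / fixedStep, in the range [0, 1)
    public void Render(float alpha)
    {
        transform.position = Vector2.Lerp(previousPosition, currentPosition, alpha);
    }
}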

My first replays

By recording all the player actions along with the fixed gamestep frame in which they were executed, I created a basic way to replay the "game": just reset the game state to the initial state and start enqueuing all the recorded actions in their corresponding frames.
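A rough sketch of that recording/playback idea (IGameAction is a placeholder for a player command, not the engine's real type) could look like this:

using System.Collections.Generic;

// Sketch of the replay idea: actions are recorded with the fixed gamestep frame in
// which they were applied, and replaying just resets the game state and feeds the
// same actions back at the same frames.
public interface IGameAction
{
    void Execute();
}

public class Replay
{
    class RecordedAction
    {
        public int Frame;
        public IGameAction Action;
    }

    readonly List<RecordedAction> recordedActions = new List<RecordedAction>();

    public void Record(int frame, IGameAction action)
    {
        recordedActions.Add(new RecordedAction { Frame = frame, Action = action });
    }

    // called once per fixed gamestep while replaying: executes every action recorded for that frame
    public void Playback(int frame)
    {
        foreach (var recorded in recordedActions)
            if (recorded.Frame == frame)
                recorded.Action.Execute();
    }
}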

Here is a video of the progress so far:

Yeah, I know, it's not a great thing, but at least I am starting to understand some of the challenges of making multiplayer games.

Validating simulation

To validate that the game always performs the same simulation given the same input, I added a checksum calculation based on the game state (for now, just the moving box state), which is saved from time to time and used later as validation when simulating the game again. The idea was to start defining an API to get the important game state to consider when validating the simulation, and also to start testing game state validation. The code looks like this:

update(frame) {
    if (IsChecksumFrame(frame)) {
        if (recording) {
            checksumRecorder.RecordState(frame, CalculateChecksum());
        } else {
            if (!checksumValidator.IsValid(frame, 
                   checksumRecorder.SavedChecksums, CalculateChecksum())) {
               throw Exception("current game state is not valid!");
            }
        }
    }
}

In my first tests I wasn't resetting the game to the initial state properly, so the game produced different checksums when reproducing the saved player actions; checksum validation started to work after that was fixed.

How am I calculating the checksum? Right now I'm not sure exactly which algorithm to use nor which parts of the game state to include in the checksum, so I encapsulated both behind interfaces and implemented a basic version of each. For the checksum I'm using a simple MD5 over a string, and for the game state, well..., a string with all the important values concatenated (the moving box position, destination, whether it is moving or not, etc.).

string CalculateChecksum() {
    string gameStateValue = game.GetGameState();
    return MD5.CalculateHash(gameStateValue);
}

string GetGameState() {
    string gameState = "";
    foreach (object in gameObjects)
        gameState += object.GetGameState();
    return gameState;
}

For a future implementation, what I know is that the game state should be composed of the important values that can affect the simulation, so I shouldn't care about audiovisual stuff like particles, effects and sounds.

Also, the game state concept could be used to reset to the initial game state or even for saved game states (to easily replicate some bug for example), and finally, it could be used to synchronize and validate state if for some reason I end up using a client/server architecture.

Next frontier: Determinism?

For now I haven't explored the determinism realm, because the solution really depends on the game logic, but at the same time I must have it clear before starting the game code. One of the next steps is probably to start testing with fixed-point math, I'm not sure yet; the idea is to follow an approach similar to the Gamasutra article's, reducing the non-determinism problem to a minimum before going multiplayer.

If you want to take a look at all the code used for this blog post, here is the link.

Other links

Example of a dynamic lockstep implementation for Unity

Lockstep Framework for Unity

Reddit post about that Lockstep Framework

Another framework named lockstep.io

Developing your own Replay System


Our solution to handle multiple screen sizes in Android – Part three

In the previous posts of this series we talked about our solution for handling multiple screen sizes in game menus; in particular we showed the main menu of the game Clash of the Olympians. In this post we are going to talk about what we did inside the game itself. As a side note, the solution we used here is simple and specific to this game; we hope it helps as an example, but don't expect a silver bullet.

Scaling to match the physics world

As we use Box2D in Clash of the Olympians, the first step was to use a proper scale between the Box2D bodies and our assets. The basic approach was to consider that 1m (meter in the MKS system) was 32px, so our target resolution of 800x480 could show 25m x 15m. We picked that scale because it gives pretty numbers both in terms of the game area and in terms of our assets; for example, a character 64px tall is 2m tall. In particular, Achilles has a height of approximately 60px, which is equivalent to 1.875m using our scale; that sounds pretty reasonable for that character.

clashoftheolympians-800x480
The image shows the relation between screen size in pixels (800x480 in this case) and the game world in meters.

Defining a virtual area to show

We previously said that we could show 25m x 15m; in fact, the height is not so important in Clash of the Olympians since the game mainly depends on horizontal distance. So, if we had an imaginary device with a resolution of 800x400 (really wide, an aspect ratio of 2), we would show 12.5m of height; we assumed that as long as we show at least that height the game balance would not be affected at all (enemies are never spawned too high). In terms of horizontal distance, however, we want to always show the same area across all devices to avoid changing the game balance (for example, if you could see less area you couldn't react in time to some waves), which is why we decided to always show 25m of width.

clashoftheolympians-800x600

The image shows how we still show the same game world width of 25m on a 800x600 device.

Scaling the world back to match the screen size

Finally, in order to show this virtual area of 25m x H (with H >= 12.5m), we have to calculate the proper scale for our game camera on each device. For example, on a Nexus 7 (a 1280x720 device) the scale to show 25m of horizontal size is 51.2x, since we know that 1280 / scale = 25, so 1280 / 25 = 51.2. On a Samsung Galaxy Y (a 480x320 device) the scale would be 19.2x, since 480 / 25 = 19.2. Translating this into the game is as easy as:

camera.scale = screen.width / 25

Final thoughts

This is not a general solution; it depends a lot on the game we were making and the things we could assume, like the game height not mattering.

Even though the solution is specific and not as cool as the previous posts, we hope it helps when making your own game.


Our resources manager and how it helped in localizing Clash of the Olympians

In this post we want to share how our resources manager implementation helped us when we had to localize Clash of the Olympians.

Our resources manager

Some time ago we started a small Java project named jresourcesmanager (yeah, the most creative name in the world) which provides a simple abstraction of what a resource is and how it is loaded. It consists of a few basic concepts named Resource, DataLoader and ResourceManager.

Resource

This is the concept of an application resource/asset, and it provides an API to know whether the resource is loaded and to load and unload it. Here is the code:

public class Resource<T> {

	T data = null;

	DataLoader<T> dataLoader;

	protected Resource(DataLoader<T> dataLoader) {
		this(dataLoader, true);
	}

	protected Resource(DataLoader<T> dataLoader, boolean deferred) {
		this.dataLoader = dataLoader;
		if (!deferred)
			reload();
	}

	/**
	 * Returns the data stored by the Resource.
	 */
	public T get() {
		if (!isLoaded())
			load();
		return data;
	}
	
	public void set(T data) {
		this.data = data;
	}

	public DataLoader<T> getDataLoader() {
		return dataLoader;
	}
	
	public void setDataLoader(DataLoader<T> dataLoader) {
		unload();
		this.dataLoader = dataLoader;
	}

	/**
	 * Reloads the internal data by calling unload and load().
	 */
	public void reload() {
		unload();
		load();
	}

	/**
	 * Loads the data if it wasn't loaded yet, it does nothing otherwise.
	 */
	public void load() {
		if (!isLoaded())
			data = dataLoader.load();
	}

	/**
	 * Unloads the data by calling the DataLoader.unload(t) method.
	 */
	public void unload() {
		if (isLoaded()) {
			dataLoader.unload(data);
			data = null;
		}
	}

	/**
	 * Returns true if the data is loaded, false otherwise.
	 */
	public boolean isLoaded() {
		return data != null;
	}

	public Resource<T> clone() {
		return new Resource<T>(dataLoader);
	}

}

DataLoader

Provides an API to define how to load and unload a Resource; here is the code:

public abstract class DataLoader<T> {

	/**
	 * Implements how to load the data.
	 */
	public abstract T load();

	/**
	 * Implements how to unload the data, if it is not automatic and you need to unload stuff by hand.
	 */
	public void unload(T t) {

	}

	/**
	 * Provides a way to return custom information about the data loader.
	 * 
	 * @return An object with the custom information.
	 */
	public Object getMetaData() {
		return null;
	}

}

ResourceManager

It is just a map with all the resources identified by a key (a String, for example), plus an API to store resources in it and get them from it.

That is our resources management code; it is really simple and fulfilled our asset loading/unloading needs for Vampire Runner and Clash of the Olympians.

The interesting part of this library is that it lets you store whatever you want to treat as a resource/asset, and that feature is what we used to solve the localization issue.

Declaring resources

Another important pillar is how we declare resources easily in code, and that is by using builders that simplify the resource declaration. As we were using the LibGDX library, we created a resource builder for LibGDX assets and more. This resource builder allowed us to do stuff like this:

splitLoadingTextureAtlas("MainTextureAtlas", "data/images/screens/mainmenu/pack");
resource("MainBackgroundTop", sprite2().textureAtlas("MainTextureAtlas", "mainmenu-bg", 1).center(0.5f, 0.5f).trySpriteAtlas());

That declares a texture atlas named MainTextureAtlas and then a resource which is a sprite named MainBackgroundTop taken from the texture atlas resource identified by the previous name. It also allows us to do stuff like declaring that the sprite is centered in the middle (the anchor point) and more.

This code is not important; it is just an example of some of our builders.

Declaring localized resources

As we have the power to create complex resource builders and we needed a simple way to switch assets according to the selected language, we created a resource builder which lets us declare different resources, depending on the current locale, under the same identifier. So, for example, we can do something like this:

resource("CreditsButton", new MultilanguageResourceBuilder<Sprite>() //
	.defaultLocale(new Locale("en")) //
	.resource(new Locale("en"), sprite2().textureAtlas(TextureAtlases.MeinMenu, "mainmenu-but-credits-en", 1).trySpriteAtlas()) //
	.resource(new Locale("es"), sprite2().textureAtlas(TextureAtlases.MeinMenu, "mainmenu-but-credits-es", 1).trySpriteAtlas()) //
			);

That declares a resource named CreditsButton which returns the sprite with index 1 and name “mainmenu-but-credits-en” from a texture atlas when the locale is English, and the sprite with index 1 and name “mainmenu-but-credits-es” when the locale is Spanish. It also declares that the default locale (used when a resource for the current locale isn't found) is English.

This resource builder was very handy because it allowed us to declare any type of resource for different locales, and it was transparent from the application side; we just ask for the resource CreditsButton.

In case you are interested, the code of the MultilanguageResourceBuilder is:

public class MultilanguageResourceBuilder<T> implements ResourceBuilder<T> {

	private Map<Locale, ResourceBuilder<T>> resourceBuilders = new HashMap<Locale, ResourceBuilder<T>>();
	private Locale defaultLocale;

	@Override
	public boolean isVolatile() {
		if (defaultLocale == null)
			throw new IllegalStateException("Multilanguage resource builder needs a default locale");
		ResourceBuilder<T> resourceBuilder = resourceBuilders.get(defaultLocale);
		if (resourceBuilder == null)
			throw new IllegalStateException("Multilanguage resource builder needs a default resource builder");
		return resourceBuilder.isVolatile();
	}

	public MultilanguageResourceBuilder<T> defaultLocale(Locale locale) {
		this.defaultLocale = locale;
		return this;
	}

	public MultilanguageResourceBuilder<T> resource(Locale locale, ResourceBuilder<T> resourceBuilder) {
		resourceBuilders.put(locale, resourceBuilder);
		return this;
	}

	@Override
	public T build() {
		Locale locale = Locale.getDefault();

		if (defaultLocale == null)
			throw new IllegalStateException("Multilanguage resource builder needs a default locale");

		if (!resourceBuilders.containsKey(locale))
			locale = defaultLocale;

		ResourceBuilder<T> resourceBuilder = resourceBuilders.get(locale);

		if (resourceBuilder == null)
			throw new IllegalStateException("Multilanguage resource builder needs a default resource builder");

		return resourceBuilder.build();
	}

}

Conclusion

That was how we supported multiple languages in a way that was transparent to the application; we just need to change the current locale and reload the assets.

Hope this blog post helps if you are about to support multiple languages in your game, and see you next time.


Our solution to handle multiple screen sizes in Android – Part two

Continuing the previous blog post, in this post we are going to talk about the code behind the theory. It consists of three concepts: the VirtualViewport, the OrthographicCameraWithVirtualViewport and the MultipleVirtualViewportBuilder.

VirtualViewport

It defines a virtual area where the game content is contained and provides a way to get the real width and height to use with a camera so the virtual area is always fully shown. Here is the code of this class:

public class VirtualViewport {

	float virtualWidth;
	float virtualHeight;

	public float getVirtualWidth() {
		return virtualWidth;
	}

	public float getVirtualHeight() {
		return virtualHeight;
	}

	public VirtualViewport(float virtualWidth, float virtualHeight) {
		this(virtualWidth, virtualHeight, false);
	}

	public VirtualViewport(float virtualWidth, float virtualHeight, boolean shrink) {
		this.virtualWidth = virtualWidth;
		this.virtualHeight = virtualHeight;
	}

	public float getWidth() {
		return getWidth(Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
	}

	public float getHeight() {
		return getHeight(Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
	}

	/**
	 * Returns the view port width to let all the virtual view port to be shown on the screen.
	 * 
	 * @param screenWidth
	 *            The screen width.
	 * @param screenHeight
	 *            The screen Height.
	 */
	public float getWidth(float screenWidth, float screenHeight) {
		float virtualAspect = virtualWidth / virtualHeight;
		float aspect = screenWidth / screenHeight;
		if (aspect > virtualAspect || (Math.abs(aspect - virtualAspect) < 0.01f)) {
			return virtualHeight * aspect;
		} else {
			return virtualWidth;
		}
	}

	/**
	 * Returns the view port height to let all the virtual view port to be shown on the screen.
	 * 
	 * @param screenWidth
	 *            The screen width.
	 * @param screenHeight
	 *            The screen Height.
	 */
	public float getHeight(float screenWidth, float screenHeight) {
		float virtualAspect = virtualWidth / virtualHeight;
		float aspect = screenWidth / screenHeight;
		if (aspect > virtualAspect || (Math.abs(aspect - virtualAspect) < 0.01f)) {
			return virtualHeight;
		} else {
			return virtualWidth / aspect;
		}
	}

}

So, if we have a virtual area of 640x480 and want to show it on a screen of 800x480, we can follow the next steps to get the proper values to use as the camera viewport for that screen:

VirtualViewport virtualViewport = new VirtualViewport(640, 480);
float realViewportWidth = virtualViewport.getWidth(800, 480);
float realViewportHeight = virtualViewport.getHeight(800, 480);
// now set the camera viewport values
camera.setViewportFor(realViewportWidth, realViewportHeight);

OrthographicCameraWithVirtualViewport

To simplify the work when using the LibGDX library, we created a subclass of LibGDX's OrthographicCamera with specific behavior to update the camera viewport using the VirtualViewport values. Here is its code:

public class OrthographicCameraWithVirtualViewport extends OrthographicCamera {

	Vector3 tmp = new Vector3();
	Vector2 origin = new Vector2();
	VirtualViewport virtualViewport;
	
	public void setVirtualViewport(VirtualViewport virtualViewport) {
		this.virtualViewport = virtualViewport;
	}

	public OrthographicCameraWithVirtualViewport(VirtualViewport virtualViewport) {
		this(virtualViewport, 0f, 0f);
	}

	public OrthographicCameraWithVirtualViewport(VirtualViewport virtualViewport, float cx, float cy) {
		this.virtualViewport = virtualViewport;
		this.origin.set(cx, cy);
	}

	public void setPosition(float x, float y) {
		position.set(x - viewportWidth * origin.x, y - viewportHeight * origin.y, 0f);
	}

	@Override
	public void update() {
		float left = zoom * -viewportWidth / 2 + virtualViewport.getVirtualWidth() * origin.x;
		float right = zoom * viewportWidth / 2 + virtualViewport.getVirtualWidth() * origin.x;
		float top = zoom * viewportHeight / 2 + virtualViewport.getVirtualHeight() * origin.y;
		float bottom = zoom * -viewportHeight / 2 + virtualViewport.getVirtualHeight() * origin.y;

		projection.setToOrtho(left, right, bottom, top, Math.abs(near), Math.abs(far));
		view.setToLookAt(position, tmp.set(position).add(direction), up);
		combined.set(projection);
		Matrix4.mul(combined.val, view.val);
		invProjectionView.set(combined);
		Matrix4.inv(invProjectionView.val);
		frustum.update(invProjectionView);
	}

	/**
	 * This must be called in ApplicationListener.resize() in order to correctly update the camera viewport. 
	 */
	public void updateViewport() {
		setToOrtho(false, virtualViewport.getWidth(), virtualViewport.getHeight());
	}
}

MultipleVirtualViewportBuilder

This class allows us to build a better VirtualViewport given the minimum and maximum areas we want to support, performing the logic we explained in the previous post. For example, if we have a minimum area of 800x480 and a maximum area of 854x600, then, given a device of 480x320 (3:2), it will return a VirtualViewport of 854x570, which is a good match: a resolution that contains the minimum area, is smaller than the maximum area and has the same aspect ratio as 480x320.

public class MultipleVirtualViewportBuilder {

	private final float minWidth;
	private final float minHeight;
	private final float maxWidth;
	private final float maxHeight;

	public MultipleVirtualViewportBuilder(float minWidth, float minHeight, float maxWidth, float maxHeight) {
		this.minWidth = minWidth;
		this.minHeight = minHeight;
		this.maxWidth = maxWidth;
		this.maxHeight = maxHeight;
	}

	public VirtualViewport getVirtualViewport(float width, float height) {
		// if the device resolution is already inside the supported range, use it as-is
		if (width >= minWidth && width <= maxWidth && height >= minHeight && height <= maxHeight)
			return new VirtualViewport(width, height, true);

		float aspect = width / height;

		float scaleForMinSize = minWidth / width;
		float scaleForMaxSize = maxWidth / width;

		// first try a viewport matching the device aspect ratio at the maximum width
		float virtualViewportWidth = width * scaleForMaxSize;
		float virtualViewportHeight = virtualViewportWidth / aspect;

		if (insideBounds(virtualViewportWidth, virtualViewportHeight))
			return new VirtualViewport(virtualViewportWidth, virtualViewportHeight, false);

		// then try the same aspect ratio at the minimum width
		virtualViewportWidth = width * scaleForMinSize;
		virtualViewportHeight = virtualViewportWidth / aspect;

		if (insideBounds(virtualViewportWidth, virtualViewportHeight))
			return new VirtualViewport(virtualViewportWidth, virtualViewportHeight, false);
		
		// unsupported aspect ratio, fall back to the minimum area
		return new VirtualViewport(minWidth, minHeight, true);
	}
	
	private boolean insideBounds(float width, float height) {
		if (width < minWidth || width > maxWidth)
			return false;
		if (height < minHeight || height > maxHeight)
			return false;
		return true;
	}

}

In case the aspect ratio is not supported, it will return the minimum area.
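
For example, this is roughly how the builder would be used for the 480x320 case mentioned above (a quick sketch; the getter names on VirtualViewport are the ones used elsewhere in this post):

MultipleVirtualViewportBuilder builder = new MultipleVirtualViewportBuilder(800, 480, 854, 600);
// hypothetical 480x320 (3:2) device
VirtualViewport virtualViewport = builder.getVirtualViewport(480, 320);
// virtualViewport.getVirtualWidth()  -> 854
// virtualViewport.getVirtualHeight() -> ~569.3 (the 854x570 mentioned above)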

Floating elements

As we explained in the previous post, some elements should always stay at a fixed position on the screen, for example the audio and music buttons in Clash of the Olympians. To achieve that, the position of those elements has to depend on the VirtualViewport. The next section, where we put everything together, shows an example of a floating element.
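
In short, a floating element is positioned relative to the corners of the VirtualViewport instead of a fixed world position, something like this sketch (the full example below does the same thing in resize(); the button Sprite and the 16px margin are just illustrative values):

// keep a hypothetical 64x64 button near the top right corner of the virtual viewport
// (the camera is centered at 0, 0), recomputed whenever the viewport changes
float margin = 16f;
button.setPosition(
		virtualViewport.getVirtualWidth() * 0.5f - button.getWidth() - margin,
		virtualViewport.getVirtualHeight() * 0.5f - button.getHeight() - margin);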

Using the code together

Finally, here is an example showing how to use these concepts in a LibGDX application:

public class VirtualViewportExampleMain extends com.badlogic.gdx.Game {

	private OrthographicCameraWithVirtualViewport camera;
	
	// extra stuff for the example
	private SpriteBatch spriteBatch;
	private Sprite minimumAreaSprite;
	private Sprite maximumAreaSprite;
	private Sprite floatingButtonSprite;
	private BitmapFont font;

	private MultipleVirtualViewportBuilder multipleVirtualViewportBuilder;

	@Override
	public void create() {
		multipleVirtualViewportBuilder = new MultipleVirtualViewportBuilder(800, 480, 854, 600);
		VirtualViewport virtualViewport = multipleVirtualViewportBuilder.getVirtualViewport(Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
		
		camera = new OrthographicCameraWithVirtualViewport(virtualViewport);
		// centers the camera at 0, 0 (the center of the virtual viewport)
		camera.position.set(0f, 0f, 0f);
		
		// extra code
		spriteBatch = new SpriteBatch();
		
		Pixmap pixmap = new Pixmap(64, 64, Format.RGBA8888);
		pixmap.setColor(Color.WHITE);
		pixmap.fillRectangle(0, 0, 64, 64);
		
		minimumAreaSprite = new Sprite(new Texture(pixmap));
		minimumAreaSprite.setPosition(-400, -240);
		minimumAreaSprite.setSize(800, 480);
		minimumAreaSprite.setColor(0f, 1f, 0f, 1f);
		
		maximumAreaSprite = new Sprite(new Texture(pixmap));
		maximumAreaSprite.setPosition(-427, -300);
		maximumAreaSprite.setSize(854, 600);
		maximumAreaSprite.setColor(1f, 1f, 0f, 1f);
		
		floatingButtonSprite = new Sprite(new Texture(pixmap));
		floatingButtonSprite.setPosition(virtualViewport.getVirtualWidth() * 0.5f - 80, virtualViewport.getVirtualHeight() * 0.5f - 80);
		floatingButtonSprite.setSize(64, 64);
		floatingButtonSprite.setColor(1f, 1f, 1f, 1f);
		
		font = new BitmapFont();
		font.setColor(Color.BLACK);
	}
	
	@Override
	public void resize(int width, int height) {
		super.resize(width, height);
		
		VirtualViewport virtualViewport = multipleVirtualViewportBuilder.getVirtualViewport(Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
		camera.setVirtualViewport(virtualViewport);
		
		camera.updateViewport();
		// centers the camera at 0, 0 (the center of the virtual viewport)
		camera.position.set(0f, 0f, 0f);
		
		// relocate floating stuff
		floatingButtonSprite.setPosition(virtualViewport.getVirtualWidth() * 0.5f - 80, virtualViewport.getVirtualHeight() * 0.5f - 80);
	}
	
	@Override
	public void render() {
		super.render();
		Gdx.gl.glClearColor(1f, 0f, 0f, 1f);
		Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
		camera.update();
		
		// render stuff...
		spriteBatch.setProjectionMatrix(camera.combined);
		spriteBatch.begin();
		maximumAreaSprite.draw(spriteBatch);
		minimumAreaSprite.draw(spriteBatch);
		floatingButtonSprite.draw(spriteBatch);
		font.draw(spriteBatch, String.format("%1$sx%2$s", Gdx.graphics.getWidth(), Gdx.graphics.getHeight()), -20, 0);
		spriteBatch.end();
	}

	public static void main(String[] args) {
		LwjglApplicationConfiguration config = new LwjglApplicationConfiguration();

		config.title = VirtualViewportExampleMain.class.getName();
		config.width = 800;
		config.height = 480;
		config.fullscreen = false;
		config.useGL20 = true;
		config.useCPUSynch = true;
		config.forceExit = true;
		config.vSyncEnabled = true;

		new LwjglApplication(new VirtualViewportExampleMain(), config);
	}

}

In the example there are three colors: green represents the minimum supported area, yellow the maximum supported area and red the area outside both. If you see red, that aspect ratio is not supported. There is also a white floating element which is always relocated to the top right corner of the screen, except on an unsupported aspect ratio, where it stays in the top right corner of the green area.

The next video shows the example in action:

UPDATE: you can download the source code to run on Eclipse from here.

Conclusion

In these two blog posts we explained, in a simplified way, how we managed to support different aspect ratios and resolutions for Clash of the Olympians. The technique is not hard to use and can be an acceptable way of handling different screen sizes for a wide range of games.

As always, we hope you liked it and that it is useful when developing your own games. Opinions and suggestions are welcome in the comments 🙂, and share the post if you think other people could benefit from this code.

Thanks for reading.


Our solution to handle multiple screen sizes in Android - Part one

Developing games for multiple devices is not an easy task. Given the variety of devices, one of the most common problems is having to handle multiple screen sizes, which means different resolutions and aspect ratios.

In this blog post we want to share what we did to minimize this problem when making Ironhide's Clash of the Olympians for Android.

In the next sections we are going to show some common ways of handling the multiple screen sizes problem, and then our own approach.

Stretching the content

One common approach when developing a game is to target a fixed resolution, for example 800x480.

Based on that, one of your game's screens could have the following layout:


Main screen of Clash of the Olympians in a 800x480 device.

Then, to support other screen sizes, the idea is to stretch the content to fill the device screen:


Main screen on a 800x600 device, stretched from 800x480.

The main problem is that the aspect ratio gets distorted, which is visually unacceptable.
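
As a reference, this is roughly what the stretch approach looks like with a LibGDX camera (a sketch, not code from the game): a camera configured for a fixed 800x480 world fills whatever the real screen is, so the image is distorted whenever the screen is not 5:3.

OrthographicCamera camera = new OrthographicCamera();
// 800x480 world units mapped to the full screen, regardless of its real size
camera.setToOrtho(false, 800, 480);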

Stretching + keeping aspect ratio

To solve part of the previous problem, one common technique is stretching while keeping the correct aspect ratio, by adding dead space to the borders of the screen so the real game area keeps the same aspect ratio on every device. For example:


Main screen in a 800x600 device with borders.


Main screen in a 854x480 device with borders.

This is an easy way to attack the multiple screen sizes problem; you can even create some nice borders instead of the black ones shown in the previous image to improve how it looks.
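
For reference, the computation behind this kind of letterboxing is roughly the following (a sketch assuming a fixed 800x480 game area, with screenWidth and screenHeight holding the device resolution; not code from the game):

// scale the 800x480 game area uniformly so it fits inside the screen,
// the remaining space becomes the dead borders
float scale = Math.min(screenWidth / 800f, screenHeight / 480f);
float gameWidth = 800f * scale;
float gameHeight = 480f * scale;
float borderX = (screenWidth - gameWidth) / 2f;   // left/right dead space
float borderY = (screenHeight - gameHeight) / 2f; // top/bottom dead space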

However, in some cases this is not acceptable either, since it doesn't look good or it feels like the game wasn't made for that device.

Our solution: Using a Virtual Viewport

Our approach consists of adapting what is shown in the game screen area to the device screen size.

First, we define a range of aspect ratios we want to support. In the case of Clash of the Olympians we defined 4:3 (800x600) and 16:9 (854x480) as our boundary aspect ratios, so every aspect ratio between those two should be supported.

Given those two aspect ratios, we defined our maximum area as 854x600 and our minimum area as 800x480 (the union and the intersection of 800x600 and 854x480, respectively). The idea is to cover the maximum area with content, but the important elements (buttons, information, etc.) must always be included in the minimum area.


The red rectangle shows the minimum area while the blue rectangle shows the maximum area.

Then, given a device resolution we calculate an area that matches the device aspect ratio and is included in the virtual area. For example, given a device with a resolution of 816x544 (4:3), this is what is shown:


The green rectangle shows the matching area for 816x544.


This is how the main screen is shown in a 816x544 device.

If the device resolution is bigger than the maximum area or smaller than the minimum area we defined, for example a screen of 480x320 (3:2), we calculate its aspect ratio and find a matching resolution for that aspect ratio inside the area we defined. In this example, one match could be 800x534, since it has a 3:2 aspect ratio and is inside our virtual area. Then we scale it down to fit the screen; the calculation is sketched after the images below.


The green rectangle shows the calculated area for a resolution of 800x534 (matching the aspect of the 480x320 device).


This is what is shown of the main screen in a 480x320 device (click to enlarge the image).
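
This is the calculation just described, written out (a sketch in which the width is anchored to the minimum supported width, giving the 800x534 match from the example):

// hypothetical 480x320 (3:2) device, supported areas from 800x480 to 854x600
float deviceWidth = 480f, deviceHeight = 320f;
float aspect = deviceWidth / deviceHeight;    // 1.5
float virtualWidth = 800f;                    // minimum supported width
float virtualHeight = virtualWidth / aspect;  // ~533.3, rounded up to 534
// 800x534 is inside the 800x480 - 854x600 area, so the game renders at
// 800x534 world units and is scaled down to the 480x320 screen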

Floating elements

For some elements of the game, such as buttons, keeping a fixed world position across different screen sizes doesn't look good, so we make them floating elements. That means they are always at the same screen position; the next images show an example with the main screen buttons:


Main screen's buttons distribution for a 854x480 device.


Main screen's buttons distribution for a 800x600 device. As you can see, buttons are relocated to match the screen size.

Finally, we want to show a video of this automatic adjustment to multiple screen sizes working in real time:


Adjusting the game to the screen size in real time.

Some limitations

As we are scaling up or down in some cases to match the corresponding screen, some devices could show a bit of blur, since we use linear filtering and the final positions of the elements after the camera transformations may not be integer values. This problem is minimized on higher density devices and with higher density assets.

Layouts could change between different devices; for example, the layout for a phone could be different from the layout for a tablet.

Text is a special case: simply downscaling it when rendering is not a correct solution, since it could become unreadable. You may have to re-layout text on lower resolution devices to show it bigger and readable.

Conclusion

If you design your game screens following this approach, it is not so hard to support multiple screen sizes in an acceptable way. However, there is still a lot of detail to take care of, like the problems we talked about in the previous section.

In the next part of this blog post we will show some code based on LibGDX for those interested in how we implemented all this.

Thanks for reading and hope you liked it.
