Playing with the StarCraft 2 Editor to understand how a good RTS is made

When working on the Iron Marines engine at work, we did some research on other RTS games in order to learn how they solved some problems and why. In this post in particular, I want to share a bit of my research on the SC2 Editor, which helped a lot when making our own editor.

The objective was to see what a Game Designer could and couldn't do with the SC2 Editor, in order to understand some decisions about the editor and the engine itself.

Obviously, a quick look at the available game mods/maps makes it clear that you can build entire games on top of the SC2 engine, but I wanted to see the basics: how to define and control the game logic.

As a side note, I have loved RTS games since I was a child. I played a lot of Dune 2 and Warcraft 1, and I remember playing with the editors of Command & Conquer and Warcraft 2 as well; it was really cool, so much power 😉 and fun. One of my brothers and I would each make a map that the other had to play and beat (we did the same with the Doom and Duke Nukem 3D editors).

SC2 Editor

SC2 maps are built with Triggers, which are composed of Events, Conditions and Actions that define parts of the game logic. There are a lot of other elements as well, which I will talk about a bit after explaining the basics.

Here is an image of the SC2 Editor with an advanced map:

Trigger logic

The Triggers are where the general map logic is defined. They are fired by Events and specify which Actions should be performed if the given Conditions are met. Even though behind the scenes the logic is C/C++ code calling functions with similar names, the Editor shows it in natural language, like "Is Any Unit of Player1 Alive?", which helps with quick reading and understanding.

This is an example of the Trigger logic of an SC2 campaign map:

Events

Events are what fire the Trigger logic; in other words, when an event happens, the logic is executed. Here is an example of an event triggered when the unit "SpecialMarine" enters the region "Region 001":

Conditions

Conditions are evaluated to decide whether the actions execute or not. Here is an example of a condition checking whether unit "BadGuy" is alive:

Actions

Actions are executed when the event happens and the conditions are met. They can be anything supported by the editor, from ordering a structure to build a unit to showing a mission objective update on screen, among other things.

This example shows an action that enqueues an attack order to unit "BadGuy" with unit "SpecialMarine" as the target, replacing any orders already enqueued on that unit. A second action after that turns off the Trigger, to avoid processing its logic again.

The idea with this approach is to build the logic in a descriptive way; the Game Designer has the tools to achieve the game experience they need. For example, if they need to make it hard to save a special unit when you reach its location, they send a wave of enemies to that point.

I said before that the editor generates C/C++ code behind the scenes. This is my example:

And this is the code generated behind the scenes for it:

Here is a screenshot of the example I made: the red guy is the SpecialMarine (controlled by the player) and the blue one is the BadGuy (controlled by the map logic). If you move your unit inside the blue region, BadGuy comes in and attacks SpecialMarine:

Even though it is really basic, download my example if you want to test it 😛.

Parameters

In order to make the Triggers work, they need values to check against, for example, "Region 001", a previously defined region, or "Any Unit of Player1". Most of the functions for Events, Conditions and Actions have parameters of a given Type, and the Editor allows the user to pick an object of that Type from different sources: a function, a preset, a variable, a value or even custom code:

Picking a Unit from the units in the map (created instances).

Picking a Unit from different functions that return a Unit.

This allows the Game Designer to adapt, in part, the logic to what is happening in the game while keeping its main structure. For example, if I need to make the structures of Player2 explode when any Unit of Player1 is in Region1, I don't care which unit, only that it is from Player1.

Game design helper elements

There are different elements that help the Game Designer when creating a map: Regions, Points, Paths and Unit Groups, among others. These elements are normally not visible to the Player, but they are really useful for the Game Designer to have more control over the logic.

As said before, the SC2 Editor is pretty complete. It allows you to do a lot of stuff, from creating custom cutscenes to overriding game data to create new units, abilities and more, but that's food for another post.

Our Editor v0.1

Our first try at creating some kind of editor for our game wasn't so successful. Without the core of the game clearly defined, we tried to create an editor with a lot of the SC2 Editor's features. We spent several days defining a lot of stuff in the abstract, but in the end we had aimed too far for a first iteration.

So, after that, we decided to start small. We started by making a way to detect events over the core of the game as it was being defined at that point. An event could be, for example, "when units enter an area" or "when a resource spot is captured by a player".

Here are some of the events of one of our maps:

Note: even though they are Events, we named them Triggers (dunno why), so an AreaTrigger is, in SC2 Editor terms, an empty Trigger with just an Event.

Events were the only thing in the editor; all the corresponding logic was done in code, in one class per map, which captured all the events and checked conditions before taking actions, normally sending enemies to attack some area.

Here is some example code for some of the previously defined events:

It wasn't a bad solution, but it had some problems:

  • The actions were separated from the level design, which hurt the iteration cycle (at some point our project needed between 10 and 15 seconds to compile in the Unity Editor).
  • Since it needed code to work, it required programming knowledge, and our team's Game Designers aren't so good with code.

Our Editor v0.2

The second (and current) version is more Game Designer friendly and closer to the SC2 Editor. Most of the logic is defined in the editor across multiple triggers, and each Trigger is defined as a hierarchy of GameObjects with specific components defining its Events, Conditions and Actions.

Here is an example of a map using the new system:

This declares, for example, trigger logic that is activated by time: it has no conditions (so it always executes when the event fires), it sends some enemies in sequence and it deactivates itself at the end.

We also created a custom Editor window to help create the trigger hierarchy and to simplify browsing the engine's Events, Conditions and Actions. Here is part of the editor showing some of the elements we have:

All those buttons automatically create the corresponding GameObject hierarchy with the proper Components to make everything work as intended. Since most of them need parameters, we use Unity's built-in feature of linking elements of a given type (a Component); for example, for the action of forcing a Capturable element to be captured by the Soldiers team, we have:

Unity allows us to pick a Capturable element (a CapturableScript in this case) from the scene. This simplifies the job of configuring the map logic a lot.
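As a rough sketch of what such an Action component could look like (an assumption for illustration, not the actual Iron Marines code; CapturableScript is stubbed here just so the example is self-contained):

using UnityEngine;

// minimal stub so the example compiles; the real component lives in the game
public class CapturableScript : MonoBehaviour {
    public void ForceCapture(string team) { /* game-specific capture logic */ }
}

// hypothetical Action component: its parameters are plain serialized fields,
// so the designer links scene objects directly in the Inspector
public class ForceCaptureAction : MonoBehaviour {

    // typed reference: the Inspector only accepts objects with a CapturableScript
    public CapturableScript target;
    public string team = "Soldiers";

    // called by the owning trigger when its event fired and its conditions passed
    public void Execute() {
        target.ForceCapture(team);
    }
}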

Common conditions include checking whether a resource spot is controlled by a given player or whether a structure is alive; common actions include sending a wave of enemy units to a given area or deactivating a trigger.

The base code is pretty simple: it mainly defines the API, while the real value of this solution is in the custom Events, Conditions and Actions.
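The post doesn't include that base code, but a minimal sketch of such an API, following the GameObject hierarchy described above (all names here are illustrative, not the actual engine API), could look like this:

using UnityEngine;

public abstract class TriggerEvent : MonoBehaviour {
    // subclasses set this when the event fires (timer elapsed, units entered an area, etc.)
    public bool Fired { get; protected set; }
    public void Consume() { Fired = false; }
}

public abstract class TriggerCondition : MonoBehaviour {
    public abstract bool Evaluate();
}

public abstract class TriggerAction : MonoBehaviour {
    public abstract void Execute();
}

// the Trigger collects the Events, Conditions and Actions from its child GameObjects
public class Trigger : MonoBehaviour {

    void Update() {
        foreach (var evt in GetComponentsInChildren<TriggerEvent>()) {
            if (!evt.Fired)
                continue;
            evt.Consume();

            // all conditions must pass for the actions to run
            foreach (var condition in GetComponentsInChildren<TriggerCondition>())
                if (!condition.Evaluate())
                    return;

            foreach (var action in GetComponentsInChildren<TriggerAction>())
                action.Execute();
        }
    }
}

With this structure, disabling a whole piece of logic is just deactivating the Trigger's GameObject, which is one of the pros listed below.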

Pros

  • Visual and more Game Designer friendly (it is easier for Programmers too).
  • Faster iteration speed: now we can change things directly in the Editor, even at runtime!
  • Easily extensible by adding more Events, Conditions and Actions, and transparent to the Game Designers, since new ones are automatically shown in our custom Editor.
  • Takes advantage of the Unity Editor for configuring stuff.
  • Easy to disable/enable logic by turning the corresponding GameObject on/off, which is good for testing something or disabling some logic for a while (for example, during in-game cinematics).
  • More control for the Game Designers: they can test and prototype stuff without asking the programming team.
  • Simplified workflow for our in-game cinematics.
  • Compatible with our first version, both can run at the same time.

Cons

  • Merging a stage is harder now that it is serialized with the Unity scene; with code we didn't have merge problems, or at least they were easier to fix. One idea to simplify this is to break the logic into parts and use prefabs for those parts, but that breaks when they have links to scene instances (which is a common case).
  • A lot of programming responsibility is transferred to the scripting team, which in this case is the Game Design team. That means possibly bad code (for example, duplicated logic), bugs (forgetting to turn off a trigger after processing its actions) and even performance issues.

Conclusion

When designing (and coding) a game, it is really important to have a good iteration cycle in every aspect of the game. Switching to a more visual solution, with all the elements at hand and avoiding code as much as we could, helped a lot with that goal.

Since what we ended up making looks similar to a scripting engine, why didn't we go with a solution like uScript or similar in the first place? The real answer is that I didn't try the other Unity scripting solutions out there in depth (I'm not so happy about that); each time I tried one a bit, it felt like too much for what we needed, and I was unsure how they would perform on mobile devices (never tested that). Also, I wasn't aware we would end up needing a scripting layer, so I preferred to evolve with our needs, from small to big.

Taking some time to research other games and play with the SC2 Editor helped me a lot when defining how our engine should work and why we should go in a given direction. There are more aspects of our game that were influenced in some way by how other RTS games do things, which I may share in the future, who knows.

I love RTS games, did I mention that before?



Using Unity Text to show numbers without garbage generation

The idea of this post is to show different ideas and analyses of how to use a Unity UI Text to show numbers without generating garbage. I need this for a framerate counter and other debug values.

Test case

Show a fixed-digit-length number on screen, regenerated each frame with a new random value.

Shows how the test scene used for all the test cases works.

Using Strings

Since strings are immutable in C#, common operations on strings generate new strings and hence allocate new heap memory. If you are using strings as temporary values, like showing a changing number in a UI Text, then that memory becomes garbage. On PC that garbage can go unnoticed, but not on mobile devices, since it can cause a hiccup when the garbage collector decides to collect it.

The idea with these tests is to try to make the label work with strings without generating garbage. To detect generated garbage I am using the Unity profiler, and I avoid ToString() on int, float, etc., in order to measure just the cost of the string manipulation for now.

String concatenation

String concatenation generates 30 Bytes per frame since internally String.Concat() calls String.InternallyAllocateStr().

It is not as bad as expected: it just creates a new string with the length of the first string plus the second and then copies their values. Obviously it becomes worse when multiple concatenations are done in sequence.

Test code:

using UnityEngine;
using UnityEngine.UI;

// test component attached to the same GameObject as the UI Text
// (the class name is illustrative; the later tests follow the same structure)
public class ConcatTest : MonoBehaviour {

    Text text;

    static readonly string[] numbers = { "0", "1", "2", "3", "4", "5", "6", "7", "8", "9" };

    void Start () {
        text = GetComponent<Text> ();
    }

    void Update () {
        string a = numbers[UnityEngine.Random.Range(0, numbers.Length)];
        string b = numbers[UnityEngine.Random.Range(0, numbers.Length)];

        text.text = a + b;
    }
}

String format

Using string.Format() generates 176 Bytes per frame; internally it uses String.FormatHelper + StringBuilder.ToString(). The first creates a new StringBuilder and the second converts the StringBuilder into a string.

Test code:

 Text text;
 
 static readonly string[] numbers = { "0", "1", "2", "3", "4", "5", "6", "7", "8", "9" };
 
 void Start () {
     text = GetComponent<Text> ();
 }
 
 void Update () {
     string a = numbers[UnityEngine.Random.Range(0, numbers.Length)];
     string b = numbers[UnityEngine.Random.Range(0, numbers.Length)];
 
     text.text = string.Format ("{0}{1}", a, b);        
     
 }

String Builder Format

Using a cached StringBuilder improves on the previous test a bit: it generates 86 Bytes per frame. AppendFormat generates garbage, and so does set_Length() (used to clear the StringBuilder).

Test code:

 Text text;
 StringBuilder stringBuilder = new StringBuilder(20, 20);
 
 static readonly string[] numbers = { "0", "1", "2", "3", "4", "5", "6", "7", "8", "9" };
 
 void Start () {
     text = GetComponent<Text> ();
     stringBuilder.Length = 3;
 }
 void Update () {
     string a = numbers[UnityEngine.Random.Range(0, numbers.Length)];
     string b = numbers[UnityEngine.Random.Range(0, numbers.Length)];
 
     stringBuilder.Length = 0;
     stringBuilder.AppendFormat ("{0}{1}", a, b);
 
     text.text = stringBuilder.ToString();
 }

Note: if I change the StringBuilder's starting capacity and max capacity, the cost is the same but moves to the ToString() method instead, which internally calls the same String.InternallyAllocateStr().

String Builder only Append

Instead of using StringBuilder.AppendFormat, I changed the test to use only StringBuilder.Append. This reduces the cost to just 30 Bytes per frame (the same as the first test); the only cost here is set_Length(), which internally calls String.InternallyAllocateStr().

Test code:

 Text text;
 StringBuilder stringBuilder = new StringBuilder(20, 20);
 
 static readonly string[] numbers = { "0", "1", "2", "3", "4", "5", "6", "7", "8", "9" };
 
 void Start () {
     text = GetComponent<Text> ();
     stringBuilder.Length = 3;
 }
 
 void Update () {
     string a = numbers[UnityEngine.Random.Range(0, numbers.Length)];
     string b = numbers[UnityEngine.Random.Range(0, numbers.Length)];
 
     stringBuilder.Length = 0;
     stringBuilder.Append (a);
     stringBuilder.Append (b);
 
     text.text = stringBuilder.ToString();
 }

Note: the behaviour is the same if I change the starting and max capacity; the cost moves to ToString() instead of set_Length().

String Builder by replacing chars

If instead of Append I replace chars directly by using [] and avoid set_Length(), the cost is the same, 30 Bytes per frame, since String.InternallyAllocateStr() is now called from set_Chars().

Test code:

 Text text;
 StringBuilder stringBuilder = new StringBuilder(20, 20);
 
 static readonly char[] numbers = { '0', '1', '2', '3', '4', '5', '6', '7', '8', '9' };
 
 void Start () {
     text = GetComponent<Text> ();
     stringBuilder.Length = 3;
 }

 void Update () {
     char a = numbers[UnityEngine.Random.Range(0, numbers.Length)];
     char b = numbers[UnityEngine.Random.Range(0, numbers.Length)];
 
     stringBuilder [0] = a;
     stringBuilder [1] = b;
 
     text.text = stringBuilder.ToString();
 }

Note: again, the behaviour is the same if I change the starting and max capacity; the cost moves from set_Chars() to the ToString() method.

String Builder, access internal string by reflection

There is a suggestion in this post to access, by reflection, the _str field of the StringBuilder class, to avoid the cost of the ToString() method.

Test code:

 Text text;
 StringBuilder stringBuilder = new StringBuilder(20, 20);

 static System.Reflection.FieldInfo _sb_str_info = 
        typeof(StringBuilder).GetField("_str", 
        System.Reflection.BindingFlags.NonPublic | 
        System.Reflection.BindingFlags.Instance);
 
 static readonly char[] numbers = { '0', '1', '2', '3', '4', '5', '6', '7', '8', '9' };
 
 void Start () {
     stringBuilder.Length = 3;
 
     text = GetComponent<Text> ();
 }
 
 void Update () {
     stringBuilder[0] = numbers[UnityEngine.Random.Range(0, numbers.Length)];
     stringBuilder[1] = numbers[UnityEngine.Random.Range(0, numbers.Length)];
     stringBuilder[2] = (char) 0;
 
     var internalValue = _sb_str_info.GetValue (stringBuilder) as string;
     text.text = internalValue;
 }

In this case, there is no garbage at all. However, I see no change in the UI Text even though the editor shows the text field value changing, as if it is not being redrawn on screen. I suppose that could be because the string pointer is not changing; taking a look at the Text code from the Unity UI, it compares with != instead of Equals... not sure here.

 public virtual string text
 {
     get
     {
         return m_Text;
     }
     set
     {
         if (String.IsNullOrEmpty(value))
         {
             if (String.IsNullOrEmpty(m_Text))
                 return;
             m_Text = "";
             SetVerticesDirty();
         }
         else if (m_Text != value)
         {
             m_Text = value;
             SetVerticesDirty();
             SetLayoutDirty();
         }
     }
 }

I tried forcing the layout and vertices dirty after updating the internal string, just in case, but had no luck (sad face).

Caching strings

Another option, suggested in this blog post, is to precache strings for the different numbers, but that is only reasonable for a small number of digits. I like it because it is simple, it can be generated at runtime, and it works well for debug numbers like FPS, where the value is normally between 0 and 60.

I tried it: it works really well and generates 0 Bytes per frame.

Test code:

 Text text;
 
 string[] generated;
 
 // Use this for initialization
 void Start () {
     text = GetComponent<Text> ();
 
     generated = new string[100];
 
     // should go from 0 to 99.
     for (int i = 0; i < 100; i++) {
         generated [i] = string.Format ("{0:00}", i);
     }
 }
 
 // Update is called once per frame
 void Update () {
     int random = UnityEngine.Random.Range (0, generated.Length);
     text.text = generated [random];
 }
 

Rendering numbers directly

One possible way to avoid all this garbage (I mean both the code and the unused memory) is to not use strings at all but to just render to the screen images for each number digit, where each digit is a different sprite.

When making TinyWarriors prototype I did a basic number rendering where I could specify the number of digits and it just created multiple Unity UI Images inside a horizontal layout.

Shows a test using images for each digit instead of a text.

Test code:

 public Image[] numbers;
 
 // in order, like 0, 1, 2, ..., 9
 public Sprite[] numberSprites;
 
 public bool fillZero = true;
 
 void Start()
 {
     SetNumber (0);
 }
 
 public void SetNumber(int number)
 {
     int tens = (number % 100) / 10;
     int ones = (number % 10);
 
     var tensActive = fillZero || tens != 0;
     var onesActive = fillZero || number > 0;
 
     numbers [0].gameObject.SetActive (tensActive);
     numbers [1].gameObject.SetActive (onesActive);
 
     if (tensActive)
         numbers [0].sprite = numberSprites [tens];
 
     if (onesActive)
         numbers [1].sprite = numberSprites [ones];
 }
 
 public void Update()
 {
     int random = UnityEngine.Random.Range (0, 100);
     SetNumber (random);
 }

The code could be adapted to support more digits. When profiling it in the editor there is a lot of garbage generation, around 1KB per frame, in Canvas.SendWillRenderCanvases(), because a material rebuild is forced each time a sprite is changed. However, I tested it on devices and it doesn't happen there, so it must be something related to the Unity editor.

Other strategies

Other strategies focus on minimizing garbage generation by reducing the text update frequency, for example by not updating the text if the number didn't change, and/or by updating the text from time to time instead of every frame.
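As a sketch of that last idea combined with the cached strings from above (same fragment style as the other tests), updating the label only when the value changes generates nothing on unchanged frames:

 Text text;
 string[] generated;
 int lastValue = -1;

 void Start () {
     text = GetComponent<Text> ();

     generated = new string[100];
     for (int i = 0; i < 100; i++)
         generated [i] = string.Format ("{0:00}", i);
 }

 void Update () {
     int current = UnityEngine.Random.Range (0, generated.Length);

     // only touch the UI Text when the value actually changed, so unchanged
     // frames generate no garbage and trigger no canvas rebuild
     if (current == lastValue)
         return;

     lastValue = current;
     text.text = generated [current];
 }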

Conclusion

Since I just wanted a solution for a framerate counter (and other debug numbers), the last solutions are perfect, and I believe they could even be extrapolated to other game needs, like showing the player's points in an arcade game, with a bit of extra thinking.

References

Here is a list of the articles, forum threads and blog posts I looked at during the tests and while writing this post.

Unity memory optimizations article - https://unity3d.com/es/learn/tutorials/temas/performance-optimization/optimizing-garbage-collection-unity-games

Memory management reference - http://www.memorymanagement.org/

FPS implementation caching strings - http://catlikecoding.com/unity/tutorials/frames-per-second/

Using reflection to set StringBuilder string to avoid garbage - http://www.defectivestudios.com/devblog/garbage-free-string-unity/

FPS Asset - http://blog.codestage.ru/unity-plugins/fps/

Another FPS Asset - https://www.assetstore.unity3d.com/en/#!/content/6513

StringBuilder API - https://msdn.microsoft.com/en-us/library/system.text.stringbuilder(v=vs.110).aspx

Performance tips for Unity for mobile - https://divillysausages.com/2016/01/21/performance-tips-for-unity-2d-mobile/

Unity UI Source code - https://bitbucket.org/Unity-Technologies/ui

Unity Community Library - https://github.com/UnityCommunity/UnityLibrary



Making mockups and prototypes to minimize problems

I'm not inventing anything new here; I just want to share how making mockups and prototypes helped me clarify and minimize some problems, and in some cases even solve them at almost no cost.

For prototypes and mockups I'm using the Superpower Assets Pack from Sparklin Labs, which gave me a great way to start visualizing a possible game. Thank you for that, guys.

I will start by talking about how I used visual mockups to quickly iterate, multiple times, over the layout of my game's user interface, removing or reducing a lot of unknowns and possible problems.

After that, I will talk about making quick, small prototypes to validate ideas. One of them is about performing player actions with a small delay (to simulate network latency), and the other is about how each player can have a different view of the same game world.

UI mockups

For the game I'm making, the player's actions were basically clear, but I didn't know exactly how the UI was going to be, and considering my limited experience making UIs, having a good UI solution is a big challenge.

In the current game prototype iteration, the players have only four actions: build a unit, build barracks, build houses and send all units to attack the other player. At the same time, to perform those actions, they need to know how much money they have, the available unit slots and how much each action costs.

To start solving this problem, I iterated quickly through several mockups, made directly in a Unity scene, using a game scene as background to test each possible UI problem. For each iteration I deployed it to the phone and "tested it", detecting problems early, like "the buttons are too small" or "I can't see the money because I'm covering it with my fingers", etc.

Why did I use Unity when I could do it with any image editing application and just upload the image to the phone? Well, that's a good question. One answer is that I am more used to doing all this stuff in Unity and I already had the template scenes. The other is that I was testing, at the same time, whether the Unity UI solution supported what I was looking for, and I could even start testing interaction feedback, like how a button reacts when touched, or whether the money turns red when you don't have enough, things I could not test with images alone.

The following gallery shows screenshots of different iterations where I tested button positions, sizes, information and support for possible future player actions. I will not go into detail here because I don't remember the exact order of the tests, but you can get an idea by looking at the images.

It took me less than two hours to go through more than ten iterations, even testing visual feedback, after discovering while testing that the player should quickly know when an action is disabled because of money restrictions or because no unit slots are available, etc. I even had to consider changing the scale of the game world to reserve more empty space for the UI.

Player actions through delayed network

When making networked games, one possible issue in my mind was that the player should receive feedback instantly even though the real action could be delayed a bit to be processed on the server. For a move unit action in an RTS, the feedback could just be an animation showing the move destination, with the action processed later; but when the action consumes a resource, that could be a little tricky, or at least I wasn't sure, so I decided to make a quick test for it.

Similar to the mockups, I created a Unity scene in a separate project; I wanted to iterate really fast on this one. The idea to test was to process part of the action on the client side, validating the preconditions (enough money) and giving the player instant feedback, and then process the action when it really should be processed.

After analyzing it a bit, my main concern was the player experience of executing an action and receiving instant feedback while the action is actually processed later, so I didn't need any networking-related code; I could test everything locally.

The test consisted of building white boxes with the right mouse button; each box costs $20 and you start with $100. The idea is that the moment the button is pressed, a white box with half opacity appears, giving the idea that the action was accepted, and $20 is consumed, so you can't perform another action that needs more money than you have left. After a while, the white box is actually built and the preview disappears.
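A minimal sketch of that flow (an assumption of how the test could be written, not the actual prototype code; CreateBoxPreview and CreateBox are hypothetical helpers):

using System.Collections;
using UnityEngine;

public class DelayedBuildTest : MonoBehaviour {

    int money = 100;
    const int boxCost = 20;
    public float processDelay = 1f; // simulated server processing delay

    void Update () {
        // right mouse button, plus the client-side precondition: enough money
        if (Input.GetMouseButtonDown (1) && money >= boxCost) {
            money -= boxCost; // instant feedback: consume the money right away
            StartCoroutine (BuildBox (Camera.main.ScreenToWorldPoint (Input.mousePosition)));
        }
    }

    IEnumerator BuildBox (Vector3 position) {
        var preview = CreateBoxPreview (position); // half-opacity preview appears immediately
        yield return new WaitForSeconds (processDelay); // the action itself is processed later
        Destroy (preview);
        CreateBox (position); // the real, fully opaque box
    }

    GameObject CreateBoxPreview (Vector3 position) { /* hypothetical helper */ return null; }
    void CreateBox (Vector3 position) { /* hypothetical helper */ }
}

The cancellation case described below would just be another branch: instead of building the box, return the $20 and destroy the preview.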

Here is a video showing it in action:

With a server validating the action, it would work similarly; the only difference is that the server could fail to validate the action (for example, the other player stole the money first), and in that case the player's action has to be cancelled. So the next test was to handle that case (visually), to see how it looks and feels. The idea was similar to the previous case, but after a while the game returns the money and the box preview disappears.

Here is a video showing this case in action:

It shouldn't be a common case, but this is one idea of how it could be solved, and I don't think it is a bad solution.

Different views of the same game world

The problem I want to solve here is how each player will see the world. Since I can't have a different world for each player, the idea is to have different views of the same world. In 3D games, having different cameras should do the trick (I suppose), but I wasn't sure whether it worked the same way for a 2D game, so I had to make sure by building a prototype.

One thing to consider is that, in the case of the UI, each player should see their own actions in the same position.

For this prototype, I used the same scene background used for the mockups, but in this case I created two cameras, one of them rotated 180 degrees to show the opposite view:

player0 (player 1 view)

player1 (player 2 view)

Since the UI should be unique to each player, I configured one canvas per camera and used culling masks to show one canvas or the other.
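A small sketch of that configuration (layer names are made up for the example; the canvases are assumed to be placed in the "UIPlayer1" and "UIPlayer2" layers beforehand):

using UnityEngine;

public class SplitViewSetup : MonoBehaviour {

    public Camera player1Camera;
    public Camera player2Camera;
    public Canvas player1Canvas;
    public Canvas player2Canvas;

    void Start () {
        // the second camera is rotated 180 degrees to show the opposite view of the same world
        player2Camera.transform.rotation = Quaternion.Euler (0f, 0f, 180f);

        // each canvas is rendered by its own player's camera
        player1Canvas.renderMode = RenderMode.ScreenSpaceCamera;
        player1Canvas.worldCamera = player1Camera;
        player2Canvas.renderMode = RenderMode.ScreenSpaceCamera;
        player2Canvas.worldCamera = player2Camera;

        // each camera culls the other player's UI layer
        player1Camera.cullingMask &= ~LayerMask.GetMask ("UIPlayer2");
        player2Camera.cullingMask &= ~LayerMask.GetMask ("UIPlayer1");
    }
}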

Again, this test was really simple and quick; I believe I spent about 30 minutes on it. The important thing is that now I know this is a possible (and probably the) solution to this problem, while before the test I wasn't sure whether it was a hard problem or not.

Conclusions

One good thing about making prototypes is that you can take a lot of shortcuts and assume stuff, since the code and assets are not going to end up in the game; that gives you really fast iteration times, as well as focus on one problem at a time. For example, for the mockups I put all the assets in one folder and used the Unity sprite packer, without spending time on which texture format to use for each mobile platform, whether all the assets fit in one texture, or stuff like that.

Making quick prototypes of things you don't know how to solve, early, gives you a better view of the scope of each problem, and it is better to know that as soon as possible. Sometimes you get a lead on how to solve it and how expensive the solution could be, which gives you a good idea of when you have to attack that problem, or whether you want to at all (you might, for example, drop a specific feature). If you can't figure out possible solutions even after making prototypes, then that feature is even harder than you thought.

Prototyping is cheap and fun, because it provides a creative platform where the focus is on solving one problem without a lot of restrictions. That allows developing multiple solutions, and since it is a creative process, everyone in game development (game designers, programmers, artists, etc.) can participate.



A basic analysis of Clash Royale multiplayer solution

The idea of this post is to analyze, in a superficial way, the multiplayer solution behind Clash Royale. Before starting the analysis, you must know that I don't have previous experience making multiplayer games, I am just learning, and all the analysis done here is just a theory.

So, you surely know about Clash Royale, but in case you don't, it is an online multiplayer 1v1 RTS game where the first player to destroy the other's towers wins. To do so, players play cards which transform into units that advance and attack enemy towers, structures that spawn units or attack enemy units, or special powers that deal damage, among other things. Cards cost energy, which regenerates over time (up to a maximum). I recommend it: it is a great game, and it is really, really polished in every detail (including multiplayer).

Here is a video explaining and showing the game:

Analysis

Since the game is very competitive, they probably have an authoritative server validating player actions to avoid cheating. For example, a player could say "I played card X" to the server; the server has to validate that the player had that card in hand, and enough energy, to use it.

In each game there can be several units alive at the same time (say 30 in the worst case), and there are tons of games being played simultaneously. In order to make the game run over mobile networks, and to support all those games with all those units at the same time, they have to reduce the bandwidth to a minimum.

One strategy could be to compress the data sent; another could be to send data less frequently and interpolate in between to look as smooth as possible. However, considering that each unit has a position, looking direction, target, health, animation frame and other state, that could still be a lot of data. My guess is that they follow another approach: a synchronized simulation of the game on each client. Since the game logic is not so CPU heavy, no mobile device of the time (the game was released in March 2016) should have problems running it.

Another thing that made me think of a synchronized simulation is that player actions are not performed instantly; each has a small delay or cast time. That could be a design choice, but I believe it reflects the fact that actions must be synchronized between players, and having a delay gives them time to do that.

If they are simulating on the client, they have to keep the simulation deterministic, or they must have some way to fix the game state if it desynchronizes at some point.

If I remember correctly, the game never stops: players can be disconnected for a while and then reconnect and continue playing; they just lose part of the game (they couldn't perform actions meanwhile). The same mechanism could be used for player desynchronization. I don't know the resynchronization strategy, but maybe the server sends a game state snapshot and the player continues from there.

They even have game replays, so I am guessing that simulating the game on each device also reduces the cost of watching a replay, even though they could replay it on a server if they wanted, since they probably have tons of servers :).

Conclusion

My guess is that the game has a client/server architecture where both the server and the client simulate the game synchronously. The server is in charge of validating player actions and is responsible for deciding the real game state in case of desynchronization.

As I said at the beginning of the blog post, this is a superficial analysis based on my current knowledge. For a deeper analysis I could have taken another approach, like doing some reverse engineering over the game's connections to validate some of my guesses, but that wasn't the purpose of this post.

And here is a really fun video of the game to finish the post:


Our solution to handle multiple screen sizes in Android – Part three

In the previous posts of this series we talked about our solution for handling multiple screen sizes in game menus; in particular, we showed the main menu of the game Clash of the Olympians. In this post we are going to talk about what we did inside the game itself. As a side note, the solution we used here is simple and specific to this game; we hope it helps as an example, but don't expect a silver bullet.

Scaling to match the physics world

As we use Box2D in Clash of the Olympians, the first step was to use a proper scale between the Box2D bodies and our assets. The basic approach was to consider that 1m (one meter in the MKS system) was 32px, so our target resolution of 800x480 could show 25m x 15m. We picked that scale because it gives pretty numbers both in terms of the game area and of our assets; for example, a character 64px high is 2m tall. In particular, Achilles has a height of approximately 60px, which is equivalent to 1.875m using our scale, and that sounds pretty reasonable for that character.

clashoftheolympians-800x480
The image shows the relation between screen size in pixels (800x480 in this case) and the game world in meters.

Defining a virtual area to show

We previously said that we could show 25m x 15m; in fact, the height is not so important in Clash of the Olympians, since the game mainly depends on horizontal distance. On an imaginary device with a resolution of 800x400 (really wide, an aspect ratio of 2) we would show 12.5m of height, and we can assume that as long as we show at least that height, the game balance is not affected at all (enemies are never spawned too high). In terms of horizontal distance, however, we want to always show the same area across all devices to avoid changing the game balance (for example, if you could see less area you couldn't react in time to some waves); that is why we decided to always show 25m of width.

clashoftheolympians-800x600

The image shows how we still show the same game world width of 25m on a 800x600 device.

Scaling the world back to match the screen size

Finally, in order to show this virtual area of 25m x H (with H >= 12.5m), we have to calculate the proper scale for our game camera on each device. For example, on a Nexus 7 (a 1280x720 device) the scale to show 25m of horizontal size is 51.2x: since 1280 / scale = 25, we get scale = 1280 / 25 = 51.2. On a Samsung Galaxy Y (a 480x320 device) the scale would be 19.2x, since 480 / 25 = 19.2. Translating this into the game is as easy as:

camera.scale = screen.width / 25

Final thoughts

This is not a general solution; it depends a lot on the game we were making and on the things we could assume, like the game height not mattering.

Even though this solution is specific and not as cool as the ones in the previous posts, we hope it helps when making your own game.


Our solution to handle multiple screen sizes in Android – Part two

Continuing the previous blog post, in this post we are going to talk about the code behind the theory. It consists of three concepts: the VirtualViewport, the OrthographicCameraWithVirtualViewport and the MultipleVirtualViewportBuilder.

VirtualViewport

It defines a virtual area where the game content is contained and provides a way to get the real width and height a camera must use in order to always show that whole virtual area. Here is the code of this class:

public class VirtualViewport {

	float virtualWidth;
	float virtualHeight;

	public float getVirtualWidth() {
		return virtualWidth;
	}

	public float getVirtualHeight() {
		return virtualHeight;
	}

	public VirtualViewport(float virtualWidth, float virtualHeight) {
		this(virtualWidth, virtualHeight, false);
	}

	public VirtualViewport(float virtualWidth, float virtualHeight, boolean shrink) {
		this.virtualWidth = virtualWidth;
		this.virtualHeight = virtualHeight;
	}

	public float getWidth() {
		return getWidth(Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
	}

	public float getHeight() {
		return getHeight(Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
	}

	/**
	 * Returns the view port width to let all the virtual view port to be shown on the screen.
	 * 
	 * @param screenWidth
	 *            The screen width.
	 * @param screenHeight
	 *            The screen Height.
	 */
	public float getWidth(float screenWidth, float screenHeight) {
		float virtualAspect = virtualWidth / virtualHeight;
		float aspect = screenWidth / screenHeight;
		if (aspect > virtualAspect || (Math.abs(aspect - virtualAspect) < 0.01f)) {
			return virtualHeight * aspect;
		} else {
			return virtualWidth;
		}
	}

	/**
	 * Returns the view port height to let all the virtual view port to be shown on the screen.
	 * 
	 * @param screenWidth
	 *            The screen width.
	 * @param screenHeight
	 *            The screen Height.
	 */
	public float getHeight(float screenWidth, float screenHeight) {
		float virtualAspect = virtualWidth / virtualHeight;
		float aspect = screenWidth / screenHeight;
		if (aspect > virtualAspect || (Math.abs(aspect - virtualAspect) < 0.01f)) {
			return virtualHeight;
		} else {
			return virtualWidth / aspect;
		}
	}

}

So, if we have a virtual area of 640x480 and want to show it on a screen of 800x480, we can follow the next steps to get the proper values to use as the camera viewport for that screen:

VirtualViewport virtualViewport = new VirtualViewport(640, 480);
float realViewportWidth = virtualViewport.getWidth(800, 480);
float realViewportHeight = virtualViewport.getHeight(800, 480);
// now set the camera viewport values
camera.setViewportFor(realViewportWidth, realViewportHeight);

OrthographicCameraWithVirtualViewport

In order to simplify the work when using the LibGDX library, we created a subclass of LibGDX's OrthographicCamera with specific behavior to update the camera viewport using the VirtualViewport values. Here is its code:

public class OrthographicCameraWithVirtualViewport extends OrthographicCamera {

	Vector3 tmp = new Vector3();
	Vector2 origin = new Vector2();
	VirtualViewport virtualViewport;
	
	public void setVirtualViewport(VirtualViewport virtualViewport) {
		this.virtualViewport = virtualViewport;
	}

	public OrthographicCameraWithVirtualViewport(VirtualViewport virtualViewport) {
		this(virtualViewport, 0f, 0f);
	}

	public OrthographicCameraWithVirtualViewport(VirtualViewport virtualViewport, float cx, float cy) {
		this.virtualViewport = virtualViewport;
		this.origin.set(cx, cy);
	}

	public void setPosition(float x, float y) {
		position.set(x - viewportWidth * origin.x, y - viewportHeight * origin.y, 0f);
	}

	@Override
	public void update() {
		float left = zoom * -viewportWidth / 2 + virtualViewport.getVirtualWidth() * origin.x;
		float right = zoom * viewportWidth / 2 + virtualViewport.getVirtualWidth() * origin.x;
		float top = zoom * viewportHeight / 2 + virtualViewport.getVirtualHeight() * origin.y;
		float bottom = zoom * -viewportHeight / 2 + virtualViewport.getVirtualHeight() * origin.y;

		projection.setToOrtho(left, right, bottom, top, Math.abs(near), Math.abs(far));
		view.setToLookAt(position, tmp.set(position).add(direction), up);
		combined.set(projection);
		Matrix4.mul(combined.val, view.val);
		invProjectionView.set(combined);
		Matrix4.inv(invProjectionView.val);
		frustum.update(invProjectionView);
	}

	/**
	 * This must be called in ApplicationListener.resize() in order to correctly update the camera viewport. 
	 */
	public void updateViewport() {
		setToOrtho(false, virtualViewport.getWidth(), virtualViewport.getHeight());
	}
}

MultipleVirtualViewportBuilder

This class builds a suitable VirtualViewport given the minimum and maximum areas we want to support, performing the logic we explained in the previous post. For example, with a minimum area of 800x480 and a maximum area of 854x600, given a 480x320 (3:2) device it returns a VirtualViewport of 854x570 (854 / 1.5 is about 570), a good match: it contains the minimum area, fits inside the maximum area and has the same aspect ratio as 480x320.

public class MultipleVirtualViewportBuilder {

	private final float minWidth;
	private final float minHeight;
	private final float maxWidth;
	private final float maxHeight;

	public MultipleVirtualViewportBuilder(float minWidth, float minHeight, float maxWidth, float maxHeight) {
		this.minWidth = minWidth;
		this.minHeight = minHeight;
		this.maxWidth = maxWidth;
		this.maxHeight = maxHeight;
	}

	public VirtualViewport getVirtualViewport(float width, float height) {
		if (width >= minWidth && width <= maxWidth && height >= minHeight && height <= maxHeight)
			return new VirtualViewport(width, height, true);

		float aspect = width / height;

		float scaleForMinSize = minWidth / width;
		float scaleForMaxSize = maxWidth / width;

		float virtualViewportWidth = width * scaleForMaxSize;
		float virtualViewportHeight = virtualViewportWidth / aspect;

		if (insideBounds(virtualViewportWidth, virtualViewportHeight))
			return new VirtualViewport(virtualViewportWidth, virtualViewportHeight, false);

		virtualViewportWidth = width * scaleForMinSize;
		virtualViewportHeight = virtualViewportWidth / aspect;

		if (insideBounds(virtualViewportWidth, virtualViewportHeight))
			return new VirtualViewport(virtualViewportWidth, virtualViewportHeight, false);
		
		return new VirtualViewport(minWidth, minHeight, true);
	}
	
	private boolean insideBounds(float width, float height) {
		if (width < minWidth || width > maxWidth)
			return false;
		if (height < minHeight || height > maxHeight)
			return false;
		return true;
	}

}

If the aspect ratio is not supported at all, it returns the minimum area.

Floating elements

As we explained in the previous post, there are some cases where we need stuff that should always stay at a fixed position on the screen, for example the audio and music buttons in Clash of the Olympians. To achieve that, we make the positions of those buttons depend on the VirtualViewport. The next section, where we explain how to use everything together, shows an example of a floating element.

Using the code together

Finally, here is an example showing how to use these concepts in a LibGDX application:

public class VirtualViewportExampleMain extends com.badlogic.gdx.Game {

	private OrthographicCameraWithVirtualViewport camera;
	
	// extra stuff for the example
	private SpriteBatch spriteBatch;
	private Sprite minimumAreaSprite;
	private Sprite maximumAreaSprite;
	private Sprite floatingButtonSprite;
	private BitmapFont font;

	private MultipleVirtualViewportBuilder multipleVirtualViewportBuilder;

	@Override
	public void create() {
		multipleVirtualViewportBuilder = new MultipleVirtualViewportBuilder(800, 480, 854, 600);
		VirtualViewport virtualViewport = multipleVirtualViewportBuilder.getVirtualViewport(Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
		
		camera = new OrthographicCameraWithVirtualViewport(virtualViewport);
		// centers the camera at 0, 0 (the center of the virtual viewport)
		camera.position.set(0f, 0f, 0f);
		
		// extra code
		spriteBatch = new SpriteBatch();
		
		Pixmap pixmap = new Pixmap(64, 64, Format.RGBA8888);
		pixmap.setColor(Color.WHITE);
		pixmap.fillRectangle(0, 0, 64, 64);
		
		minimumAreaSprite = new Sprite(new Texture(pixmap));
		minimumAreaSprite.setPosition(-400, -240);
		minimumAreaSprite.setSize(800, 480);
		minimumAreaSprite.setColor(0f, 1f, 0f, 1f);
		
		maximumAreaSprite = new Sprite(new Texture(pixmap));
		maximumAreaSprite.setPosition(-427, -300);
		maximumAreaSprite.setSize(854, 600);
		maximumAreaSprite.setColor(1f, 1f, 0f, 1f);
		
		floatingButtonSprite = new Sprite(new Texture(pixmap));
		floatingButtonSprite.setPosition(virtualViewport.getVirtualWidth() * 0.5f - 80, virtualViewport.getVirtualHeight() * 0.5f - 80);
		floatingButtonSprite.setSize(64, 64);
		floatingButtonSprite.setColor(1f, 1f, 1f, 1f);
		
		font = new BitmapFont();
		font.setColor(Color.BLACK);
	}
	
	@Override
	public void resize(int width, int height) {
		super.resize(width, height);
		
		VirtualViewport virtualViewport = multipleVirtualViewportBuilder.getVirtualViewport(Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
		camera.setVirtualViewport(virtualViewport);
		
		camera.updateViewport();
		// centers the camera at 0, 0 (the center of the virtual viewport)
		camera.position.set(0f, 0f, 0f);
		
		// relocate floating stuff
		floatingButtonSprite.setPosition(virtualViewport.getVirtualWidth() * 0.5f - 80, virtualViewport.getVirtualHeight() * 0.5f - 80);
	}
	
	@Override
	public void render() {
		super.render();
		Gdx.gl.glClearColor(1f, 0f, 0f, 1f);
		Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
		camera.update();
		
		// render stuff...
		spriteBatch.setProjectionMatrix(camera.combined);
		spriteBatch.begin();
		maximumAreaSprite.draw(spriteBatch);
		minimumAreaSprite.draw(spriteBatch);
		floatingButtonSprite.draw(spriteBatch);
		font.draw(spriteBatch, String.format("%1$sx%2$s", Gdx.graphics.getWidth(), Gdx.graphics.getHeight()), -20, 0);
		spriteBatch.end();
	}

	public static void main(String[] args) {
		LwjglApplicationConfiguration config = new LwjglApplicationConfiguration();

		config.title = VirtualViewportExampleMain.class.getName();
		config.width = 800;
		config.height = 480;
		config.fullscreen = false;
		config.useGL20 = true;
		config.useCPUSynch = true;
		config.forceExit = true;
		config.vSyncEnabled = true;

		new LwjglApplication(new VirtualViewportExampleMain(), config);
	}

}

In the example there are three colors: green represents the minimum supported area, yellow the maximum supported area, and red the area outside. If we see red, it means that aspect ratio is not supported. There is a floating element, colored white, which is always relocated to the top right corner of the screen, unless we are on an unsupported aspect ratio, in which case it is just located at the top right corner of the green area.

The next video shows the example in action:

UPDATE: you can download the source code to run on Eclipse from here.

Conclusion

In these two blog posts we explained, in a simplified way, how we managed to support different aspect ratios and resolutions for Clash of the Olympians, a technique that could be an acceptable way of handling different screen sizes for a wide range of games, and that is not hard to use.

As always, we hope you liked it and that it proves useful when developing your games. Opinions and suggestions are always welcome in the comments 🙂, and share it if you liked it and think other people could benefit from this code.

Thanks for reading.


Our solution to handle multiple screen sizes in Android - Part one

Developing games for multiple devices is not an easy task. Given the variety of devices, one of the most common problems is having to handle multiple screen sizes, which means different resolutions and aspect ratios.

In this blog post we want to share what we did to minimize this problem when making Ironhide's Clash of the Olympians for Android.

In the next sections we are going to show some common ways of handling the multiple screens problem and then our way.

Stretching the content

One common approach when developing a game is making the game for a fixed resolution, for example, making the game for 800x480.

Based on that, you might have the following layout in one of your game's screens:


Main screen of Clash of the Olympians in a 800x480 device.

Then, to support other screen sizes, the idea is to stretch the content to the other device's screen:


Main screen on a 800x600 device, stretched from 800x480.

The main problem is that the aspect ratio is affected and that is visually unacceptable.

Stretching + keeping aspect ratio

To solve part of the previous problem, one common technique is stretching while keeping the correct aspect ratio, adding dead space at the borders of the screen so that the real game area has the same aspect ratio on different devices. For example:


Main screen in a 800x600 device with borders.


Main screen in a 854x480 device with borders.

This is an easy way to attack the multiple screen size problem; you can even create some nice borders instead of the black borders shown in the previous image to improve how it looks.

However, in some cases this is not acceptable either since it doesn't look so good or it feels like the game wasn't made for that device.

Our solution: Using a Virtual Viewport

Our approach consists of adapting what is shown in the game screen area to the device screen size.

First, we define a range of aspect ratios we want to support. In the case of Clash we defined 4:3 (800x600) and 16:9 (854x480) as our border-case aspect ratios, so all aspect ratios between those two should be supported.

Given those two aspect ratios, we defined our maximum area as 854x600 and our minimum area as 800x480 (the union and the intersection of 800x600 and 854x480, respectively). The idea is to cover the maximum area with stuff, but the important stuff (buttons, information, etc.) should always be inside the minimum area.


The red rectangle shows the minimum area while the blue rectangle shows the maximum area.

Then, given a device resolution, we calculate an area that matches the device aspect ratio and is included in the virtual area. For example, given a device with a resolution of 816x544 (3:2), this is what is shown:


The green rectangle shows the matching area for 816x544.


This is how the main screen is shown in a 816x544 device.

If we are on a resolution bigger than the maximum area or smaller than the minimum area, for example a screen of 480x320 (3:2), we calculate the aspect ratio and find a corresponding match for it inside the area range we defined. In this example, one match is 800x534, since it has a 3:2 aspect ratio and is inside our virtual area; we then scale it down to fit the screen.


The green rectangle shows the calculated area for a resolution of 800x534 (matching the aspect of the 480x320 device).


This is what is shown of the main screen in a 480x320 device (click to enlarge the image).

Floating elements

For some elements of the game, such as buttons, maintaining a fixed world position across different screen sizes doesn't look good, so we make them floating elements. That means they are always at the same screen position; the next images show an example with the main screen's buttons:


Main screen's buttons distribution for a 854x480 device.


Main screen's buttons distribution for a 800x600 device. As you can see, buttons are relocated to match the screen size.

Finally, we want to show a video of this multiple screen sizes auto-adjustment working in real time:


Adjusting the game to the screen size in real time.

Some limitations

As we are scaling up/down in some cases to match the corresponding screen, some devices may show some blur, since we are using linear filtering and the final positions of the elements after the camera transformations may not be integer positions. This problem is minimized with higher density devices and assets.

Layouts may also need to change between different devices; for example, the layout for a phone could be different from the layout for a tablet device.

Text is a special case: when rendering text, just downscaling it is not a correct solution, since it could become unreadable. You may have to re-layout text for lower resolution devices to show it bigger and readable.

Conclusion

If you design your game screens following this approach, it is not so hard to support multiple screen sizes in an acceptable way. However, there is still a lot of detail to take care of, like the problems we talked about in the previous section.

In the next part of this blog post we will show some code based on LibGDX for those interested in how we implemented all this.

Thanks for reading and hope you liked it.


Drawing a projectile trajectory like Angry Birds using LibGDX

We had to implement a projectile trajectory like the one in Angry Birds for our current game, and we wanted to share a bit of how we did it.

Introduction

In Angry Birds, the trajectory is drawn after you fire a bird, showing its path to help you decide the next shot. Knowing the trajectory of the current projectile wasn't strictly needed in that version of the game, since you have the slingshot and that tells you, in part, where the current bird is going.

In Angry Birds Space, they changed it to show the trajectory of the current bird because the game mechanics changed: birds now fly differently depending on the gravity of the planets, so the slingshot doesn't tell you the real direction anymore. That was the correct change to help the player with the new rules.

We wanted to test how drawing a trajectory for the next shot, like Angry Birds Space does, could help the player.

Calculating the trajectory

The first step is to calculate the function f(t) for the projectile trajectory. In our case, projectiles behave normally (there are no mini planets), so the formula is the simple one:
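x(t) = x0 + vx * t
y(t) = y0 + vy * t + (g * t^2) / 2

where (x0, y0) is the start point, (vx, vy) the start velocity and g the gravity; these map directly to the fields of the class below.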

We found an implementation of the equation on Stack Overflow; here is the code:

class ProjectileEquation {

	public float gravity;
	public Vector2 startVelocity = new Vector2();
	public Vector2 startPoint = new Vector2();

	public float getX(float t) {
		return startVelocity.x * t + startPoint.x;
	}

	public float getY(float t) {
		return 0.5f * gravity * t * t + startVelocity.y * t + startPoint.y;
	}

}

With that class we have an easy way to calculate x and y coordinates given the time.

Drawing it to the screen

If we follow an approach similar to Angry Birds, we can draw colored points along the projectile trajectory.

In our case, we created a LibGDX Actor dedicated to drawing the trajectory of the projectile. It first calculates the trajectory using the previous class and then renders it using a Sprite, drawn once for each point of the trajectory with SpriteBatch's draw method. Here is the code:

public static class Controller {

	public float power = 50f; // magnitude of the launch velocity
	public float angle = 0f;  // launch angle in degrees

}

public static class TrajectoryActor extends Actor {

	private Controller controller;
	private ProjectileEquation projectileEquation;
	private Sprite trajectorySprite;

	public int trajectoryPointCount = 30; // how many points of the trajectory to draw
	public float timeSeparation = 1f;     // time step between consecutive points

	public TrajectoryActor(Controller controller, float gravity, Sprite trajectorySprite) {
		this.controller = controller;
		this.trajectorySprite = trajectorySprite;
		this.projectileEquation = new ProjectileEquation();
		this.projectileEquation.gravity = gravity;
	}

	@Override
	public void act(float delta) {
		super.act(delta);
		// rebuild the start velocity from the shared controller values
		projectileEquation.startVelocity.set(controller.power, 0f);
		projectileEquation.startVelocity.rotate(controller.angle);
	}

	@Override
	public void draw(SpriteBatch batch, float parentAlpha) {
		float t = 0f;
		float width = this.width;
		float height = this.height;

		float timeSeparation = this.timeSeparation;

		batch.setColor(this.color);
		for (int i = 0; i < trajectoryPointCount; i++) {
			// evaluate the equation at time t, relative to the actor's position
			float x = this.x + projectileEquation.getX(t);
			float y = this.y + projectileEquation.getY(t);

			batch.draw(trajectorySprite, x, y, width, height);

			t += timeSeparation;
		}
	}

	@Override
	public Actor hit(float x, float y) {
		// the trajectory should not intercept touch events
		return null;
	}

}

The idea behind the Controller class is to be able to modify the values from outside the actor, through a class shared between different parts of the code.
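
Wiring everything together could look like this (a sketch with made-up values and asset names, assuming an existing Stage and the old scene2d API used above, where Actor exposes public fields):

	Controller controller = new Controller();
	Sprite dotSprite = new Sprite(new Texture(Gdx.files.internal("data/dot.png"))); // hypothetical asset
	TrajectoryActor trajectoryActor = new TrajectoryActor(controller, -10f, dotSprite);

	trajectoryActor.x = 100f; // launch position
	trajectoryActor.y = 50f;
	trajectoryActor.width = 8f; // size of each trajectory point
	trajectoryActor.height = 8f;

	stage.addActor(trajectoryActor);

	// later, from the input handling code for example:
	controller.angle = 45f;
	controller.power = 60f;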

Further improvements

To make it look nicer, one possible addition is to decrease the size of the trajectory points and reduce their opacity along the path.

To do that, we draw each point of the trajectory with progressively less alpha in the color and a smaller size, by changing the width and height passed to SpriteBatch's draw().
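
The draw method could then become something like this (a sketch, the exact factors are made up):

	@Override
	public void draw(SpriteBatch batch, float parentAlpha) {
		float t = 0f;

		for (int i = 0; i < trajectoryPointCount; i++) {
			float progress = (float) i / (float) trajectoryPointCount; // 0 at launch, close to 1 at the end
			float alpha = 1f - 0.9f * progress; // fade the points out along the trajectory
			float size = width * (1f - 0.5f * progress); // and shrink them as well

			float x = this.x + projectileEquation.getX(t);
			float y = this.y + projectileEquation.getY(t);

			batch.setColor(color.r, color.g, color.b, color.a * alpha);
			batch.draw(trajectorySprite, x, y, size, size);

			t += timeSeparation;
		}
	}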

We also added a fade-in transition to show the trajectory instead of making it appear instantly, and that works great too, but that part is only in the game, not in this example.

Another possible improvement, though it depends on the game you are making, is to separate the points by a fixed distance. To do that, we have to work in terms of x instead of t. So we added a method to the ProjectileEquation class that, given a horizontal coordinate and the values of the class, returns the corresponding t; this lets us keep a constant horizontal distance between points. Here is the code:

	// returns the time t at which the projectile reaches the given horizontal coordinate x
	public float getTForGivenX(float x) {
		return (x - startPoint.x) / startVelocity.x;
	}

Now, in the draw method of the TrajectoryActor, we can compute the time separation before starting to draw the points:

	float fixedHorizontalDistance = 10f;
	timeSeparation = projectileEquation.getTForGivenX(fixedHorizontalDistance);

I am not sure which is the better option between using x or t as the main variable; as I said before, I suppose it depends on the game you are making.

Here is a video showing the results:

If you want to see it working you can test the webstart of the prototypes project, or you can go to the code and see the dirty stuff.

Conclusion

Drawing a trajectory is not hard if you know the correct formula, and it looks nice. It could also be used to help the players, either as part of the basic gameplay or maybe as a power-up.

Hope you like it.

Area triggers using Box2D, Artemis and SVG paths

As we explained in previous posts, we are using Inkscape to design the levels of some of our games, in particular, our current project. In this post we want to share how we are making area triggers using Box2D sensor bodies, Artemis and SVG paths.

What is an area trigger

When we say area trigger, we mean something that fires, an event for example, when an entity/game object enters an area, in order to perform custom logic like ending the game or showing a message. Some game engines provide this kind of feature, for example Unity3d with its Collider class and events like OnTriggerEnter.

Building an area trigger in Inkscape

Basically, we use SVG paths with custom XML data to define the area trigger; the game level loader later parses them to create the corresponding game entities. The following screenshot shows an example of an area defined using Inkscape:

Right now, we are exporting two values with the SVG path: the event we want to fire, identified by the XML attribute eventId, and extra data for that event, identified by the XML attribute eventData. For example, for our current game we use the eventId showTutorial with the text we want to show the player in the eventData attribute, like "Welcome to the training grounds". The following sketch shows roughly how that XML data looks on the SVG path (the geometry is elided, only the custom attributes are shown):

	<path
		d="..."
		eventId="showTutorial"
		eventData="Welcome to the training grounds" />

The exported data may depend on your framework or game, so you should export whatever data you need instead.
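
As a reference, reading those attributes back is straightforward with any XML parser. Here is a minimal sketch using the standard Java XML API (the file name and the printing are illustrative, this is not our actual level loader):

	import java.io.File;
	import javax.xml.parsers.DocumentBuilderFactory;
	import org.w3c.dom.Document;
	import org.w3c.dom.Element;
	import org.w3c.dom.NodeList;

	public class AreaTriggerParser {

		public static void main(String[] args) throws Exception {
			Document svg = DocumentBuilderFactory.newInstance() //
					.newDocumentBuilder() //
					.parse(new File("level.svg")); // hypothetical level file

			NodeList paths = svg.getElementsByTagName("path");
			for (int i = 0; i < paths.getLength(); i++) {
				Element path = (Element) paths.item(i);
				String eventId = path.getAttribute("eventId");
				String eventData = path.getAttribute("eventData");
				if (!eventId.isEmpty()) // only paths with our custom data are triggers
					System.out.println("area trigger: " + eventId + " -> " + eventData);
			}
		}

	}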

Defining the area trigger inside the game

Inside the game, we have to define an entity/game object for the area trigger. In the case of our current game, that entity is composed of a Box2D sensor body, with a shape built from the SVG path, and a Script with the logic to perform when the main character collides with it.

We use sensor bodies because they detect collisions without reacting to them, that is, without changing the angular and linear velocities of the bodies involved. As we explained in a previous post, we are using our custom builders to help when building Box2D bodies and fixtures. Our current body declaration looks like this:

Body body = bodyBuilder //
	.fixture(bodyBuilder.fixtureDefBuilder() //
		.polygonShape(vertices) // the vertices from the SVG path
		.categoryBits(Collisions.Triggers) // the collision category of this body
		.maskBits(Collisions.MainCharacter) // the collision mask
		.sensor() //
	) //
	.position(0f, 0f) //
	.type(BodyType.StaticBody) //
	.angle(0f) //
	.userData(entity) //
	.build();

The previous code depends on specifics of the current game, but it could be adapted to be reused in other projects.

As we explained in another previous post, we are using a basic scripting framework over Artemis. Our current script to detect the collision looks like this:

public static class TriggerWhenShipOverScript extends ScriptJavaImpl {

	private final String eventId;
	private final String eventData;

	EventManager eventManager; // expected to be set externally before the script runs

	public TriggerWhenShipOverScript(String eventId, String eventData) {
		this.eventId = eventId;
		this.eventData = eventData;
	}

	@Override
	public void update(World world, Entity e) {
		PhysicsComponent physicsComponent = Components.getPhysicsComponent(e);
		Contacts contacts = physicsComponent.getContact();

		// when the main character touches the sensor, fire the event...
		if (contacts.isInContact()) {
			eventManager.submit(eventId, eventData);
			// ...and delete the trigger entity so the event fires only once
			e.delete();
		}
	}
}

For the current game, we are testing this as a way to communicate with the player by showing messages from time to time, for example, for a basic tutorial implementation. The next video shows an example of that working inside the game:

Conclusion

The idea of this post is to share a common technique for triggering events when a game object enters an area, which is not framework dependent. You could apply the same technique using your own framework instead of Box2D and Artemis, a custom level file format instead of SVG, and the editor of your choice instead of Inkscape.

Building 2d animations using Inkscape and Synfig

In this blog post we want to share a method to animate Inkscape SVG objects using Synfig Studio, trying to follow a similar approach to the Building 2d sprites from 3d models using Blender blog post.

A small introduction about Inkscape

Inkscape is one of the best open source, multi-platform and free tools to work with vector graphics using the open standard SVG.

After some time using Inkscape, I have learned how to make a lot of things and feel great using it. However, it lacks some features that would make it a great tool, for example, a way to animate objects by interpolating between different states, defining key frames on a timeline, among others.

It has some ways to create interpolations of an object between two different states, but they are barely usable since they don't work with groups: if you have a complex object made of a group of several other objects, then you have to interpolate each of them. And if you modify one of the key frames, you have to interpolate everything again.

Synfig comes into action

Synfig Studio is a free and open-source 2D animation tool that also works with vector graphics. It lets you create nice animations using a timeline and key frames, and lets you easily export the animation. However, it uses its own format, so you can't directly import an SVG. Luckily, the format is open and there are already some ways to convert from SVG to Synfig.

In particular, I tried an Inkscape extension named svg2sif, which lets you save files in the Synfig format and seems to work fine (the extension's page explains how to install it). I don't know the possible limitations of the svg2sif extension, so use it with caution and don't expect everything to work.

Now that we have the method defined, we will explain it by showing an example.

Creating an object in Inkscape

We start by creating an Inkscape object to be animated later. For this mini tutorial I created a black creature named Bor...ahem! Gishus Maximus:

Modelling Gishus Maximus using Inkscape

Here is the SVG if you are interested in it; sadly, WordPress doesn't support SVG files as media files.

With the model defined, we have to save it in the Synfig format using the extension: go to "Save a Copy..." and select the .sif format (added by the svg2sif extension), and save it.

Animating the object in Synfig

Now that we have the Synfig file, we open it and voilà, we can animate it. However, there is a bug, probably in the svg2sif extension, where the timeline is missing. To fix it, we have to create a new document and copy the shape from the one exported by Inkscape into the new one.

The next step is to use your super animation skills and animate the object. In my case, I created some kind of eating animation by making a mouth, opening it slowly and then closing it fast:

Animating Gishus Maximus using Synfig

Here is the Synfig file with the animation if you are interested in it.

To export the animation, use the "Show the Render Settings Dialog" button, configure how many frames per second you want, among other things, and then export it using the Render button. You can export to different formats, for example, a separate PNG file for each animation frame or an animated GIF. However, some of the formats can't be configured, and the exported file was not what I wanted, so I preferred to export to a list of PNG files and then use the convert tool to create the animated GIF:
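
If you use ImageMagick's convert, the invocation could look something like this (the frame file names are hypothetical):

	convert -delay 4 -loop 0 frame_*.png animation.gif

where -delay is expressed in hundredths of a second and -loop 0 makes the GIF loop forever.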

Finally, I have a time lapse of how I applied the method if you want to watch it:

Extra section: Importing the animation in your game

Once we have the separate PNG files for the animation, we can create a sprite sheet, or use other tools to create files that can be easily imported by a game framework. For this example, I used a Gimp plug-in named Sprite Tape to import all the separate PNG files and create a sprite sheet:

If you are a LibGDX user and want to use the Texture Packer, you can create a folder, copy the PNG files there renaming them to animationname_01, animationname_02, etc., and let Texture Packer import them automatically.
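
Loading those frames back in LibGDX could then look like this (a sketch using the API of that time, newer versions use a generic Animation<TextureRegion>; the atlas and region names are made up):

	TextureAtlas atlas = new TextureAtlas(Gdx.files.internal("data/pack.atlas"));
	Animation animation = new Animation(0.1f, atlas.findRegions("animationname")); // 0.1 seconds per frame

	// then, in the render loop:
	TextureRegion frame = animation.getKeyFrame(stateTime, true); // true means looping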

Conclusions

One problem with this method is that you can't easily modify your objects in Inkscape and then automatically re-import them into Synfig and update the current animation to work with them. So, once you move to Synfig, you have to stay there to avoid a lot of duplicated work. This could be avoided if Inkscape provided a good animation extension.

Synfig Studio is a great tool but not the best, of course; it is not intuitive (much like Gimp, Blender and others) and it has some bugs that make it crash for no reason. On the other hand, it is open source, free and multi-platform, and the best part is that it works well for what we need right now 😉

This method allows us to animate vector graphics, which is great since it gives programmers like us a way to animate our programmer art 😀

Finally, I am not an animation expert at all, so this blog post could be based on some wrong assumptions. If you are one, feel free to correct me and share your opinions.

As always, hope you like the post.
