Implementing Fog of War for RTS games in Unity 1/2

For the last three years, I've been working on Iron Marines at Ironhide Game Studio, a real-time strategy game for mobile devices. During its development, we created a Fog of War solution that works pretty well for the game, but it lacks some of the common features other RTS games have, and learning how to improve that is something I wanted to do at some point.

Recently, after reading a Riot Games Engineering blog post about Fog of War in League of Legends, I got motivated and started prototyping a new implementation.

In this blog post I will explain Iron Marines' Fog of War solution in detail, and in a follow-up post I will describe the new solution and explain why I consider it better than the first one.

Fog of War in Strategy Games

It normally represents missing information about the battle, for example, not knowing what the terrain looks like yet, or outdated information, for example, the old position of an enemy base. Player units and buildings provide vision that removes Fog during the game, revealing information about the terrain and the current location and state of the enemies.

Example: Dune 2 and its Fog of War representing the unknown territory (by the way, you can play Dune 2 online).

Example: Warcraft: Orcs and Humans' Fog of War (it seems you can play Warcraft online too).

The concept of Fog of War has been used in strategy games for more than 20 years now, which is a lot for video games.

Process

We started by researching other games and deciding what we wanted before starting to implement anything.

After that, we decided to target a solution similar to Starcraft's (by the way, it is free to play now, just download Battle.net and create an account). In that game, units and buildings have a vision range that provides vision to the Player. Unexplored territory is covered with a fully opaque black fog, while previously explored territory is covered by a half-opacity fog, revealing what the Player knows about it, information that doesn't change during the game.

Enemy units and buildings are visible only if they are inside the Player's vision, but buildings leave behind a last known location after they are not visible anymore. I believe the main reason is that they normally can't move (with the exception of some Terran buildings), so it is logical to assume they will stay in that position after losing vision, and that might be vital information about the battle.

Iron Marines

Given those rules, we created mock images to see how we wanted it to look in our game before we started implementing anything.

Mock Image 1: Testing terrain with different kind of Fog in one of the stages of Iron Marines.

Mock Image 2: Testing now with enemy units to see when they should be visible or not.

Mock Image 3: Just explaining what each Fog color means for the Player's vision.

We started by prototyping the logic to see whether it worked for our game and how we should adapt it.

For that, we used an int matrix representing a discrete version of the game world to store the Player's vision. A matrix entry with value 0 means the Player has no vision at that position, and a value of 1 or greater means it has.

Image: in this matrix there are 3 visions, and one has greater range.

Units' and buildings' visions increment by 1 the value of all entries that represent world positions inside their vision range. Each time they move, we first decrement by 1 the entries of the previous position and then increment by 1 those of the new position.
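To make the idea more concrete, here is a minimal C# sketch of such a vision matrix (not the actual Iron Marines code; the class name, the cell size and the circular range check are simplified assumptions):

using UnityEngine;

// Simplified sketch of the vision matrix idea: world positions are mapped to matrix
// coordinates and every entry inside the vision range is incremented or decremented.
public class VisionMatrix
{
    readonly int[,] values;
    readonly float cellSize; // world units covered by each matrix entry

    public VisionMatrix(int width, int height, float cellSize)
    {
        values = new int[width, height];
        this.cellSize = cellSize;
    }

    public bool IsVisible(Vector2 worldPosition)
    {
        int x = (int)(worldPosition.x / cellSize);
        int y = (int)(worldPosition.y / cellSize);
        return values[x, y] > 0;
    }

    public void AddVision(Vector2 worldPosition, float range)
    {
        ApplyVision(worldPosition, range, +1);
    }

    public void RemoveVision(Vector2 worldPosition, float range)
    {
        ApplyVision(worldPosition, range, -1);
    }

    void ApplyVision(Vector2 worldPosition, float range, int delta)
    {
        int cx = (int)(worldPosition.x / cellSize);
        int cy = (int)(worldPosition.y / cellSize);
        int r = Mathf.CeilToInt(range / cellSize);

        for (int x = cx - r; x <= cx + r; x++)
        {
            for (int y = cy - r; y <= cy + r; y++)
            {
                if (x < 0 || y < 0 || x >= values.GetLength(0) || y >= values.GetLength(1))
                    continue;
                // only touch entries inside the circular vision range
                if ((x - cx) * (x - cx) + (y - cy) * (y - cy) <= r * r)
                    values[x, y] += delta;
            }
        }
    }
}

When a unit moves, the caller would first call RemoveVision with the old position and then AddVision with the new one.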

We keep a matrix for each Player, used both for showing or hiding enemy units and buildings and for auto-targeting abilities that can't fire outside the Player's vision.

To determine whether an enemy unit or building is visible, we first get the corresponding matrix entry by transforming its world position and check whether the stored value is greater than 0. If it isn't, we change its GameObject layer to one that is culled from the main camera to avoid rendering it; we named that layer "hidden". If it is visible, we change it back to the default layer, so it starts being rendered again.
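A hedged sketch of that check, assuming the VisionMatrix from the previous sketch and a "hidden" layer culled from the main camera (the component itself is illustrative, not our actual code):

using UnityEngine;

// Switches the enemy's layer depending on whether its position is inside the vision.
public class FogOfWarVisibility : MonoBehaviour
{
    public VisionMatrix playerVision; // assigned in code by whoever owns the Player's vision

    int hiddenLayer;
    int defaultLayer;

    void Awake()
    {
        hiddenLayer = LayerMask.NameToLayer("hidden");
        defaultLayer = gameObject.layer; // remember the original, rendered layer
    }

    void LateUpdate()
    {
        bool visible = playerVision.IsVisible(transform.position);
        // the "hidden" layer is culled from the main camera, so this stops the rendering
        gameObject.layer = visible ? defaultLayer : hiddenLayer;
    }
}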

Image: shows how enemy units are not rendered in the Game view. I explain later why buildings are rendered even outside the Player's vision.

Visuals

We started by just rendering a black or grey quad over the game world for each matrix entry; here is an image showing how it looked (it is the only one I found in the chest of memories):

This allowed us to prototype and decide which features we didn't want. In particular, we avoided having obstacles like mountains or trees block vision, since we preferred to avoid the feeling of confinement and we also don't have multiple terrain levels like other games do. I will talk more about that feature in the next blog post.

After we knew what we wanted and had tested it in the game for a while, we decided to start improving the visual solution.

The improved version consists of rendering a texture with the Fog of War over the entire game world, similar to what we did when we created the visual mocks.

For that, we created a GameObject with a MeshRenderer and scaled it to cover the game world. That mesh renders a texture named FogTexture, which contains the Fog information, using a Shader that treats pixel colors as an inverted alpha channel: white is fully transparent and black is fully opaque.

Now, in order to fill the FogTexture, we created a separate Camera, named FogCamera, that renders to the texture using a RenderTexture. For each object that provides vision in the game world, we created a corresponding GameObject inside the FogCamera by transforming its position accordingly and scaling it based on the vision's range. We use a separate Unity Layer that is culled from the other cameras so that only the FogCamera renders those GameObjects.

To complete the process, each of those objects has a SpriteRenderer with a small white ellipse texture to render white pixels into the RenderTexture.

Note: we use an Ellipse instead of a Circle to simulate the game perspective.
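Here is a rough sketch of how one of those vision objects could be created and kept in sync; the layer name, the scaling and the 1:1 position mapping are assumptions for illustration:

using UnityEngine;

// Mirrors a vision-providing object into the layer rendered only by the FogCamera.
public class FogVisionRevealer : MonoBehaviour
{
    public Sprite visionEllipse;   // small white ellipse with transparent borders
    public float visionRange = 5f; // vision radius in world units
    public Transform fogRoot;      // parent under the FogCamera rig

    Transform fogVision;

    void Start()
    {
        var go = new GameObject("FogVision");
        go.layer = LayerMask.NameToLayer("Fog"); // layer culled from every camera but the FogCamera
        go.AddComponent<SpriteRenderer>().sprite = visionEllipse;

        fogVision = go.transform;
        fogVision.SetParent(fogRoot, false);
        // the scale would really depend on the sprite's pixels per unit; simplified here
        fogVision.localScale = Vector3.one * visionRange * 2f;
    }

    void LateUpdate()
    {
        // assumes the fog space matches the world space; in the real game the position
        // is transformed to wherever the FogTexture quad sits over the game world
        fogVision.position = transform.position;
    }
}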

Image: This is the texture used for each vision, it is a white Ellipse with transparency (I had to make the transparency opaque so the reader can see it).

Image: this is an example of the GameObjects and the FogCamera.

In order to make the FogTexture look smooth over the game, we applied a small blur to the FogCamera when rendering to the RenderTexture. We tested different blur shaders and different configurations until we found one that worked fine on multiple mobile devices. Here is how it looks:

And here is how the Fog looks in the game, without and with blur:

In order to render previously revealed territory, we had to add an extra step before the process. In this step, we configured another camera, named PreviousFogCamera, which also renders to a RenderTexture, named PreviousVisionTexture, and we first render the visions there (using the same procedure). The main difference is that this camera is configured to not clear the buffer, by using the "Don't Clear" clear flag, so we can keep the data from previous frames.

After that, we render both the PreviousVisionTexture (in gray) and the vision GameObjects into the FogTexture using the FogCamera. The final result looks like this:

Image: it shows the revealed territory in the FogCamera.

Image: and here is an example of how the Fog with previous revealed territory looks in the game.
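The camera setup could look roughly like this; the layer names and the gray quad that displays the PreviousVisionTexture are assumptions, since the real setup was done in the editor:

using UnityEngine;

// Two-camera setup: one accumulates everything ever seen, the other builds the final fog.
public class FogOfWarCameras : MonoBehaviour
{
    public Camera previousFogCamera; // accumulates previously revealed territory
    public Camera fogCamera;         // renders the final FogTexture every frame

    public RenderTexture previousVisionTexture;
    public RenderTexture fogTexture;

    void Awake()
    {
        // never clear the buffer so previously revealed areas stay white across frames
        previousFogCamera.clearFlags = CameraClearFlags.Nothing;
        previousFogCamera.cullingMask = LayerMask.GetMask("Fog");
        previousFogCamera.targetTexture = previousVisionTexture;

        // the FogCamera clears to black (unexplored) and sees both the current vision
        // ellipses and a gray quad textured with the PreviousVisionTexture
        fogCamera.clearFlags = CameraClearFlags.SolidColor;
        fogCamera.backgroundColor = Color.black;
        fogCamera.cullingMask = LayerMask.GetMask("Fog", "PreviousFog");
        fogCamera.targetTexture = fogTexture;
    }
}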

Buildings

Since buildings in Iron Marines are big and don't move, like in Starcraft, we wanted to follow a similar solution.

In order to do that, we identified the buildings we wanted to show below the Fog by adding a Component to them, configuring that they were going to be rendered even when not inside the Player's vision.

Then, there is a System that, when a GameObject with that Component enters the Player's vision for the first time, creates another GameObject and configures it accordingly. That GameObject is automatically turned on when the building is no longer inside the Player's vision and turned off when the building is inside vision. If, for some reason, the building was destroyed while not inside vision, the GameObject doesn't disappear until the Player discovers its new state.
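A minimal sketch of that behaviour, assuming the VisionMatrix from the earlier sketches and a prefab for the "last known" visual copy (the real system is more involved):

using UnityEngine;

// Keeps a ghost of the building at its last known position while it is out of vision.
public class LastKnownBuilding : MonoBehaviour
{
    public VisionMatrix playerVision; // the Player's vision
    public GameObject ghostPrefab;    // visual copy shown where the building was last seen

    GameObject ghost;
    bool discovered;

    void LateUpdate()
    {
        bool visible = playerVision.IsVisible(transform.position);

        // create the ghost the first time the building enters the Player's vision
        if (visible && !discovered)
        {
            discovered = true;
            ghost = Instantiate(ghostPrefab, transform.position, transform.rotation);
        }

        // show the ghost only while the real building is outside the vision; if the
        // building is destroyed meanwhile, the ghost simply stays until rediscovered
        if (ghost != null)
            ghost.SetActive(!visible);
    }
}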

We added a small easing when entering and leaving the Player's vision to make it look a bit smoother. Here is a video showing how it looks:

Conclusion

Our solution lacks some of the common Fog of War features, but it works perfectly for our game and looks really nice. It also performs pretty well on mobile devices, our main target, where a careless implementation could have affected the game negatively. We are really proud and happy with what we achieved developing Iron Marines.

That was, in great part, how we implemented Fog of War for Iron Marines. I hope you liked both the solution and the article. In the next blog post I will talk more about the new solution which includes more features.

Thanks for reading!

 


How we used Entity Component System (ECS) approach at Gemserk - 2/2

So, after our first attempt at using ECS, when we started to develop mobile games and moved to the LibGDX framework, we decided to abandon our ComponentsEngine and start over.

We were still reading about ECS while we were creating our set of tools and code over LibGDX. At some point, some Java community developers started Artemis, a lightweight ECS, and we decided to give it a try.

Artemis

This is a simplified core architecture diagram:

Note: we used a modified version of Artemis with some additions, like enabling/disabling an Entity.

In this pure approach, Entities and Components are just data, and they are related by identifiers, like these tables in a relational database:

Here we have two entities, both with a PositionComponent and only one of them with a MovementComponent.

An example of these components in code:

public class PositionComponent extends Component {
    public float x, y;
}

public class MovementComponent extends Component {
    public float speed;
}

EntitySystems perform the game logic. They normally work on a subset of an Entity's Components, but they may also need the Entity itself in order to enable/disable/destroy it (if that is part of their logic).

An example of a System is LimitLinearVelocitySystem, used in several of our games to limit the physics velocity of an Entity with a PhysicsComponent and a LinearVelocityLimitComponent:

public void process(Entity e) {
    PhysicsComponent physicsComponent = 
                     Components.getPhysicsComponent(e);
    Body body = physicsComponent.getPhysics().getBody();

    LinearVelocityLimitComponent limitComponent = 
             e.getComponent(LinearVelocityLimitComponent.class);
    Vector2 linearVelocity = body.getLinearVelocity();

    float speed = linearVelocity.len();
    float maxSpeed = limitComponent.getLimit();

    if (speed > maxSpeed) {
        float factor = maxSpeed / speed;
        linearVelocity.mul(factor);
        body.setLinearVelocity(linearVelocity);
    }
}

There are some extra classes, like the TagManager, which allows assigning a unique string identifier to an Entity in order to find it later, and the GroupManager, which allows adding entities to groups identified by a name, which is closer to how tags are used everywhere else.

Even though it could look similar to our ComponentsEngine, where the Properties of that engine correspond to the Components of this one, and its Components to Systems, there is an important difference: Systems are not part of an Entity and work horizontally over all entities with specific Components (data). So, in this approach, changing only the data of entities does change their behaviour in the game, which makes it really data driven.

Since Entities and Components are just data, their instances in memory are easier to reuse. An Entity is just an id, so that is obvious. A Component is a set of data that doesn't care too much which Entity it belongs to. So, after the Entity doesn't need it anymore, it can be reused elsewhere, greatly improving memory usage and garbage collection.

Artemis, and other pure ECS implementations, are really lightweight and performant frameworks. The real power comes from the Systems and Components built over them. In our case, we created a lot of Systems that we started to reuse between games, from SuperFlyingThing to VampireRunner and even in Clash of the Olympians.

Scripting

One of the most interesting ones is the Scripting framework we created over Artemis. It was really useful and gave us a lot of power to overcome some of the limitations we were facing with the pure approach, when we needed specific logic for a specific moment during the game, (probably) never to be used again, and we didn't want a dedicated System for that.

They work similarly to Unity MonoBehaviours and our logic Components from ComponentsEngine, and they are also tied to the Entity lifecycle. One difference, however, is that they try to avoid storing data in their instances as much as possible, and instead store and read data from Components, like Systems do.

Here is an example Script from one of our games:

public class MovementScript extends ScriptJavaImpl {
	
  @Override
  public void update(World world, Entity e) {

    MovementComponent movementComponent = Components.getMovementComponent(e);
    SpatialComponent spatialComponent = Components.getSpatialComponent(e);
    ControllerComponent controllerComponent = Components.getControllerComponent(e);
		
    Controller controller = controllerComponent.controller;
    Movement movement = movementComponent.getMovement();
    Spatial spatial = spatialComponent.getSpatial();
		
    float rotationAngle = controllerComponent.rotationSpeed * GlobalTime.getDelta();
		
    Vector2 linearVelocity = movement.getLinearVelocity();
		
    if (controller.left) {
        spatial.setAngle(spatial.getAngle() + rotationAngle);
        linearVelocity.rotate(rotationAngle);
    } else if (controller.right) {
        spatial.setAngle(spatial.getAngle() - rotationAngle);
        linearVelocity.rotate(-rotationAngle);
    }
		
  }
	
}

Note: GlobalTime.getDelta() is similar to Unity's Time API.

One problem with the Scripting framework is that it is not always clear whether some logic should be in a Script or in a System, and that made reusability a bit harder. In the case of the example above, it is obvious it should be moved to a System.

Templates

Another useful thing we did over Artemis was using templates to build entities in a specific way, similar to what we had in ComponentsEngine.

We also used a concept of Parameters in templates in order to customize parts of them. As in ComponentsEngine, templates could apply other templates, and different templates could be applied to the same Entity.

Here is an example of an Entity template:

public void apply(Entity entity) {

    String id = parameters.get("id");
    String targetPortalId = parameters.get("targetPortalId");
    String spriteId = parameters.get("sprite", "PortalSprite");
    Spatial spatial = parameters.get("spatial");
    Script script = parameters.get("script", new PortalScript());

    Sprite sprite = resourceManager.getResourceValue(spriteId);

    entity.addComponent(new TagComponent(id));
    entity.addComponent(new SpriteComponent(sprite, Colors.darkBlue));
    entity.addComponent(new Components.PortalComponent(targetPortalId, spatial.getAngle()));
    entity.addComponent(new RenderableComponent(-5));
    entity.addComponent(new SpatialComponent(spatial));
    entity.addComponent(new ScriptComponent(script));

    Body body = bodyBuilder //
            .fixture(bodyBuilder.fixtureDefBuilder() //
                    .circleShape(spatial.getWidth() * 0.35f) //
                    .categoryBits(CategoryBits.ObstacleCategoryBits) //
                    .maskBits((short) (CategoryBits.AllCategoryBits & ~CategoryBits.ObstacleCategoryBits)) //
                    .sensor()) //
            .position(spatial.getX(), spatial.getY()) //
            .mass(1f) //
            .type(BodyType.StaticBody) //
            .userData(entity) //
            .build();

    entity.addComponent(new PhysicsComponent(new PhysicsImpl(body)));

}

That template configures a Portal entity in SuperFlyingThing which teleports the main ship from one place to another. Click here for the list of templates used in that game.

Interacting with other systems

Sometimes you already have a solution for something, like the physics engine, and you want to integrate it with the ECS. The approach we found was to synchronize data from that system to the ECS and vice versa, sometimes replicating a bit of data to make it easier for the game to use.

Finally

Even though we had to do some modifications and we know it could still be improved, we loved using a pure ECS in the way we did.

It was used to build different kinds of games/prototypes and even a full game like Clash of the Olympians for mobile devices, and it worked pretty well.

What about now?

Right now we are not using any of this in Unity. However, we never lost interest nor faith in ECS, and in the last years new solutions over Unity have started to appear that caught our attention. Entitas is one of them. The Unity team is working towards this path too, as shown in this video. Someone who didn't want to wait started implementing that solution in his own way. We've also watched some Unite and GDC talks about using this approach in big games, and some of them even have a Scripting layer too, which is awesome since, in some way, it validates we weren't so wrong ;).

I am excited to try the ECS approach again in the near future. I believe it could be a really good foundation for multiplayer games, and that is something I'm really interested in doing at some point.

Thanks for reading.


Assigning interface dependencies to MonoBehaviour fields in Unity Editor

In Unity, an Object's public fields of an interface type are not serialized, which means it is not possible to configure them using the editor like you do with other things, such as dependencies to MonoBehaviours.

One possible way to work around this problem is to create an abstract MonoBehaviour and depend on that instead:

 public abstract class ServiceBase : MonoBehaviour
 {
     public abstract void DoStuff();    
 }
 
 public class ServiceUser : MonoBehaviour
 {
     public ServiceBase service;
 
     void Update()
     {
         service.DoStuff ();
     }
 }

This solution has some limitations. One of them is that you can't derive from multiple base classes like you can when implementing interfaces. Another is that you can't easily switch from a MonoBehaviour implementation to a ScriptableObject implementation (or any other).

Working on Iron Marines and Dashy Ninja, I normally end up referencing a UnityEngine.Object and then, at runtime, trying to get the specific interface, like this:

 public interface IService 
 {
      void DoStuff();
 }
 
 public class CharacterExample : MonoBehaviour
 {
     public UnityEngine.Object objectWithInterface;
     IService _service;
 
     void Start ()
     {
         var referenceGameObject = objectWithInterface as GameObject;
         if (referenceGameObject != null) {
             _service = referenceGameObject.GetComponentInChildren<IService> ();
         } else {
             _service = objectWithInterface as IService;
         }
     }

     void Update()
     {
         _service.DoStuff ();
     }
 }
 

Then, in Unity Editor I can assign both GameObjects and ScriptableObjects (or any other Object):

That works pretty well, but I end up with the code for getting the interface duplicated all around.

To avoid that, I wrote two helper classes with the code to retrieve the interface from the proper location: InterfaceReference and InterfaceObject<T>.

InterfaceReference

 [Serializable]
 public class InterfaceReference
 {
     public UnityEngine.Object _object; 
     object _cachedGameObject;
 
     public T Get<T>() where T : class
     {
         if (_object == null)
             return _cachedGameObject as T;

         var go = _object as GameObject;
 
         if (go != null) {
             _cachedGameObject = go.GetComponentInChildren<T> ();
         } else {
             _cachedGameObject = _object;
         }
 
         return _cachedGameObject as T;
     }

     public void Set<T>(T t) where T : class
     {
         _cachedGameObject = t;
         _object = t as UnityEngine.Object;
     }
 }

Usage example:

 public class CharacterExample : MonoBehaviour
 {
     public InterfaceReference service;
 
     void Update()
     {
         // some logic
         service.Get<IService>().DoStuff();
     }
 }

The advantage of this class is that it can simply be used anywhere and it is already serialized by Unity. The disadvantage is having to ask for the specific interface each time you call the Get() method. If that happens in only one place, great, but if you have to do it a lot in the code it doesn't look so good. In that case, it could be better to have another field and assign it once in the Awake() or Start() methods.

InterfaceObject

 public class InterfaceObject<T> where T : class
 {
     public UnityEngine.Object _object;
     T _instance;
 
     public T Get() {
         if (_instance == null) {
             var go = _object as GameObject;
             if (go != null) {
                 _instance = go.GetComponentInChildren<T> ();
             } else {
                 _instance = _object as T;
             }
         }
         return _instance;
     }
 
     public void Set(T t) {
         _instance = t;
         _object = t as UnityEngine.Object;
     }
 }

Note: the Get() and Set() methods could be a C# Property also, which would be useful to allow the debugger to evaluate the property when inspecting it.

Usage example:

 [Serializable]
 public class IServiceObject : InterfaceObject<IService>
 {
     
 }
 
 public class CharacterExample : MonoBehaviour
 {
     public IServiceObject service;
 
     void Update()
     {
         // some logic
         service.Get().DoStuff();
     }
 }

The advantage of this class is that you can use the Get() method without having to cast to the specific interface. The disadvantage, however, is that it is not serializable because it is a generic class. In order to solve that, an empty implementation with a specific type must be created to allow Unity to serialize it, as shown in the previous example. If you don't have a lot of interfaces you want to configure, then I believe this could be a good solution.

All of this is simple but useful code to help configure things in the Unity Editor.

I am working on different experiments on how to improve Unity's usage for me. Here is a link to the project in case you are interested in taking a look.

Thanks for reading


Our tips to improve Unity UI performance when making games for mobile devices

There is a Unity Unite talk named "Unite Europe 2017 - Squeezing Unity: Tips for raising performance" by Ian Dundore, in which he explains how Unity works behind the scenes and what you can do in your game to improve its performance.

Update: I just noticed (when I was about to complete the post) that there is another talk by Ian from Unite 2016, named "Unite 2016 - Let's Talk (Content) Optimization", which has other good tips as well.

There are two techniques to improve Unity UI performance that we use at work and that they didn't mention in the video, and we want to share them in this blog post. One of them is using the CanvasGroup component and the other one is using RectMask2D.

CanvasGroup

The CanvasGroup component controls the alpha value of all the elements inside its RectTransform hierarchy and whether that hierarchy handles input or not. The first is mainly used for rendering purposes, the second for user interaction.

What is awesome about CanvasGroup is that, if you want to avoid rendering a hierarchy of elements, you can just set its alpha to 0 and that will avoid sending it to the render queue, improving GPU usage, or at least that is what the FrameDebugger and our performance tests say. If you also want to prevent that hierarchy from consuming events from the EventSystem, you can turn off the blocksRaycasts property and that will avoid all the raycast checks for its children, improving CPU usage. The combination of the two is important. It is also easier and more designer friendly than iterating over all the children with a CanvasRenderer and disabling them, and the same goes for disabling/enabling all objects handling raycasts.

In our case, at work, we are using multiple Canvas objects and keep all of them "disabled" (neither rendered nor handling input) using the CanvasGroup alpha and blocksRaycasts properties. That greatly improves the speed of activating and deactivating our metagame screens, since it avoids regenerating the mesh and recalculating the layout, which GameObject.SetActive() does.
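As a reference, the "disable" we are talking about boils down to something as small as this (a minimal sketch, not our actual screen code):

using UnityEngine;

// Hides a whole screen without SetActive(): nothing is rendered and no raycasts are checked.
public class ScreenVisibility : MonoBehaviour
{
    public CanvasGroup canvasGroup;

    public void Show(bool visible)
    {
        canvasGroup.alpha = visible ? 1f : 0f; // alpha 0 skips the render queue
        canvasGroup.blocksRaycasts = visible;  // false skips raycast checks for all children
        canvasGroup.interactable = visible;
    }
}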

RectMask2D

The idea when using masks is to hide part of the screen in some way, even using particular shapes. We use masks a lot at work in the metagame screens, mainly to show stuff in a ScrollRect in a nice way.

We started using just the Mask component, with an Image without a Sprite set, to crop what we wanted. Even though it worked ok, it wasn't performing well on mobile devices. After investigating a bit with the FrameDebugger, we discovered we had tons of render calls for stuff that was outside the ScrollRect (and the mask).

Since we are just using rectangular containers for those ScrollRects, we switched to RectMask2D instead. For ScrollRects with a lot of elements, that change improved performance enormously, since the elements outside the mask were no longer there in terms of render calls.

This was a great change, but it only works if you are using rectangular containers; it doesn't work with other shapes. Note that the Unity UI Mask tutorial only shows image masks and doesn't say anything about performance cost at all (it should).

Note: when working with masks, there is a common technique of adding something over the mask to hide possible ugly mask borders; we normally do that on every ScrollRect that doesn't cover the whole screen.

Bonus track: The touch hack

There is another one, a hack, that we call the Touch Hack. It is a way to handle touch over the whole screen without a render penalty; not sure it is a great tip, but it helped us.

The first thing we thought of when handling touch over the whole screen (to do popup logic and/or block all the other canvases) was to use an Image, without a Sprite set, stretched over the whole screen with raycasts enabled. That worked ok, but it was not only breaking batching but also rendering a big empty transparent quad over the whole screen, which is really bad on mobile devices.

Our solution was to use a Text instead, also stretched over the whole screen but with no Font and no text set. It doesn't consume render time (tested on mobile devices) and handles the raycasts as expected; I suppose that is because it doesn't generate a mesh (since it has no text and no font set) while still having the bounding box for raycasts configured.
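Here is a sketch of how such a blocker could be built from code; in our case it was simply configured in the editor, so the helper below is just an illustration:

using UnityEngine;
using UnityEngine.UI;

// Fullscreen Text with no font and no text: it generates no geometry to render,
// but its bounding box still catches raycasts from the EventSystem.
public static class TouchBlocker
{
    public static Text AddTo(Canvas canvas)
    {
        var go = new GameObject("TouchBlocker", typeof(RectTransform));
        go.transform.SetParent(canvas.transform, false);

        // stretch the RectTransform to cover the whole canvas
        var rect = (RectTransform)go.transform;
        rect.anchorMin = Vector2.zero;
        rect.anchorMax = Vector2.one;
        rect.offsetMin = Vector2.zero;
        rect.offsetMax = Vector2.zero;

        var text = go.AddComponent<Text>(); // no font, no text set on purpose
        text.raycastTarget = true;
        return text;
    }
}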

Conclusion

It is really important to have good tools to detect where the problems are and a way to know whether you are improving or not. We use the FrameDebugger a lot (to see what is being drawn, how many render calls there are, etc.), the overdraw Scene view and the Profiler (to see the Unity UI CPU cost).

Hope these tips help you improve the performance of your games even more when using Unity UI.

More

Optimizing Unity UI - http://www.tantzygames.com/blog/optimizing-unity-ui/

A guide to optimizing Unity UI - https://unity3d.com/es/learn/tutorials/temas/best-practices/guide-optimizing-unity-ui

Implementing Multiple Canvas Groups In Unity 5 - http://www.israel-smith.com/thoughts/implementing-multiple-canvas-groups-in-unity-5/

 


Using Unity Text to show numbers without garbage generation

The idea of this post is to show different ideas and analysis on how to use Unity UI Text to show numbers without garbage generation. I need this for a framerate counter and other debug values.

Test case

Show a fixed-length number on screen, regenerated each frame with a new random value.

Shows how the test scene used for all test cases works.

Using Strings

Since strings are immutable in C#, common operations on strings generate new strings and hence allocate new heap memory. If you are using strings as temporary values, like showing a changing number in a UI text, then that memory becomes garbage. On PC that garbage could go unnoticed, but not on mobile devices, since it can result in a hiccup when the garbage collector decides to collect it.

The idea with these tests is to try to make the label work with strings without garbage generation. To detect generated garbage I am using the Unity profiler and avoiding ToString() on int, float, etc., to measure just the cost of the string manipulation for now.

String concatenation

String concatenation generates 30 Bytes per frame since internally String.Concat() calls String.InternallyAllocateStr().

It is not as bad as expected: it just creates a new string with the length of the first string plus the second and then copies their values. Obviously it becomes worse when multiple concatenations are done in sequence.

Test code:

Text text;
 
static readonly string[] numbers = { "0", "1", "2", "3", "4", "5", "6", "7", "8", "9" };
 
void Start () {
    text = GetComponent<Text> ();
}
 
void Update () {
     
    string a = numbers[UnityEngine.Random.Range(0, numbers.Length)];
    string b = numbers[UnityEngine.Random.Range(0, numbers.Length)];
 
    text.text = a + b;
}

String format

Using string.Format() generates 176 Bytes per frame; internally it uses String.FormatHelper + StringBuilder.ToString(). The first one creates a new StringBuilder and the second one converts the StringBuilder to a string.

Test code:

 Text text;
 
 static readonly string[] numbers = { "0", "1", "2", "3", "4", "5", "6", "7", "8", "9" };
 
 void Start () {
     text = GetComponent<Text> ();
 }
 
 void Update () {
     string a = numbers[UnityEngine.Random.Range(0, numbers.Length)];
     string b = numbers[UnityEngine.Random.Range(0, numbers.Length)];
 
     text.text = string.Format ("{0}{1}", a, b);        
     
 }

String Builder Format

Using a cached StringBuilder improves the previous one a bit; it generates 86 Bytes per frame, where AppendFormat generates garbage and then set_Length() (used to clear the StringBuilder) does too.

Test code:

 Text text;
 StringBuilder stringBuilder = new StringBuilder(20, 20);
 
 static readonly string[] numbers = { "0", "1", "2", "3", "4", "5", "6", "7", "8", "9" };
 
 void Start () {
     text = GetComponent<Text> ();
     stringBuilder.Length = 3;
 }
 void Update () {
     string a = numbers[UnityEngine.Random.Range(0, numbers.Length)];
     string b = numbers[UnityEngine.Random.Range(0, numbers.Length)];
 
     stringBuilder.Length = 0;
     stringBuilder.AppendFormat ("{0}{1}", a, b);
 
     text.text = stringBuilder.ToString();
 }

Note: if I change the StringBuilder starting capacity and max capacity, the cost is the same but moves to the ToString() method instead, which internally goes to the same String.InternallyAllocateStr() method.

String Builder only Append

Instead of using StringBuilder.AppendFormat, we change to only StringBuilder.Append. This reduces the cost to only 30 Bytes per frame (the same as the first one); the only cost here is set_Length(), which internally calls String.InternallyAllocateStr().

Test code:

 Text text;
 StringBuilder stringBuilder = new StringBuilder(20, 20);
 
 static readonly string[] numbers = { "0", "1", "2", "3", "4", "5", "6", "7", "8", "9" };
 
 void Start () {
     text = GetComponent<Text> ();
     stringBuilder.Length = 3;
 }
 
 void Update () {
     string a = numbers[UnityEngine.Random.Range(0, numbers.Length)];
     string b = numbers[UnityEngine.Random.Range(0, numbers.Length)];
 
     stringBuilder.Length = 0;
     stringBuilder.Append (a);
     stringBuilder.Append (b);
 
     text.text = stringBuilder.ToString();
 }

Note: the behaviour is the same if I change the starting and max capacity; the cost is the same but it is in ToString() instead of set_Length().

String Builder by replacing chars

If instead of Append I replace chars directly using the [] indexer and avoid set_Length(), the cost is the same, 30 Bytes per frame, since String.InternallyAllocateStr() is now called from set_Chars().

Test code:

 Text text;
 StringBuilder stringBuilder = new StringBuilder(20, 20);
 
 static readonly char[] numbers = { '0', '1', '2', '3', '4', '5', '6', '7', '8', '9' };
 
 void Start () {
     text = GetComponent<Text> ();
     stringBuilder.Length = 3;
 }

 void Update () {
     char a = numbers[UnityEngine.Random.Range(0, numbers.Length)];
     char b = numbers[UnityEngine.Random.Range(0, numbers.Length)];
 
     stringBuilder [0] = a;
     stringBuilder [1] = b;
 
     text.text = stringBuilder.ToString();
 }

Note: again, the behaviour is the same if I change the starting and max capacity; instead of set_Chars(), the cost is in the ToString() method.

String Builder, access internal string by reflection

There is a suggestion in this post to access, via reflection, the _str field of the StringBuilder class to avoid the cost of the ToString() method.

Test code:

 Text text;
 StringBuilder stringBuilder = new StringBuilder(20, 20);

 static System.Reflection.FieldInfo _sb_str_info = 
        typeof(StringBuilder).GetField("_str", 
        System.Reflection.BindingFlags.NonPublic | 
        System.Reflection.BindingFlags.Instance);
 
 static readonly char[] numbers = { '0', '1', '2', '3', '4', '5', '6', '7', '8', '9' };
 
 void Start () {
     stringBuilder.Length = 3;
 
     text = GetComponent<Text> ();
 }
 
 void Update () {
     stringBuilder[0] = numbers[UnityEngine.Random.Range(0, numbers.Length)];
     stringBuilder[1] = numbers[UnityEngine.Random.Range(0, numbers.Length)];
     stringBuilder[2] = (char) 0;
 
     var internalValue = _sb_str_info.GetValue (stringBuilder) as string;
     text.text = internalValue;
 }

In this case, there is no garbage at all. However, I see no change in the UI Text even though the editor shows the text field value is changing, as if it were not being redrawn on screen. I suppose that could be because the string reference is not changing; taking a look at the Text code from Unity UI, it compares with != instead of Equals... not sure here.

 public virtual string text
 {
     get
     {
         return m_Text;
     }
     set
     {
         if (String.IsNullOrEmpty(value))
         {
             if (String.IsNullOrEmpty(m_Text))
                 return;
             m_Text = "";
             SetVerticesDirty();
         }
         else if (m_Text != value)
         {
             m_Text = value;
             SetVerticesDirty();
             SetLayoutDirty();
         }
     }
 }

I tried forcing the layout and vertices dirty after updating the internal string, just in case, but had no luck (sad face).

Caching strings

Another option, suggested in this blog post, is to precache strings for the different numbers, but that is only reasonable for a small number of digits. I like it because it is simple, the cache can be generated at runtime, and it works well for debug numbers like FPS, where the number is normally between 0 and 60.

I tried it and it works really well and generates 0 Bytes per frame.

Test code:

 Text text;
 
 string[] generated;
 
 // Use this for initialization
 void Start () {
     text = GetComponent<Text> ();
 
     generated = new string[100];
 
     // should go from 0 to 99.
     for (int i = 0; i < 100; i++) {
         generated [i] = string.Format ("{0:00}", i);
     }
 }
 
 // Update is called once per frame
 void Update () {
     int random = UnityEngine.Random.Range (0, generated.Length);
     text.text = generated [random];
 }
 

Rendering numbers directly

One possible way to avoid all this garbage (I mean both the code and the unused memory) is to not use strings at all, but to just render images for each digit to the screen, where each digit is a different sprite.

When making the TinyWarriors prototype, I did a basic number renderer where I could specify the number of digits and it just created multiple Unity UI Images inside a horizontal layout.

Shows a test using images for each digit instead of a text.

Test code:

 public Image[] numbers;
 
 // in order, like 0, 1, 2, ..., 9
 public Sprite[] numberSprites;
 
 public bool fillZero = true;
 
 void Start()
 {
     SetNumber (0);
 }
 
 public void SetNumber(int number)
 {
     int tens = (number % 100) / 10;
     int ones = (number % 10);
 
     var tensActive = fillZero || tens != 0;
     var onesActive = fillZero || number > 0;
 
     numbers [0].gameObject.SetActive (tensActive);
     numbers [1].gameObject.SetActive (onesActive);
 
     if (tensActive)
         numbers [0].sprite = numberSprites [tens];
 
     if (onesActive)
         numbers [1].sprite = numberSprites [ones];
 }
 
 public void Update()
 {
     int random = UnityEngine.Random.Range (0, 100);
     SetNumber (random);
 }

The code could be adapted to support more digits. When profiling it in the editor there is a lot of garbage generation, around 1KB per frame, in Canvas.SendWillRenderCanvases(), because it forces a material rebuild each time a sprite is changed. However, I tested it on devices and it doesn't happen there, so it must be something related to the Unity editor.

Other strategies

Other strategies include minimizing garbage generation by reducing the text update frequency, for example, avoiding updating the text if the number didn't change and/or updating the text only from time to time instead of every frame.
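For example, a minimal sketch of the "only update when the value changed" idea (GetCurrentValue() is a placeholder for whatever number is being shown):

using UnityEngine;
using UnityEngine.UI;

// Touches the Text only when the value actually changes, so the allocation and the
// layout/vertex rebuild happen far less often than once per frame.
public class ChangeOnlyCounter : MonoBehaviour
{
    public Text text;
    int lastValue = int.MinValue;

    void Update()
    {
        int value = GetCurrentValue();
        if (value == lastValue)
            return;

        lastValue = value;
        text.text = value.ToString(); // still allocates, but only on changes
    }

    int GetCurrentValue()
    {
        // placeholder: FPS, score, or any other debug number
        return (int)(1f / Time.smoothDeltaTime);
    }
}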

Conclusion

Since I just wanted a solution for a framerate counter (and other debug numbers), the last solutions are perfect, and I believe they could even be extrapolated to other game needs, like showing the player's points in an arcade game, with a bit of extra thinking.

References

Here is a list of some articles, forum threads and blog posts I took a look at during the tests and while writing this post.

Unity memory optimizations article - https://unity3d.com/es/learn/tutorials/temas/performance-optimization/optimizing-garbage-collection-unity-games

Memory management reference - http://www.memorymanagement.org/

FPS implementation caching strings - http://catlikecoding.com/unity/tutorials/frames-per-second/

Using reflection to set StringBuilder string to avoid garbage - http://www.defectivestudios.com/devblog/garbage-free-string-unity/

FPS Asset - http://blog.codestage.ru/unity-plugins/fps/

Another FPS Asset - https://www.assetstore.unity3d.com/en/#!/content/6513

StringBuilder API - https://msdn.microsoft.com/en-us/library/system.text.stringbuilder(v=vs.110).aspx

Performance tips for Unity for mobile - https://divillysausages.com/2016/01/21/performance-tips-for-unity-2d-mobile/

Unity UI Source code - https://bitbucket.org/Unity-Technologies/ui

Unity Community Library - https://github.com/UnityCommunity/UnityLibrary


Making mockups and prototypes to minimize problems

I’m not inventing anything new here, I just want to share how making mockups and prototypes helped me to clarify and minimize some problems and in some cases even solve them with almost no cost.

For prototypes and mockups I'm using the Superpower Assets Pack of Sparklin Labs, which provided a great way to start visualizing a possible game. Thank you for that, guys.

I will start by talking about how I used visual mockups to quickly iterate multiple times over the layout of my game's user interface to remove or reduce a lot of unknowns and possible problems.

After that, I will talk about making quick small prototypes to validate ideas. One of them is about performing player actions with a small delay (to simulate networking latency) and the other one is about how to solve each player having different views of the same game world.

UI mockups

For the game I'm making, the player's actions were basically clear, but I didn't know exactly how the UI was going to be, and considering I have little experience making UIs, having a good UI solution is a big challenge.

In the current game prototype iteration, the players only have four actions: build unit, build barracks, build house and send all units to attack the other player. At the same time, to perform those actions, they need to know how much money they have, the available unit slots and how much each action costs.

To start solving this problem, I quickly iterated through several mockups, made directly in a Unity scene and using a game scene as background to test each possible UI problem case. For each iteration I built it to the phone and "tested it", detecting problems early, like "the buttons are too small" or "I can't see the money because I am covering it with my fingers", etc.

Why did I use Unity when I could have done it with any image editing application and just uploaded the image to the phone? Well, that's a good question. One answer is that I am more used to doing all this stuff in Unity and I already had the template scenes. The other answer is that I was testing, at the same time, whether the Unity UI solution supported what I was looking for, and I could even start testing interaction feedback, like how a button reacts when touched, whether the money turns red when there is not enough, etc., something I could not test with only images.

The following gallery shows screenshots of different iterations where I tested button positions, sizes, information and support for possible future player actions. I will not go into detail here because I don't remember exactly the order nor each test, but you can get an idea by looking at the images.

It took me less than two hours to go through more than 10 iterations, even testing visual feedback after discovering that the player should quickly know when some action is disabled because of money restrictions or because there are no unit slots available, etc. I even had to consider changing the scale of the game world to reserve more empty space for the UI.

Player actions through delayed network

When playing network games, one thing that was a possible issue in my mind is that the player should receive feedback instantly even though the real action could be delayed a bit to be processed on the server. In the case of a move-unit action in an RTS, the feedback could be just an animation showing the move destination while the action is processed later, but when the action involves consuming a resource, that could be a little tricky, or at least I wasn't sure, so I decided to make a quick test for that.

Similar to the previous case, I created a Unity scene in a separate project; I wanted to iterate really fast on this one. The idea to test was to have a way of processing part of the action on the client side, to validate the preconditions (enough money) and give the player instant feedback, and then process the action when it was due.

After analyzing it a bit, my main concern was the player experience of executing an action, receiving instant feedback, but watching the action being processed later, so I didn't need any networking-related code; I could test everything locally.

The test consisted of building white boxes with the right mouse button; each box costs $20 and you start with $100. The idea is that the moment the button is pressed, a white box with half opacity appears, giving the idea that the action was processed, and $20 is consumed, so you can't do another action that needs more money than you have left. After a while, the white box is actually built and the preview disappears.

Here is a video showing it in action:

In the case of a server validating the action, it would work similarly; the only difference is that the server could fail to validate the action (for example, if the other player stole money first), and in that case the action has to be cancelled. So the next test was to process that case (visually) to see how it looks and feels. The idea was similar to the previous case, but after a while the game returns the money and the box preview disappears.

Here is a video showing this case in action:

It shouldn't be a common case, but this is one idea of how it could be solved, and I don't think it is a bad one.
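For reference, the prototype logic boils down to something like the following sketch (the delay, the callbacks and the simulateRejection flag are placeholders; there is no real networking here):

using System;
using System.Collections;
using UnityEngine;

// Optimistic action: consume the money and show a preview immediately,
// then confirm the build or refund the money when the "server" answers.
public class DelayedBuildAction : MonoBehaviour
{
    public int money = 100;
    public int boxCost = 20;
    public float confirmDelay = 1.5f;
    public bool simulateRejection = false; // used to test the refund case

    public bool TryBuild(Vector2 position, Action<Vector2> showPreview,
                         Action<Vector2> build, Action<Vector2> cancelPreview)
    {
        if (money < boxCost)
            return false; // local validation, feedback is instant

        money -= boxCost;       // consume the resource right away
        showPreview(position);  // half-opacity box as instant feedback
        StartCoroutine(Confirm(position, build, cancelPreview));
        return true;
    }

    IEnumerator Confirm(Vector2 position, Action<Vector2> build, Action<Vector2> cancelPreview)
    {
        // simulate the server answer arriving later
        yield return new WaitForSeconds(confirmDelay);

        if (!simulateRejection)
        {
            build(position); // replace the preview with the real box
        }
        else
        {
            money += boxCost;        // give the money back
            cancelPreview(position); // remove the preview
        }
    }
}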

Different views of the same game world

The problem I want to solve here is how each player will see the world. Since I can't have a different world for each player, the idea is to have different views of the same world. In the case of 3D games, having different cameras should do the trick (I suppose), but I wasn't sure whether that worked the same way for a 2D game, so I had to make sure by building a prototype.

One thing to consider is that, in the case of the UI, each player should see their own actions in the same position.

For this prototype, I used the same scene background used for the mockups, but in this case I created two cameras, one of them rotated 180 degrees to show the opposite side:

Image: player 1 view.

Image: player 2 view.

Since the UI should be unique for each player, I configured one canvas per camera and used the culling masks to show one canvas or the other.
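The setup was roughly the following (a sketch with made-up layer names; the real scene was configured by hand in the editor):

using UnityEngine;

// Two cameras looking at the same world, one rotated 180 degrees,
// each one rendering only its own UI canvas through culling masks.
public class SplitViewSetup : MonoBehaviour
{
    public Camera player1Camera;
    public Camera player2Camera;
    public Canvas player1Canvas;
    public Canvas player2Canvas;

    void Awake()
    {
        // same world, opposite point of view for the second player
        player2Camera.transform.rotation = Quaternion.Euler(0f, 0f, 180f);

        // each camera renders the world plus only its own UI layer
        player1Camera.cullingMask = LayerMask.GetMask("Default", "UIPlayer1");
        player2Camera.cullingMask = LayerMask.GetMask("Default", "UIPlayer2");

        player1Canvas.renderMode = RenderMode.ScreenSpaceCamera;
        player1Canvas.worldCamera = player1Camera;
        player1Canvas.gameObject.layer = LayerMask.NameToLayer("UIPlayer1");

        player2Canvas.renderMode = RenderMode.ScreenSpaceCamera;
        player2Canvas.worldCamera = player2Camera;
        player2Canvas.gameObject.layer = LayerMask.NameToLayer("UIPlayer2");
    }
}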

Again, this test was really simple and quick, I believe I spent about 30 minutes on it; the important thing is that now I know this is a possible (and probably the) solution for this problem, while before the test I wasn't sure whether it was a hard problem or not.

Conclusions

One good thing about making prototypes is that you can take a lot of shortcuts or assume stuff, since the code and assets are not going to be in the game; that gives you really fast iteration times as well as focus on one problem at a time. For example, when testing the mockups I added all assets to one folder and used the Unity sprite packer without spending time on which texture format I should use for each mobile platform, or whether all assets fit in one texture, or stuff like that.

Making quick prototypes, early, of things you don't know how to solve gives you a better view of the scope of the problems, and it is better to know that as soon as possible. Sometimes you get a lead on how to solve them and how expensive the solution could be, and that gives you a good idea of when you have to attack that problem or whether you want it at all (for example, dropping a specific feature). If you can't figure out possible solutions even after making prototypes, then that feature is even harder than you thought.

Prototyping is cheap and fun because it provides a creative platform where the focus is on solving one problem without a lot of restrictions, which allows developing multiple solutions, and since it is a creative process, everyone in game development (game designers, programmers, artists, etc.) can participate.


Delegating responsibilities from the engine to the game

When building a platform for a game, I tend to spend too much time thinking of solutions for every possible problem, in part because it is fun and a good exercise, and in part because I am not sure which of those problems could affect the game. The idea of this post is to share why I believe it is sometimes better to delegate problems and solutions to the game instead of solving them in the engine.

In the case of the game state, I was trying to create an API on the platform side to be used by the game to easily represent it. I started by having a way to collaborate with the game state by storing values in it, like this:

public interface GameState {
    void storeInt(string name, int number);
    void storeFloat(string name, float number);
}

In order to use it, a class on the game side has to implement an interface which allows it to collaborate in part of the game state data:

public class MyCustomObject : GameStateCollaborator
{
    public void Collaborate(GameState gameState){
        gameState.storeFloat("myHealth", 100.0f);
        gameState.storeInt("mySpeed", 5);
    }
}

It wasn't bad in the first tests, but when I tried to use it in a more complex situation the experience was a bit cumbersome since I had more data to store. I even felt like I was trying to recreate a serialization system, and that wasn't the idea of this API.

Since I have no idea what the game wants to save or even how it wants to save it, I changed the paradigm a bit. The GameState is now more of a concept without implementation; that part is decided on the game side.

public interface GameState {
     
}

So after that change, the game has to implement the GameState and each game state collaborator will have to depend on that custom implementation, like this:

public class MyCustomGameState : GameState  
{
    public int superImportantValueForTheGame;
    public float anotherImportantValueForTheGame;
}

public class MyCustomObject : GameStateCollaborator
{
    public void Collaborate(GameState gameState)
    {
        var myCustomGameState = gameState as MyCustomGameState;
        myCustomGameState.anotherImportantValueForTheGame = 100.0f;
        myCustomGameState.superImportantValueForTheGame = 5;
    }
}

In this way, I don't know or care how the game wants to store its game state; what I know is that there is a concept of GameState that is responsible for providing the information the platform needs to make features like a replay or savegame work. For example, at some point I could have something like:

public interface GameStateSave 
{
    void Save(GameState gameState);
}

And let the game decide how to save its own game state, even though the engine is responsible for how and when that interface is used to perform the task.
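For example, a hypothetical game-side implementation could be as simple as this (assuming MyCustomGameState is marked as serializable; the storage choice is just an illustration, not part of the engine):

using UnityEngine;

// The game decides the format and the destination; the engine only knows the interfaces.
public class MyCustomGameStateSave : GameStateSave
{
    public void Save(GameState gameState)
    {
        var state = gameState as MyCustomGameState;
        // here simply JSON into PlayerPrefs, but it could be a file, a server, etc.
        PlayerPrefs.SetString("savegame", JsonUtility.ToJson(state));
        PlayerPrefs.Save();
    }
}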

In the end, the platform/engine ends up being more like a framework, providing tools or life cycles to the user (the game) to easily do some work in a common way.


The story of the non deterministic Replay

This is the story of how I discovered my simplified replay system wasn't as deterministic as I believed because I had an ugly bug; read the post if you want to know where exactly.

While integrating the lockstep engine into the game I am working on, I decided to build something to save and load replays, to be able to easily reproduce some bugs I was experiencing. After I had that done and working, it was pretty awesome to see I could replay the same game multiple times (food for another blog post, by the way). However, I thought it could be fun and easy to play them faster, why not.

Since I have fixed timestep logic, it should be pretty straightforward: simply use a multiplied time, then the fixed timestep logic would do all the work and the game logic shouldn't notice the change. I decided to give it a try and it worked… almost. When playing the replays at higher speeds I noticed some visual differences, but I wasn't totally sure (it could have been the interpolation code).

To verify, I went back to the test project, where I had the moving box, and tested it there, but I needed some way to be sure. Since I already have a way to calculate checksums of the game state, I used that to verify the game states when playing replays at different speeds (from 2x to 16x).

It failed; even though only some frames failed to validate, the following frames were not necessarily invalid (this is something important to consider).

Image 1: It shows one of the best tools in the world to check game states when replaying a game.

So, I was right: I saw some differences and I could be sure that something was happening. The thing was, with only the checksums I couldn't know what the real difference was. Next step: making something to detect it.

In order to do that, I changed the replay to store (at least for debugging) the full game state, not only the checksum, and to check for differences between the stored game states in the replay and the current game state (when replaying the game) whenever checksum validation fails. It worked too; now I have the exact place where the differences are.


Image 2: It shows why serializing all the game state in one string is the best thing to do in your life.

After testing it a bit, I noticed another curious thing: the game validation wasn't always failing given the same replay and the same speed. That gave me a hint that the problem was probably not related to the game code itself (the moving box).

So, if I played the replay at 1x, it was validated properly. If I played it at 8x, it failed most of the time, but not always. It seemed there was something related to speed I didn't understand yet.

I decided to test the same replay but with the Unity timescale modified. My first test was using 1x for the replay but 5x for the timescale: validation failed. Then the opposite, 10x for the replay but 0.1x for the timescale, and it worked well. So the problem seemed to be related to my accumulator logic inside the fixed timestep logic?

Some test cycles later, it turned out that it was indeed a bug in one of the core classes of the engine!

The problem was in my LockstepFixedUpdate class: the first version overrode the Update() method and performed the lockstep logic there, which worked ok if at most one fixed update was processed, but when a big delta time arrived, it only processed the lockstep logic once, for the first fixed update, and never again.

That means that, if replay commands were to be processed in frame 3, we were in frame 1 and a big dt of 10 frames arrived, then the lockstep logic checked only in frame 1 and never again until all 10 frames were processed. This bug even bypasses the lockstep!

Since I had made a test to replicate the bug, it was really easy to fix: I changed it to process the lockstep logic with each fixed step update, and it works fine now. I have high speed replays!! YEAH!!
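In pseudo-C#, the fix looks roughly like this (names are illustrative, not the real LockstepFixedUpdate API):

// The lockstep check runs once per fixed step consumed from the accumulator,
// not once per Update(), so no scheduled command frame can be skipped.
public class LockstepFixedUpdateSketch
{
    public float fixedStepTime = 0.02f;
    float accumulator;
    int currentFrame;

    public void Update(float deltaTime)
    {
        accumulator += deltaTime; // a big deltaTime may contain several fixed steps

        while (accumulator >= fixedStepTime)
        {
            accumulator -= fixedStepTime;

            // this was the bug: in the first version this check ran only once per Update()
            ProcessLockstep(currentFrame);
            FixedStepUpdate(fixedStepTime);

            currentFrame++;
        }
    }

    void ProcessLockstep(int frame) { /* run commands scheduled for this frame, or wait */ }
    void FixedStepUpdate(float dt) { /* advance the deterministic simulation one step */ }
}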

Conclusion

In the process of finding this bug I started to expand the engine's support code and create better tools; this is really important if I want to build something solid over it.

The only way to detect issues as soon as possible is to iterate on the engine as soon as possible, and to do that, use cases are needed, and games provide the best use cases. In my case, I am using not only the game I am trying to make but also other similar games as references when deciding what I want and how I want to test it: for example, having replays, being able to play a replay at different speeds, being able to save replays, etc. Also, being able to replicate a bug in a small test case where you can iterate quickly to fix it is super useful.

Detecting (and having) problems like this in a small and simple game gives an idea of the complexity of a medium to big game, with all its variables and difficulties; it is not something to underestimate, so when developers say they couldn't add multiplayer features to their game because it was really hard to do, it is not a lie.

I love all of this stuff, even though I understand it is not an easy path.

To complete this post, here is a video showing a prototype of how I load and play a replay which was created by playing with two players in LAN, one was my computer and the other was my phone:

The quote of the day is 'Fail as much as possible, as soon as possible to avoid failing when it is too late'.

Hope you enjoyed the journey.


Our solution to handle multiple screen sizes in Android – Part three

In the previous posts of this series we talked about our solution for handling multiple screen sizes in game menus; in particular, we showed the main menu of the game Clash of the Olympians. In this post we are going to talk about what we did inside the game itself. As a side note, the solution we used here is simple and specific to this game; we hope it helps as an example, but don't expect a silver bullet.

Scaling to match the physics world

As we use Box2D in Clash of the Olympians, the first step was to use a proper scale between Box2D bodies and our assets. The basic approach was to consider that 1m (meter in the MKS system) was 32px, so our target resolution of 800x480 could show 25m x 15m. We picked that scale because it gives pretty numbers both in terms of the game area and in terms of our assets; for example, a character 64px high is 2m tall. In particular, Achilles has a height of approximately 60px, which is equivalent to 1.875m using our scale, and that sounds pretty reasonable for that character.

Image: the relation between the screen size in pixels (800x480 in this case) and the game world in meters.

Defining a virtual area to show

We previously said that we could show 25m x 15m. In fact, the height is not that important in Clash of the Olympians, since the game depends mainly on horizontal distance. So, on an imaginary device with a resolution of 800x400 (really wide, an aspect ratio of 2) we would show 12.5m of height, and we can assume that as long as we show at least that height the game balance is not affected at all (enemies are never spawned too high); for reference, an 800x600 device at a 32px/m scale shows 600 / 32 = 18.75m of height, well above that minimum. However, in terms of horizontal distance we always want to show the same area across all devices to avoid changing the game balance (for example, if you could see less area you couldn't react in time to some waves), which is why we decided to always show 25m of width.

Image: we still show the same game world width of 25m on an 800x600 device.

Scaling the world back to match the screen size

Finally, in order to show this virtual area of 25m x H (with H >= 12.5m), we have to calculate the proper scale for our game camera on each device. For example, on a Nexus 7 (a 1280x720 device) the scale needed to show 25m of horizontal size is 51.2x, since 1280 / scale = 25 implies scale = 1280 / 25 = 51.2. On a Samsung Galaxy Y (a 480x320 device) the scale would be 19.2x, since 480 / 25 = 19.2. Translating this inside the game is as easy as:

camera.scale = screen.width / 25
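
With LibGDX (which we use in Clash of the Olympians) that could look roughly like the snippet below; note this is just a sketch of the idea, not the exact game code:

// Keep the visible width fixed at 25 world meters; the visible height
// then follows from the device's aspect ratio.
float worldWidth = 25f;
float scale = Gdx.graphics.getWidth() / worldWidth; // pixels per meter, e.g. 1280 / 25 = 51.2
float worldHeight = Gdx.graphics.getHeight() / scale; // e.g. 720 / 51.2 = 14.0625

OrthographicCamera camera = new OrthographicCamera();
camera.setToOrtho(false, worldWidth, worldHeight);
camera.update();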

Final thoughts

This is not a general solution; it depends a lot on the game we were making and on the things we could assume, like the fact that the visible game height doesn't matter much.

Even though the solution is specific and not as cool as the ones in the previous posts, we hope it can be of help when making your own game.


Our resources manager and how it helped in localizing Clash of the Olympians

In this post we want to share how our resources manager implementation helped us when we had to localize Clash of the Olympians.

Our resources manager

Some time ago we started a small Java project named jresourcesmanager (yeah, the most creative name in the world) which provides a simple abstraction of what a resource is and how it is loaded. It consists of a few basic concepts named Resource, DataLoader and ResourceManager.

Resource

It represents an application resource/asset, and provides an API to know whether the resource is loaded and to load and unload it. Here is the code:

public class Resource<T> {

	T data = null;

	DataLoader<T> dataLoader;

	protected Resource(DataLoader<T> dataLoader) {
		this(dataLoader, true);
	}

	protected Resource(DataLoader<T> dataLoader, boolean deferred) {
		this.dataLoader = dataLoader;
		if (!deferred)
			reload();
	}

	/**
	 * Returns the data stored by the Resource.
	 */
	public T get() {
		if (!isLoaded())
			load();
		return data;
	}
	
	public void set(T data) {
		this.data = data;
	}

	public DataLoader<T> getDataLoader() {
		return dataLoader;
	}
	
	public void setDataLoader(DataLoader<T> dataLoader) {
		unload();
		this.dataLoader = dataLoader;
	}

	/**
	 * Reloads the internal data by calling unload() and load().
	 */
	public void reload() {
		unload();
		load();
	}

	/**
	 * Loads the data if it wasn't loaded yet, it does nothing otherwise.
	 */
	public void load() {
		if (!isLoaded())
			data = dataLoader.load();
	}

	/**
	 * Unloads the data by calling the DataLoader.unload(t) method.
	 */
	public void unload() {
		if (isLoaded()) {
			dataLoader.unload(data);
			data = null;
		}
	}

	/**
	 * Returns true if the data is loaded, false otherwise.
	 */
	public boolean isLoaded() {
		return data != null;
	}

	public Resource<T> clone() {
		return new Resource<T>(dataLoader);
	}

}

DataLoader

It provides an API to define how to load and unload a Resource. Here is the code:

public abstract class DataLoader<T> {

	/**
	 * Implements how to load the data.
	 */
	public abstract T load();

	/**
	 * Implements how to unload the data, in case it is not automatic and you need to unload stuff by hand.
	 */
	public void unload(T t) {

	}

	/**
	 * Provides a way to return custom information about the data loader.
	 * 
	 * @return An object with the custom information.
	 */
	public Object getMetaData() {
		return null;
	}

}
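
As an example of how a concrete loader could look (this one is illustrative, it is not part of the library), a DataLoader for a LibGDX Texture might be:

public class TextureDataLoader extends DataLoader<Texture> {

	private final String path;

	public TextureDataLoader(String path) {
		this.path = path;
	}

	@Override
	public Texture load() {
		return new Texture(Gdx.files.internal(path));
	}

	@Override
	public void unload(Texture texture) {
		// textures are native resources, so they must be disposed by hand
		texture.dispose();
	}

}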

ResourceManager

It is just a map of all the resources identified by a key (a String, for example), plus an API to store resources into it and get them back.
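
Conceptually it could be as small as this (a sketch of the idea, not the exact API of jresourcesmanager):

public class SimpleResourceManager<K> {

	private Map<K, Resource<?>> resources = new HashMap<K, Resource<?>>();

	public <T> void add(K id, Resource<T> resource) {
		resources.put(id, resource);
	}

	@SuppressWarnings("unchecked")
	public <T> Resource<T> get(K id) {
		return (Resource<T>) resources.get(id);
	}

}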

That is our resources management code. It is really simple, and it fulfilled our asset loading/unloading needs for Vampire Runner and Clash of the Olympians.

The interesting part of this library is that it lets you store whatever you want to treat as a resource/asset, and that flexibility is what we used to solve the localization issue.

Declaring resources

Another important pillar is being able to declare resources easily in code, and we do that with builders that simplify the resource declaration. As we were using the LibGDX library, we created resource builders for LibGDX assets and more. These builders allowed us to do stuff like this:

splitLoadingTextureAtlas("MainTextureAtlas", "data/images/screens/mainmenu/pack");
resource("MainBackgroundTop", sprite2().textureAtlas("MainTextureAtlas", "mainmenu-bg", 1).center(0.5f, 0.5f).trySpriteAtlas());

That declares a texture atlas named MainTextureAtlas and then a resource named MainBackgroundTop, which is a sprite taken from the texture atlas resource identified by the previous name. It also allows us to do things like declare that the sprite is centered in the middle (its anchor point) and more.

This code is not important, just an example of some of our builders.

Declaring localized resources

Since we have the power to create complex resource builders and we needed a simple way to switch between assets depending on the selected language, we created a resource builder that allowed us to declare different resources, depending on the current locale, under the same identifier. So, for example, we can do something like this:

resource("CreditsButton", new MultilanguageResourceBuilder<Sprite>() //
	.defaultLocale(new Locale("en")) //
	.resource(new Locale("en"), sprite2().textureAtlas(TextureAtlases.MeinMenu, "mainmenu-but-credits-en", 1).trySpriteAtlas()) //
	.resource(new Locale("es"), sprite2().textureAtlas(TextureAtlases.MeinMenu, "mainmenu-but-credits-es", 1).trySpriteAtlas()) //
			);

That declares a resource named CreditsButton which returns a sprite with index 1 and name “mainmenu-but-credits-en” from a texture atlas when the locale is English, and a sprite with index 1 and name “mainmenu-but-credits-es” when the locale is Spanish. It also declares that the default locale (used when a resource for the current locale isn't found) is English.

This resource builder was very handy because it allowed us to declare any type of resource for different locales, and from the application side it was transparent: we just ask for the resource CreditsButton.

In case you are interested, the code of MultilanguageResourceBuilder is:

public class MultilanguageResourceBuilder<T> implements ResourceBuilder<T> {

	private Map<Locale, ResourceBuilder<T>> resourceBuilders = new HashMap<Locale, ResourceBuilder<T>>();
	private Locale defaultLocale;

	@Override
	public boolean isVolatile() {
		if (defaultLocale == null)
			throw new IllegalStateException("Multilanguage resource builder needs a default locale");
		ResourceBuilder<T> resourceBuilder = resourceBuilders.get(defaultLocale);
		if (resourceBuilder == null)
			throw new IllegalStateException("Multilanguage resource builder needs a default resource builder");
		return resourceBuilder.isVolatile();
	}

	public MultilanguageResourceBuilder<T> defaultLocale(Locale locale) {
		this.defaultLocale = locale;
		return this;
	}

	public MultilanguageResourceBuilder<T> resource(Locale locale, ResourceBuilder<T> resourceBuilder) {
		resourceBuilders.put(locale, resourceBuilder);
		return this;
	}

	@Override
	public T build() {
		Locale locale = Locale.getDefault();

		if (defaultLocale == null)
			throw new IllegalStateException("Multilanguage resource builder needs a default locale");

		if (!resourceBuilders.containsKey(locale))
			locale = defaultLocale;

		ResourceBuilder<T> resourceBuilder = resourceBuilders.get(locale);

		if (resourceBuilder == null)
			throw new IllegalStateException("Multilanguage resource builder needs a default resource builder");

		return resourceBuilder.build();
	}

}

Conclusion

That is how we supported multiple languages in a way that is transparent to the application: we just need to change the current locale and reload the assets.
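
In practice, switching languages could look roughly like the snippet below (a hypothetical sketch: the resourceManager name and the wiring between the builder and the Resource's DataLoader are assumptions, not the library's exact API):

// Switch the default locale and reload the resource so its
// MultilanguageResourceBuilder builds the Spanish version next time.
Locale.setDefault(new Locale("es"));
Resource<Sprite> creditsButton = resourceManager.get("CreditsButton"); // hypothetical manager lookup
creditsButton.reload(); // unload() + load(), rebuilding the asset for the new locale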

Hope this post helps you if you are about to support multiple languages in your game, and see you next time.
