Implementing Fog of War for RTS games in Unity 2/2

As I said in the previous blog post, some time ago I started working on a new Fog of War / Vision System solution aiming for the following features:

  • Being able to render each player's fog of war at any time, for replays and debugging.
  • Being able to combine multiple players' visions for alliances, spectator mode or replays.
  • Blocking vision with different terrain heights or other elements like bushes.
  • Optimized to support 50+ units at the same time on mobile devices at 60 FPS.
  • It should look similar to modern games like Starcraft 2 and League of Legends (in fact, SC2 is already eight years old, not sure if that is considered a modern game or not :P).

This is an example of what I want:

Example: fog of war in Starcraft 2

To simplify writing this article, when I write unit I mean not only units but also structures or anything else that can affect the fog of war in the game.

Logic

First, there is a concept named UnitVision used to represent anything that reveals fog. Here is the data structure:

struct UnitVision
{
   // A bit mask representing a group of players inside
   // the vision system.
   public int players;

   // The range of the vision (in world coordinates).
   public float range;

   // The position (in world coordinates).
   public Vector2 position;

   // Used for blocking vision.
   public short terrainHeight;
}

Normally a game will have one per unit, but there are cases where a unit could have more (a large unit, for example) or even none.

A bit mask is used to specify a group of players: for example, if player0 is 0001 and player1 is 0010, then 0011 means the group formed by player0 and player1. Since it is an int, it supports up to 32 players (one bit per player).
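For illustration, here is how those masks can be built and combined in C# (the constant and method names are mine, not part of the system):

const int Player0 = 1 << 0; // 0001
const int Player1 = 1 << 1; // 0010

int alliance = Player0 | Player1; // 0011: the group formed by both players

// a player belongs to a group when the bitwise AND is non-zero
bool IsInGroup(int group, int playerMask)
{
    return (group & playerMask) != 0;
}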

Most of the time the group will contain only one player, but there are situations, like a global effect or a cinematic, that need to be seen by all players; one possible solution for those is a UnitVision with more than one player.

The terrainHeight field stores the current height of the unit and is used to decide whether vision is blocked. For a ground unit it will normally be the terrain's height at the unit's position, but there are cases like flying units or special abilities that change the unit's height, and those should be considered when calculating blocked vision. It is the game's responsibility to keep that field up to date.

There is another concept named VisionGrid that represents the vision for all the players. Here is the data structure:

struct VisionGrid
{
    // the width and height of the grid (needed to access the arrays)
    public int width, height;

    // array of size width * height, each entry has an int with the
    // bits representing which players have this entry in vision.
    public int[] values;

    // similar to values, but it stores whether a player visited
    // that entry at some point in time.
    public int[] visited;

    public void SetVisible(int i, int j, int players) {
        values[i + j * width] |= players;
        visited[i + j * width] |= players;
    }

    public void Clear() {
        System.Array.Clear(values, 0, values.Length);
    }

    public bool IsVisible(int i, int j, int players) {
        return (values[i + j * width] & players) > 0;
    }

    public bool WasVisible(int i, int j, int players) {
        return (visited[i + j * width] & players) > 0;
    }
}

Note: arrays have a size of width * height.

The bigger the grid, the slower vision is to calculate, but it also holds more information, which can be useful for unit behaviors or better fog rendering; the smaller the grid, the opposite. A good balance must be defined from the beginning, since the game is built over that decision.

Here is an example of a grid over the game world:

World + Grid

Given a grid entry for a world position, the structure stores an int in the values array with the data of which players have that position inside vision. For example, if the entry has 0001 stored, only player0 sees that point; if it has 0011, both player0 and player1 see it.

The structure also records, in the visited array, whether a player revealed that entry at some point in the past. This is used mainly for rendering purposes (gray fog) but could also be used by the game logic (for example, to check if a player already knows some piece of information).

The method IsVisible(i, j, players) returns true if any of the players in the bit mask has the position visible. The method WasVisible(i, j, players) is similar but checks the visited array.

So, for example, if player1 and player2 (0010 and 0100 in bits) are in an alliance, when player2 wants to know if an enemy is visible in order to attack it, it can call the IsVisible method with the bitmask for both players, 0110.

Calculating vision

Each time the vision grid is updated, the values array is cleared and recalculated.

Here is a pseudo code of the algorithm:

void CalculateVision()
{
   visionGrid.Clear()

   for each unitVision in world {
      for each gridEntry inside unitVision.range {
         if (not IsBlocked(gridEntry, unitVision)) {
            // SetVisible updates both the values and
            // the visited arrays.
            visionGrid.SetVisible(gridEntry.i, gridEntry.j,
                                  unitVision.players)
         }
      }
   }
}

To iterate over the grid entries inside range, the system first calculates the vision's position and range in grid coordinates, named gridPosition and gridRange, and then draws a filled circle of radius gridRange around gridPosition.
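Here is a minimal sketch of that filled-circle iteration. The WorldToGrid helper and cellSize value are assumptions of mine, and the blocked-vision check described below is omitted for clarity:

void DrawVision(VisionGrid grid, UnitVision vision)
{
    // convert the vision to grid coordinates (WorldToGrid and
    // cellSize are assumed, they are not part of the original post)
    Vector2Int gridPosition = WorldToGrid(vision.position);
    int gridRange = Mathf.RoundToInt(vision.range / cellSize);

    // iterate the bounding box, keeping only entries inside the circle
    for (int i = gridPosition.x - gridRange; i <= gridPosition.x + gridRange; i++) {
        for (int j = gridPosition.y - gridRange; j <= gridPosition.y + gridRange; j++) {
            if (i < 0 || j < 0 || i >= grid.width || j >= grid.height)
                continue;
            int dx = i - gridPosition.x;
            int dy = j - gridPosition.y;
            if (dx * dx + dy * dy > gridRange * gridRange)
                continue; // outside the filled circle
            grid.SetVisible(i, j, vision.players);
        }
    }
}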

Blocked vision

In order to detect blocked vision, there is another grid of the same size with the terrain's height information. Here is its data structure:

struct Terrain
{
    // the width and height of the grid (needed to access the array)
    public int width, height;

    // array of size width * height with the terrain level of each
    // grid entry.
    public short[] heights;

    public short GetHeight(int i, int j) {
       return heights[i + j * width];
    }
}

Here is an example of how the grid looks over the game:

Note: that image is handmade as an example, there might be some mistakes.

While iterating over the grid entries around the unitVision's range, to decide whether an entry is visible, the system checks that there are no obstacles between the entry and the vision's center. To do that, it draws a line from the entry's position to the center's position.

If all the grid entries in the line are at the same height or below, then the entry is visible. Here is an example where the blue dot represents the entry being calculated and the white dots the line to the center.

If at least one entry in the line is at a greater height, then the line of sight is blocked. Here is an example where the blue dot represents the entry we want to evaluate, white dots represent entries in the line at the same height and red dots represent entries on higher ground.

Once the system detects one entry above the vision's height, it doesn't need to continue drawing the line to the vision's center.

Here is a pseudo algorithm:

bool IsBlocked(gridEntry, unitVision)
{
   for each entry in line from gridEntry to unitVision.position {
      height = terrain.GetHeight(entry.i, entry.j)
      if (height > unitVision.terrainHeight) {
         return true;
      }
   }
   return false;
}
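A common way to enumerate the grid entries of that line is Bresenham's line algorithm. Here is a sketch of IsBlocked using it, with explicit coordinates instead of the pseudocode's entry objects; the terrain field is the Terrain grid from above, and this is my own version, not the post's original code:

bool IsBlocked(int x0, int y0, int x1, int y1, short visionHeight)
{
    // walks the line from the entry (x0, y0) towards the vision
    // center (x1, y1) using Bresenham's line algorithm
    int dx = Mathf.Abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -Mathf.Abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;

    while (true) {
        if (terrain.GetHeight(x0, y0) > visionHeight)
            return true; // higher ground blocks the line of sight
        if (x0 == x1 && y0 == y1)
            return false; // reached the center without obstacles
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }
        if (e2 <= dx) { err += dx; y0 += sy; }
    }
}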

Optimizations

  • If an entry was already marked as visible while iterating over all the unit visions, there is no need to recalculate it.
  • Reduce the size of the grid.
  • Update the fog less frequently (Starcraft, which is an old game, has a delay of about one second; I noticed it recently while playing).

Rendering

To render the Fog of War, I first have a small texture of the same size as the grid, named FogTexture, where I write a Color array of the same size using the Texture2D.SetPixels() method.

Each frame, I iterate over each VisionGrid entry and set the corresponding Color in the array using the values and visited arrays. Here is a pseudo algorithm:

void Update()
{
   for i, j in grid {
       pixel = i + j * width
       colors[pixel] = black
       if (visionGrid.IsVisible(i, j, activePlayers))
           colors[pixel] = white
       else if (visionGrid.WasVisible(i, j, activePlayers))
           colors[pixel] = grey // previously revealed territory
   }
   texture.SetPixels(colors)
   texture.Apply()
}

The activePlayers field contains a bit mask of players and is used to render the current fog of those players. It will normally contain just the main player during the game, but in situations like replay mode it can change at any time to render a different player's vision.

In the case that two players are in an alliance, a bitmask for both players can be used to render their shared vision.

After filling the FogTexture, it is rendered into a RenderTexture using a Camera with a post-processing blur filter to make it look better. The RenderTexture is four times bigger in order to get a better result when applying the post-processing effects.
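For reference, a similar upscale-plus-blur step could be sketched with Graphics.Blit and a blur material instead of a dedicated camera; this is a simplification, not the exact setup described above:

// fogTexture is the small Texture2D, fogRenderTexture is the
// 4x bigger RenderTexture, blurMaterial is any blur shader material
void RenderFogTexture(Texture2D fogTexture, RenderTexture fogRenderTexture, Material blurMaterial)
{
    // upscales the small texture into the render texture while
    // the blur material smooths the result
    Graphics.Blit(fogTexture, fogRenderTexture, blurMaterial);
}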

Once I have the RenderTexture, I render it over the game with a custom shader that treats the image as an alpha mask (white is transparent and black is opaque, or red in this case since I don't need the other color channels), similar to what we did for Iron Marines.

Here is how it looks:

Fog texture over the world in game view.

And here is how it looks in Unity's Scene View:

Fog texture over the world in scene view.

The render process is something like this:

Easing

There are some cases where the fog texture changes dramatically from one frame to the next, for example when a new unit appears or when a unit moves to higher ground.

For those cases, I added easing on the colors array, so each entry transitions in time from the previous state to the new one in order to soften the change. It was really simple; it added a bit of performance cost when processing the texture pixels, but in the end the result was so much better that I preferred to pay that extra cost (and it can be disabled at any time).
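The easing itself can be as simple as interpolating each pixel towards its target color every frame. A sketch, where targets and easingSpeed are illustrative names of mine:

// 'targets' holds the colors computed from the vision grid this
// frame; 'colors' holds what is currently written to the texture
void EaseColors(Color[] colors, Color[] targets, float deltaTime)
{
    float t = deltaTime * easingSpeed; // easingSpeed is a tuning value
    for (int k = 0; k < colors.Length; k++)
        colors[k] = Color.Lerp(colors[k], targets[k], t);
}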

At first I wasn't sure about writing pixels directly to a texture since I thought it would be slow, but after testing on mobile devices it is quite fast, so it shouldn't be an issue.

Unit visibility

To know if a unit is visible, the system checks all the entries the unit occupies (big units can occupy multiple entries); if at least one of them is visible, then the unit is visible. This check is useful to know if a unit can be attacked, for example.

Here is a pseudo code:

bool IsVisible(players, unit)
{ 
    // it is a unit from one of the players
    if ((unit.players & players) > 0)
        return true;

    // returns all the entries where the unit is contained
    entries = visionGrid.GetEntries(unit.position, unit.size)

    for (entry in entries) {
        if (visionGrid.IsVisible(entry, players)) 
            return true;
    }

    return false;
}

Which units are visible is tied to the fog being rendered, so we use the same activePlayers field to decide whether to show or hide a unit.

To avoid rendering hidden units I followed an approach similar to what we did for Iron Marines, using the GameObject's layer: if the unit is visible, its GameObject gets the default layer; if not, it gets a layer that is culled from the game camera.

void UpdateVisibles() {
    for (unit in units) {
        unit.gameObject.layer = IsVisible(activePlayers, unit) ?
                defaultLayer : hiddenLayer;
    }
}

Finally

This is how everything looks working together:

Conclusion

Simplifying the world into a grid and thinking in terms of textures made it easier to apply different kinds of image algorithms, like drawing a filled circle or a line, which were really useful when optimizing. There are even more image operations that could be used for the game logic and rendering.

SC2 keeps a lot of information in terms of textures, not only the player's vision, and they provide an API to access it which is being used for machine learning experiments.

I am still working on more features and I plan to try some optimization experiments, like using the C# Job System. I am really excited about that one, but I first have to transform my code to make it work. I would love to write about that experiment.

Using a blur effect for the fog texture has some drawbacks, like revealing a bit of higher ground when it shouldn't. I want to research other image effects where the black color is not modified when blurring, though I am not sure if that is possible or if it is the proper solution. One thing I do want to try is an upscale technique like the one League of Legends uses when creating the fog texture, and then reduce the applied blur, all of this to minimize the issue.

After writing this blog post and having to create some images to explain concepts, I believe it would be great to add more debug features, like showing the vision or terrain grid at any time, or showing a line from one point to another to explain where vision is blocked and why, among other things. That could be useful at some point.

This blog post was really hard to write since, even though I am familiar with the logic, it was hard to pick the proper concepts to explain it clearly. In the end, I feel like I forgot to explain something, but I can't tell exactly what.

As always, I really hope you enjoyed it, and it would be great to hear your feedback to improve this kind of blog post.

Thanks for reading!


Implementing Fog of War for RTS games in Unity 1/2

For the last three years, I've been working on Iron Marines at Ironhide Game Studio, a real-time strategy game for mobile devices. During its development, we created a Fog of War solution that works pretty well for the game, but it lacks some of the common features other RTS games have, and how to improve on that is something I wanted to learn at some point in my life.

Recently, after reading a Riot Games Engineering blog post about Fog of War in League of Legends, I got motivated and started prototyping a new implementation.

In this blog post I will explain Iron Marines' Fog of War solution in detail, and then I will write another blog post about the new solution and explain why I consider it better than the first one.

Fog of War in Strategy Games

It normally represents missing information about the battle: for example, not knowing the terrain yet, or outdated information, such as the old position of an enemy base. Player units and buildings provide vision that removes Fog during the game, revealing information about the terrain and the current location and state of the enemies.

Example: Dune 2 and its Fog of War representing the unknown territory (by the way, you can play Dune 2 online).

Example: Warcraft: Orcs and Humans' Fog of War (it seems you can play Warcraft online too).

The concept of Fog of War has been used in strategy games for more than 20 years now, which is a lot for video games.

Process

We started by researching other games and deciding what we wanted before implementing anything.

After that, we decided to target a solution similar to Starcraft's (by the way, it is free to play now, just download Battle.net and create an account). In that game, units and buildings have a range of vision that provides vision to the Player. Unexplored territory is covered with full-opacity black fog, while previously explored territory is covered by half-opacity fog, revealing what the Player knows about it, information that doesn't change during the game.

Enemy units and buildings are visible only while they are inside the Player's vision, but buildings leave a last-known-location marker after they are no longer visible. I believe the main reason is that buildings normally can't move (with the exception of some Terran buildings), so it is logical to assume they will stay in that position after losing vision, and that might be vital information about the battle.

Iron Marines

Given those rules, we created mock images to see how we wanted the game to look before implementing anything.

Mock Image 1: Testing terrain with different kind of Fog in one of the stages of Iron Marines.

Mock Image 2: Testing now with enemy units to see when they should be visible or not.

Mock Image 3: Just explaining what each Fog color means for the Player's vision.

We started by prototyping the logic to see if it works for our game or not and how we should adapt it.

For that, we used an int matrix representing a discrete version of the game world to store the Player's vision. A matrix entry with value 0 means the Player has no vision at that position, and a value of 1 or greater means it has.

Image: in this matrix there are 3 visions, and one has greater range.

Units and buildings' visions increment by 1 the value of all entries representing world positions inside their vision range. Each time they move, we first decrement 1 at the previous position and then increment 1 at the new one.
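A sketch of that update, assuming the unit's range was already converted to grid cells; ApplyVision and the cell parameters are illustrative names of mine:

// delta is +1 when a vision enters a cell and -1 when it leaves
void ApplyVision(int[,] vision, Vector2Int center, int range, int delta)
{
    for (int i = center.x - range; i <= center.x + range; i++) {
        for (int j = center.y - range; j <= center.y + range; j++) {
            if (i < 0 || j < 0 || i >= vision.GetLength(0) || j >= vision.GetLength(1))
                continue;
            vision[i, j] += delta;
        }
    }
}

// when a unit moves to another cell:
// ApplyVision(playerVision, previousCell, unit.range, -1);
// ApplyVision(playerVision, currentCell, unit.range, +1);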

We have one matrix for each Player; it is used for showing or hiding enemy units and buildings and for auto-targeting abilities that can't fire outside the Player's vision.

To determine if an enemy unit or building is visible, we first get the corresponding matrix entry by transforming its world position, and check if the stored value is greater than 0. If it isn't, we change its GameObject layer to one that is culled from the main camera to avoid rendering it; we named that layer "hidden". If it is visible, we change it back to the default layer, so it is rendered again.

Image: shows how enemy units are not rendered in the Game view. I explain later why buildings are rendered even outside the Player's vision.

Visuals

We started by just rendering a black or grey quad over the game world for each matrix entry; here is an image showing how it looks (the only one I found in the chest of memories):

This allowed us to prototype and decide on some features we didn't want. In particular, we avoided blocking vision with obstacles like mountains or trees, since we preferred to avoid a feeling of confinement and we don't have multiple terrain levels like other games do. I will talk more about that feature in the next blog post.

After we knew what we wanted, and tested in the game for a while, we decided to start improving the visual solution.

The improved version consists of rendering a texture with the Fog of War over the entire game world, similar to what we did when creating the visual mocks.

For that, we created a GameObject with a MeshRenderer and scaled it to cover the game world. That mesh renders a texture named FogTexture, which contains the Fog information, using a Shader that treats pixel colors as an inverted alpha channel: White is fully transparent and Black is fully opaque.

Now, in order to fill the FogTexture, we created a separate Camera, named FogCamera, that renders to the texture using a RenderTexture. For each object that provides vision in the game world, we create a corresponding GameObject inside the FogCamera's view by transforming its position accordingly and scaling it based on the vision's range. We use a separate Unity Layer that is culled from the other cameras so that only the FogCamera renders those GameObjects.

To complete the process, each of those objects has a SpriteRenderer with a small white Ellipse texture to render white pixels into the RenderTexture.

Note: we use an Ellipse instead of a Circle to simulate the game perspective.
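A sketch of how one of those proxy GameObjects could be created; the layer name and the position conversion helper are assumptions of mine:

GameObject CreateVisionProxy(Vector2 worldPosition, float range, Sprite ellipse)
{
    var go = new GameObject("VisionProxy");
    // a layer only the FogCamera renders (name is an assumption)
    go.layer = LayerMask.NameToLayer("Fog");
    var spriteRenderer = go.AddComponent<SpriteRenderer>();
    spriteRenderer.sprite = ellipse; // the small white Ellipse texture
    // ToFogCameraPosition is an assumed transform into FogCamera space
    go.transform.position = ToFogCameraPosition(worldPosition);
    go.transform.localScale = Vector3.one * range * 2f; // scaled by the vision's range
    return go;
}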

Image: This is the texture used for each vision, it is a white Ellipse with transparency (I had to make the transparency opaque so the reader can see it).

Image: this is an example of the GameObjects and the FogCamera.

In order to make the FogTexture look smooth over the game, we applied a small blur to the FogCamera when rendering to the RenderTexture. We tested different blur shaders and different configurations until we found one that worked fine on multiple mobile devices. Here is how it looks:

And here is how the Fog looks in the game, without and with blur:

For rendering previously revealed territory, we had to add a previous step to the process. In this step we configured another camera, named PreviousFogCamera, also rendering to a RenderTexture, named PreviousVisionTexture, and we first render the visions there (using the same procedure). The main difference is that this camera is configured to not clear the buffer, using the "Don't Clear" clear flag, so we keep the data from previous frames.
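In code, that configuration is just a couple of camera properties (a minimal sketch):

void ConfigurePreviousFogCamera(Camera previousFogCamera, RenderTexture previousVisionTexture)
{
    // the "Don't Clear" flag: the buffer keeps what was
    // rendered in previous frames
    previousFogCamera.clearFlags = CameraClearFlags.Nothing;
    previousFogCamera.targetTexture = previousVisionTexture;
}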

After that, we render both the PreviousVisionTexture (in gray) and the vision GameObjects into the FogTexture using the FogCamera. The final result looks like this:

Image: it shows the revealed territory in the FogCamera.

Image: and here is an example of how the Fog with previous revealed territory looks in the game.

Buildings

Since buildings in Iron Marines are big and don't move, like in Starcraft, we wanted to follow a similar solution.

In order to do that, we identified the buildings we wanted to show below the Fog by adding a Component that configures them to be rendered when not inside the Player's vision.

Then there is a System that, when a GameObject with that Component enters the Player's vision for the first time, creates another GameObject and configures it accordingly. That GameObject is automatically turned on when the building is no longer inside the Player's vision, and turned off when it is. If, for some reason, the building is destroyed while not inside vision, the GameObject doesn't disappear until the Player discovers its new state.

We added a small easing when entering and leaving the Player's vision to make it look a bit smoother. Here is a video showing how it looks:

Conclusion

Our solution lacks some common Fog of War features, but it works perfectly for our game and looks really nice. It also performs pretty well on mobile devices, our main target, where a careless implementation could have affected the game negatively. We are really proud of and happy with what we achieved while developing Iron Marines.

That was, in great part, how we implemented Fog of War for Iron Marines. I hope you liked both the solution and the article. In the next blog post I will talk about the new solution, which includes more features.

Thanks for reading!

 


How we used Entity Component System (ECS) approach at Gemserk - 1/2

When we started Gemserk eight years ago, we didn't know the best way to make games, so before starting we did some research. After reading some articles and presentations we were really interested in trying an Entity Component System (ECS) approach for our games. However, since we didn't find a clear guide or implementation at that point, we had to create our own solution while exploring and understanding it. We named our engine ComponentsEngine.

Components Engine

The following image shows a simplified core architecture diagram:

Note: part of the design was inspired by a Flash ECS engine named PushButtonEngine.

An Entity is just a holder of state and logic. In an ECS, anything can be an Entity, from an enemy ship to the concept of a player or even a file path to a level definition. It depends a lot on the game you are making and how you want to structure it.

A Property is part of the state of an Entity, like the health or the position in the world, anything that means something for the state of the game. It can be accessed and modified from outside.

Components perform logic updating one or more Properties to change the Entity state. A component could for example change the position of an Entity given a speed and a moving direction.

They normally communicate with each other either by modifying common Properties (when on the same Entity) or by sending and receiving Messages through a MessageDispatcher (when on the same or on different Entities). They just have to register a method to handle a message. In some way, this is pretty similar to using SendMessage() method in Unity and having the proper methods in the MonoBehaviours that need to react to those messages.

EntityTemplates are an easy way to define and build specific game entities: they just add Properties and Components (and more stuff) to make an Entity behave in one way or another.

For example, a ShipTemplate could add position, velocity and health properties and some components to perform movement and to process damage from bullet hits:

public void build() {
        // ... more stuff 
        property("position", new Vector2f(0, 0));
        property("direction", new Vector2f(1, 0));

        property("speed", 5.0f);

        property("currentHealth", 100.0f);
        property("maxHealth", 100.0f);

        component(new MovementComponent());
        component(new HealthComponent());
}

An example of the MovementComponent:

public class MovementComponent {

   @EntityProperty
   Vector2f position;

   @EntityProperty
   Vector2f direction;

   @EntityProperty
   Float speed;

   @Handles
   public void update(Message message) {
      Property<Float> dt = message.getProperty("deltaTime");
      position += direction * speed * dt.get();
   }

}

In some way, EntityTemplates are similar to Unity Prefabs or Unreal Engine Blueprints, representing a (not pure) Prototype pattern.

Some interesting stuff of our engine:

  • Since templates are applied to Entities, we can apply multiple templates to the same one. In this way, we could have common templates to add generic features like motion for example. So we could do something like OrcTemplate.apply(e) to add the orc properties and components and then MovableTemplate.apply(e), so now we have an Orc that moves.
  • Templates can apply other templates inside them. So we could do the same as before but inside OrcTemplate, we could apply MovableTemplate there. Or even use this to create specific templates, like OrcBossTemplate which is an Orc with a special ability.
  • Entities also have tags, which are defined when applying a template and are used to quickly identify entities of interest. For example, if we want to identify all bullets in the game, we could add the tag "Bullet" during Entity creation and then, when processing a special power, get all the bullets in the scene and make them explode. Note: in some ECS implementations a "flag" component is used for this purpose.
  • The Property abstraction is really powerful, it can be implemented in any way, for example, an expression property like "my current health is my speed * 2". We used that while prototyping.

Some bad stuff:

  • Execution speed wasn't good; we had a lot of layers in the middle, a lot of reflection, sending and receiving messages (even for the update method), a lot of boxing and unboxing, etc. It worked OK on desktop but wasn't a viable solution for mobile devices.
  • The logic started to be distributed all around and it wasn't easy to reuse; we started to have tons of special cases, and when we couldn't reuse something we simply copy-pasted and changed it.
  • There was a lot of code in the Components to get/set properties instead of just doing the important logic.
  • Indirection in the Properties code was powerful, but we ended up with stuff like this (and we didn't like it since it was too much code overhead):
e.getProperty("key").value = e.getProperty("key").value + 1.
  • We didn't manage to have a good data-driven approach, which is one of the strongest points of ECS. In order to have different behaviours we were forced to add properties, components and even tags instead of just changing data.

Note: Some of these points can be improved but we never worked on that.

Even though we don't think the architecture is bad, it guided us to do stuff in a way we didn't like and that didn't scale. We feel that Unity does the same thing with its GameObjects and MonoBehaviours; all the examples in their documentation go in that direction.

That was our first try at the ECS approach. In case you are interested, ComponentsEngine and all the games we made with it are on Github, and even though they probably don't compile, they can be used as reference.

This post will continue with how we later transitioned to a more pure ECS approach with Artemis, when we became more dedicated to mobile games.

References

Evolve your hierarchy - Classic article about Entity Component System.

PushButtonEngine - Flash Entity Component System we used as reference when developing our ComponentsEngine.

Game architecture is different - A quick transformation from normal game architecture to ECS.

Slick2D - The game library we were using during ComponentsEngine development.

 


Assigning interface dependencies to MonoBehaviour fields in Unity Editor

In Unity, an Object's public fields of interface types are not serialized, which means it is not possible to configure them in the editor like you do with other dependencies, such as MonoBehaviours.

One possible way to work around this problem is to create an abstract MonoBehaviour and depend on that instead:

 public abstract class ServiceBase : MonoBehaviour
 {
     public abstract void DoStuff();    
 }
 
 public class ServiceUser : MonoBehaviour
 {
     public ServiceBase service;
 
     void Update()
     {
         service.DoStuff ();
     }
 }

This solution has some limitations. One of them is that a class can't derive from multiple base classes the way it can implement multiple interfaces. Another is that you can't easily switch from a MonoBehaviour implementation to a ScriptableObject implementation (or any other).

Working on Iron Marines and Dashy Ninja, I normally end up referencing a UnityEngine.Object and then, at runtime, trying to get the specific interface, like this:

 public interface IService 
 {
      void DoStuff();
 }
 
 public class CharacterExample : MonoBehaviour
 {
     public UnityEngine.Object objectWithInterface;
     IService _service;
 
     void Start ()
     {
         var referenceGameObject = objectWithInterface as GameObject;
         if (referenceGameObject != null) {
             _service = referenceGameObject.GetComponentInChildren<IService> ();
         } else {
             _service = objectWithInterface as IService;
         }
     }

     void Update()
     {
         _service.DoStuff ();
     }
 }
 

Then, in Unity Editor I can assign both GameObjects and ScriptableObjects (or any other Object):

That works pretty well, but I end up duplicating the code of how to get the interface all around.

To avoid that, I made two helper classes with the code to retrieve the interface from the proper location: InterfaceReference and InterfaceObject<T>.

InterfaceReference

 [Serializable]
 public class InterfaceReference
 {
     public UnityEngine.Object _object; 
     object _cachedGameObject;
 
     public T Get<T>() where T : class
     {
         if (_object == null)
             return _cachedGameObject as T;

         var go = _object as GameObject;
 
         if (go != null) {
             _cachedGameObject = go.GetComponentInChildren<T> ();
         } else {
             _cachedGameObject = _object;
         }
 
         return _cachedGameObject as T;
     }

     public void Set<T>(T t) where T : class
     {
         _cachedGameObject = t;
         _object = t as UnityEngine.Object;
     }
 }

Usage example:

 public class CharacterExample : MonoBehaviour
 {
     public InterfaceReference service;
 
     void Update()
     {
         // some logic
         service.Get<IService>().DoStuff();
     }
 }

The advantage of this class is that it can simply be used anywhere and is already serialized by Unity. The disadvantage is having to ask for the specific interface when calling the Get() method. If that happens in only one place, great, but if you have to do it a lot in the code it doesn't look so good. In that case, it could be better to have another field and assign it once in the Awake() or Start() methods.

InterfaceObject

 public class InterfaceObject<T> where T : class
 {
     public UnityEngine.Object _object;
     T _instance;
 
     public T Get() {
         if (_instance == null) {
             var go = _object as GameObject;
             if (go != null) {
                 _instance = go.GetComponentInChildren<T> ();
             } else {
                 _instance = _object as T;
             }
         }
         return _instance;
     }
 
     public void Set(T t) {
         _instance = t;
         _object = t as UnityEngine.Object;
     }
 }

Note: the Get() and Set() methods could also be a C# property, which would be useful to allow the debugger to evaluate it when inspecting.

Usage example:

 [Serializable]
 public class IServiceObject : InterfaceObject<IService>
 {
     
 }
 
 public class CharacterExample : MonoBehaviour
 {
     public IServiceObject service;
 
     void Update()
     {
         // some logic
         service.Get().DoStuff();
     }
 }

The advantage of this class is that you can use the Get() method without casting to the specific interface. The disadvantage is that it is not serializable because it is a generic class. To solve that, an empty subclass with a specific type must be created so Unity can serialize it, as shown in the previous example. If you don't have a lot of interfaces to configure, I believe this can be a good solution.

All of this is simple but useful code to help configure stuff in the Unity Editor.

I am working on different experiments on how to improve Unity's usage for me. Here is a link to the project in case you are interested in taking a look.

Thanks for reading


We are working on a new Game for mobile devices named Dashy Ninja

After the last Ludum Dare #38 we were really excited about what we accomplished and decided to work on something together in our spare time. So we started working on a new Unity game for mobile devices, currently named Dashy Ninja.

This game is about a Cat facing the most difficult and challenging trials to achieve the unachievable and go where no other Cat has gone before to become the ultimate Ninja Cat.

The mechanics of the game are simple: by making quick drag gestures, the cute cat dashes through the level, attaching itself to platforms on the way. If a trap gets the character, it is "magically teleported back" (no animals were harmed in the making of this blog post) and starts over again. The player has to feel as Ninja as possible.

We wanted to make a proper introduction of the game so we can start sharing dev stuff of the most awesome Ninja Cat in the Universe.


Playing with Starcraft 2 Editor to understand how a good RTS is made

When working on the Iron Marines engine we did some research on other RTS games in order to learn how they did some things and why. In this post in particular, I want to share a bit of my research on the SC2 Editor, which helped a lot when making our own editor.

The objective was to see what a Game Designer could or couldn't do with the SC2 Editor, in order to understand some decisions about the editor and the engine itself.

Obviously, by taking a look at the game mods/maps available, it is clear that you can build entire games on top of the SC2 engine, but I wanted to see the basics: how to define and control the game logic.

As a side note, I have loved RTS games since I was a child. I played a lot of Dune 2 and Warcraft 1, and I remember playing with the editors of Command & Conquer and Warcraft 2 too; it was really cool, so much power 😉 and fun. With one of my brothers, each one had to make a map and the other had to play and beat it (we did the same with the Doom and Duke Nukem 3D editors).

SC2 Editor

SC2 maps are built with Triggers, which are composed of Events, Conditions and Actions that define parts of the game logic. There are a lot of other elements as well, which I will talk about after explaining the basics.

Here is an image of the SC2 Editor with an advanced map:

Trigger logic

The Triggers are where the general map logic is defined. They are triggered by Events and say which Actions should be performed if the given Conditions are met. Even though behind the scenes the logic is C/C++ code calling functions with similar names, the Editor shows it in natural language, like "Is Any Unit of Player1 Alive?", which helps with quick reading and understanding.

This is an example of a Trigger logic of a SC2 campaign map:

Events

Events are the way Trigger logic is fired: when an event happens, the logic is executed. Here is an example of an event triggered when the unit "SpecialMarine" enters the region "Region 001":

Conditions

Conditions are evaluated to decide whether to execute the actions. Here is an example of a condition checking if unit "BadGuy" is alive:

Actions

Actions are executed when the event happens and the conditions are met. They can be anything supported by the editor, from ordering a structure to build a unit to showing a mission objective update on screen, among other things.

This example shows an action that enqueues an attack order to unit "BadGuy" with unit "SpecialMarine" as target, replacing the orders already enqueued in that unit. There is another action after that which turns off the Trigger, to avoid processing its logic again.

The idea of this approach is to build the logic in a descriptive way; the Game Designer has tools to fulfill what he needs in terms of game experience. For example, if he needs to make it hard to save a special unit when you reach its location, he sends a wave of enemies to that point.

I said before that the editor generates C/C++ code behind the scenes, so, for my example:

The code generated behind the scenes is this one:

Here is a screenshot of the example I made: the red guy is SpecialMarine (controlled by the player) and the blue one is BadGuy (controlled by the map logic). If you move your unit inside the blue region, BadGuy comes in and attacks SpecialMarine:

Even though it is really basic, download my example if you want to test it 😛 .

Parameters

In order to make the Triggers work, they need some values to check against, for example Region1, a region previously defined, or "Any Unit of Player1". Most of the functions for Events, Conditions and Actions have parameters of a given Type, and the Editor allows the user to pick an object of that Type from different sources: a function, a preset, a variable, a value or even custom code:

It shows picking a Unit from units in the map (created instances).

It shows Unit picking from different functions that return a Unit.

This allows the Game Designer to adapt, in part, the logic to what is happening in the game while keeping the main structure of the logic. For example, if I need the structures of Player2 to explode when any Unit of Player1 is in Region1, I don't care which unit it is, I only care that it is from Player1.

Game design helper elements

There are different elements that help the Game Designer when creating a map: Regions, Points, Paths and Unit Groups, among others. These elements are normally not visible to the Player but are really useful for the Game Designer to have more control over the logic.

As said before, the SC2 Editor is pretty complete, it allows you to do a lot of stuff, from creating custom cutscenes to override game data to create new units, abilities, and more but that's food for another post.

Our Editor v0.1

Our first try at creating some kind of editor for our game wasn't so successful. Without the core of the game clearly defined, we tried to create an editor with a lot of the SC2 Editor's features. We spent some days defining a lot of stuff in the abstract, but in the end we aimed too far for a first iteration.

So, after that, we decided to start small. We started by making a way to detect events over the core as it was being defined at that point. An event could be, for example, "when units enter an area" or "when a resource spot is captured by a player".

Here are some of the events of one of our maps:

Note: even though they are Events we named them Triggers (dunno why), so an AreaTrigger is, in SC2 Editor terms, an empty Trigger with just an Event.

Events were the only thing in the editor; all the corresponding logic was done in code, in one class per map, which captured all the events and checked conditions before taking some actions, normally sending enemies to attack some area.

Here is example code for some of the previously defined events:

It wasn't a bad solution but had some problems:

  • The actions were separated from the level design, which hurt the iteration cycle (at some point our project needed between 10 and 15 seconds to compile in the Unity Editor).
  • Since it needs code to work, it requires programming knowledge, and our team's Game Designers aren't so good with code.

Our Editor v0.2

The second (and current) version is more Game Designer friendly and closer to the SC2 Editor. Most of the logic is defined in the editor within multiple triggers. Each Trigger is defined as a hierarchy of GameObjects with specific components for the Events, Conditions and Actions.

Here is an example of a map using the new system:

This declares, for example, trigger logic that is activated by time; it has no conditions (so given the event it always executes), sends some enemies in sequence and deactivates itself at the end.
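To give an idea of the structure, here is a minimal sketch of how such a component-based trigger could look; this is an illustrative API of mine, not our actual engine code:

using UnityEngine;

public abstract class TriggerEvent : MonoBehaviour
{
    public event System.Action OnFired;
    protected void Fire() { if (OnFired != null) OnFired(); }
}

public abstract class TriggerCondition : MonoBehaviour
{
    public abstract bool IsMet();
}

public abstract class TriggerAction : MonoBehaviour
{
    public abstract void Execute();
}

public class Trigger : MonoBehaviour
{
    void Start()
    {
        // child GameObjects define the events, conditions and actions
        foreach (var e in GetComponentsInChildren<TriggerEvent>())
            e.OnFired += Process;
    }

    void Process()
    {
        // all conditions must be met to execute the actions
        foreach (var c in GetComponentsInChildren<TriggerCondition>())
            if (!c.IsMet())
                return;
        foreach (var a in GetComponentsInChildren<TriggerAction>())
            a.Execute();
    }
}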

We also created a custom Editor window to help create the trigger hierarchy and to simplify looking for the engine's Events, Conditions and Actions. Here is part of the editor showing some of the elements we have:

All those buttons automatically create the corresponding GameObject hierarchy with the proper Components to make everything work according to plan. Since most of them need parameters, we use Unity's built-in feature of linking elements of a given type (a Component); for example, for the action of forcing capture of a Capturable element by team Soldiers, we have:

Unity allows us to pick a Capturable element (CapturableScript in this case) from the scene. This greatly simplifies the job of configuring the map logic.

Some common conditions are checking if a resource spot is controlled by a given player or if a structure is alive. Common actions are sending a wave of enemy units to a given area or deactivating a trigger.

The base code is pretty simple: it mainly defines the API, while the real value of this solution is in the custom Events, Conditions and Actions.

Pros

  • Visual, and more Game Designer friendly (it is easier for Programmers too).
  • Faster iteration speed: now we can change things directly in the Editor, even at runtime!
  • Easily extensible by adding more Events, Conditions and Actions, which is transparent to the Game Designers since they automatically show up in our custom editor.
  • Takes advantage of the Unity Editor for configuring stuff.
  • Easy to enable/disable logic by turning the corresponding GameObject on/off, which is good for testing something or disabling some logic for a while (for example, during in-game cinematics).
  • More control for the Game Designers: they can test and prototype stuff without asking the programming team.
  • Simplified workflow for our in-game cinematics.
  • Compatible with our first version; both can run at the same time.

Cons

  • Merging the stage is harder now that it is serialized with the Unity scene; with code we didn't have merge problems, or at least they were easier to fix. One idea to simplify this is to break the logic into parts and use prefabs for those parts, but that breaks when having links with scene instances (which is a common case).
  • A lot of programming responsibility is transferred to the scripting team, which in this case is the Game Design team; that means possibly bad code (for example, duplicated logic), bugs (forgetting to turn off a trigger after processing its actions) and even performance problems.

Conclusion

When designing (and coding) a game, it is really important to have a good iteration cycle in each aspect of the game. Switching to a more visual solution with all the elements at hand, avoiding code as much as we could, helped a lot with that goal.

Since what we ended up doing looks like a scripting engine, why didn't we go with a solution like uScript or similar in the first place? The real answer is that I didn't try other Unity scripting solutions in depth (not so happy with that); each time I tried them a bit, they gave me the feeling that they were too much for what we needed, and I was unsure how they performed on mobile devices (never tested that). Also, I wasn't aware we would end up needing a scripting layer, so I preferred to evolve over our needs, from small to big.

Taking some time to research other games and play with the SC2 Editor helped me a lot when defining how our engine should work and why we should go in some direction. There are more aspects of our game that were influenced in some way by how other RTS games do things, which I may share in the future, who knows.

I love RTS games, did I mention that before?


Returning to Global Game Jam after 5 years

After five years of Global Game Jam vacations, we joined this year's edition last weekend.

It was a really enjoyable time and we had a lot of fun. In this blog post I would love to share my experience as some kind of postmortem article with animated gifs, because, you know, animated gifs rule.

The Theme of this jam was "Waves", and together with my team we made a 1v1 fighting game where players attack each other by sending waves through the floor tiles to hit the adversary.

Here is a video of our creation:

My team was composed of:

Here are links to download the game and to its source code:

Postmortem

Developing a game in two days is a crazy experience and so much can happen; it is an incredible survival test where you try not to kill each other (joking) while making the best game ever (which doesn't happen, of course). Here is my point of view of what went right and what could have been better.

What went right

From the beginning we knew we wanted to complete the jam with something done, and by that we mean nothing incomplete goes inside the game. That helped us keep a small and achievable scope for the time we had.

When brainstorming the game we got lucky in some way and reached a solid idea twenty minutes before pitching to the rest of the jammers, after discussing for a couple of hours. The good part was not only that we were happy with the idea, but also that it was related to the theme (we wanted to do a beat'em up but couldn't figure out how to link it to the theme).

Deciding to use familiar tools like Github, Unity and Adobe Animate, and doing something we all had some experience with, helped us avoid unnecessary complications. I have to mention too that half of the team works together at Ironhide, so we know each other well.

Our first prototypes, both visual and mechanical, were completed soon enough to validate that we were right about the game, and they were a great guide during development. Here are two gifs, one for each.

Visual prototype made in Adobe Animate.

Game mechanics prototype made in Unity

Our artists did a great job with the visual style direction, which was really aligned with the setting and background story we thought up while brainstorming. If you are interested, here is the story behind the game.

The awesome visual style of the game.

Using the physics engine sometimes seems like overkill, mainly for small games, but the truth is that, when used right, it can enable unexpected mechanics that add value to the game while simplifying things. In our case, we used it not only to detect when a wave hits a player but also to detect when to animate the tiles, and it was not only a good decision but also really fun.

Animated tiles with hitbox using physics.

Using good practices like prototyping small stuff in separate Unity scenes, like we did with the tile animations, allowed us to quickly iterate on how things should behave, so they became solid before integrating them with the rest of the game.

Knowing when to say no to new stuff kept us focused on small, achievable things, even though, from time to time, we spent some time thinking up new and crazy ideas.

Having fun was a key pillar of developing the game in time while keeping a good mood during the jam.

What could have been better

We left the sound effects to the end and couldn't iterate enough, so we aren't so happy with the results. However, it was our fault; the dude that did the sounds was great and helped us a lot (if you are reading this, thank you!). In the future, a better approach would be to quickly test with placeholder sounds, like voice-made ones or similar effects downloaded from other games, to give him better references to work with.

We didn't give ourselves room to learn and integrate things like visual effects and particles, which would have improved the game experience a lot. Next time we should spend some time researching what we want to do and how it could be done before discarding it; I am sure some lighting effects could have improved the visual experience a lot.

We weren't fast enough in adapting part of the art to where the game mechanics were leading us; that translates into art we couldn't implement, or art that isn't appreciated enough because it doesn't show up much during the game. If we had reacted faster, we could have dedicated more time to other things we wanted to do.

Too many programmers for a small game didn't help when dividing tasks to do more in less time. In some way this forced us to work together on one machine, and it wasn't bad at all. It is not easy to know how many programmers are needed before making the team, nor to adapt the game to the number of programmers. In the end, I felt it was better to keep a small scope instead of trying to do more just because a programmer is idle. In our case, our third programmer helped with game design, testing and interacting with the sounds guy.

Since art precedes code, it should be completed with time to spare before integrating it, and since we couldn't anticipate the assets, we almost couldn't put them in the game; in fact, we couldn't integrate some of them. The lesson is: test the pipeline, and set the art deadline about two hours before the jam's end to give the programmers time.

Using Google Drive poorly complicated sharing assets from artists to developers, since we had to upload and download manually. A better option would have been Dropbox, since it syncs automatically. Another would have been to have the artists integrate the assets themselves using Unity and Git, but I didn't want them to hate me so quickly, so they avoided this growth opportunity.

The game wasn't easy to balance since it needed two players to test and improve it. Our game designer tried the game each time a new player came by to see what we were doing, but those were isolated cases. I believe we were also affected a bit by the target resolution of the first visual prototype in terms of game space, which left us with only the wave speed factor to tune; and if we made it too slow, the experience suffered.

We didn't start brainstorming fast enough; it was a bit risky and we almost didn't find a game in time. Next time we should start as soon as possible.

Some of the ideas we said no to because of scope were really fun, and in some way we may have lost some opportunities there.

Drinking too much beer on Saturday could have reduced our capacity, but since we can't prove it, we will not stop drinking beer during Global Game Jams.

Conclusion

Participating in the Global Game Jam was great, but you have to have free weekends to do it; for some of us it is not that simple to make time to join the event, but I'm really sure it's worth it.

I hope to have time for the next one and see our potential unleashed again. Until then, thanks to my team for making a good game with me in only one weekend.

Cheers.

 


Making mockups and prototypes to minimize problems

I'm not inventing anything new here; I just want to share how making mockups and prototypes helped me clarify and minimize some problems, and in some cases even solve them at almost no cost.

For prototypes and mockups I'm using the Superpower Assets Pack by Sparklin Labs, which gave me a great way to start visualizing a possible game. Thank you for that, guys.

I will start by talking about how I used visual mockups to quickly iterate multiple times over the layout of my game's user interface, to remove or reduce a lot of unknowns and possible problems.

After that, I will talk about making quick, small prototypes to validate ideas: one about performing player actions with a small delay (to simulate network latency) and another about how to give each player a different view of the same game world.

UI mockups

For the game I'm making, the player's actions were basically clear, but I didn't know exactly how the UI was going to be, and considering my limited experience making UIs, having a good UI solution is a big challenge.

In the current game prototype iteration, the players have only four actions: build unit, build barracks, build house and send all units to attack the other player. At the same time, to perform those actions, they need to know how much money they have, the available unit slots and how much each action costs.

To start solving this problem, I quickly iterated through several mockups, made directly in a Unity scene and using a game scene as background, to test each possible UI problem. For each iteration I compiled it to the phone and "tested it", detecting problems early, like "the buttons are too small" or "I can't see the money because I'm covering it with my fingers", etc.

Why did I use Unity when I could have done it with any image editing application and just uploaded the image to the phone? That is a good question. One answer is that I am more used to doing all this stuff in Unity and I already had the template scenes. The other is that I was testing, at the same time, whether the Unity UI solution supported what I was looking for, and I could even start testing interaction feedback, like how a button reacts when touched, or whether the money turns red when there isn't enough, things I could not test with only images.

The following gallery shows screenshots of different iterations where I tested button positions, sizes, information and support for possible future player actions. I won't go into detail here because I don't remember the exact order of the tests, but you can get an idea by looking at the images.

It took me less than two hours to go through more than ten iterations. I even tested visual feedback, discovering along the way that the player should quickly know when some action is disabled because of money restrictions or because there are no unit slots available, etc. I even had to consider changing the scale of the game world to leave more empty space reserved for the UI.

Player actions through delayed network

When playing networked games, one possible issue in my mind was that the player should receive feedback instantly even though the real action could be delayed a bit to be processed on the server. For a move action in an RTS, the feedback could be just an animation showing the destination, processing the action later; but when the action involves consuming a resource, that could be a little tricky, or at least I wasn't sure, so I decided to make a quick test.

Similar to the previous case, I created a Unity scene in a separate project; I wanted to iterate really fast on this one. The idea was to validate the preconditions on the client side (enough money) to give the player instant feedback, and then process the action when it should be processed.

After analyzing it a bit, my main concern was the player's experience of executing an action, receiving instant feedback, but watching the action processed later; so I didn't need any networking code, I could test everything locally.

The test consisted of building white boxes with the right mouse button; each box costs $20 and you start with $100. The idea is that the moment the button is pressed, a white box with half opacity appears, giving the idea the action was processed, and $20 is consumed, so you can't perform another action that needs more money than you have left. After a while, the white box is actually built and the preview disappears.
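
Here is a minimal sketch of that flow, assuming a simulated processing delay instead of real networking. All the names (BuildController, PendingAction, the prefabs) are made up for illustration; this is not the actual prototype code:

using System.Collections.Generic;
using UnityEngine;

public class BuildController : MonoBehaviour
{
    public GameObject previewPrefab;   // white box with half opacity
    public GameObject boxPrefab;       // final white box
    public float serverDelay = 1.5f;   // simulated processing delay
    public int money = 100;
    const int BoxCost = 20;

    class PendingAction
    {
        public Vector3 position;
        public float timeLeft;
        public GameObject preview;
    }

    readonly List<PendingAction> pending = new List<PendingAction>();

    void Update()
    {
        if (Input.GetMouseButtonDown(1) && money >= BoxCost)
        {
            // Instant feedback: consume the money and show the preview right away.
            money -= BoxCost;
            var position = Camera.main.ScreenToWorldPoint(Input.mousePosition);
            position.z = 0;
            pending.Add(new PendingAction
            {
                position = position,
                timeLeft = serverDelay,
                preview = Instantiate(previewPrefab, position, Quaternion.identity)
            });
        }

        // After the simulated delay, the action is really processed:
        // the preview is replaced by the final box.
        for (int i = pending.Count - 1; i >= 0; i--)
        {
            pending[i].timeLeft -= Time.deltaTime;
            if (pending[i].timeLeft > 0)
                continue;
            Destroy(pending[i].preview);
            Instantiate(boxPrefab, pending[i].position, Quaternion.identity);
            pending.RemoveAt(i);
        }
    }
}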

Here is a video showing it in action:

In the case of a server validating the action, it would work similarly; the only difference is that the server could fail to validate the action (for example, if the other player stole money first), in which case the action has to be cancelled. So the next test was to process that case (visually) to see how it looks and how it feels. The idea was similar to the previous case, but after a while the game returns the money and the box preview disappears.
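
Reusing the hypothetical names of the previous sketch, the rejection path only needs a small change: instead of building the box when the pending action completes, the money is refunded and the preview removed:

// Hypothetical rejection path: the "server" refuses the action, so we
// refund the money and remove the preview instead of building the box.
void CancelAction(PendingAction action)
{
    money += BoxCost;
    Destroy(action.preview);
    pending.Remove(action);
}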

Here is a video showing this case in action:

It shouldn't be a common case, but this is one idea of how it could be solved, and I don't think it is a bad solution.

Different views of the same game world

The problem I want to solve here is how each player will see the world. Since I can't have a different world for each player, the idea is to have different views of the same world. In the case of 3D games, having different cameras should do the trick (I suppose), but I wasn't sure whether that worked the same way for a 2D game, so I had to make sure by building a prototype.

One thing to consider is that, in the case of the UI, each player should see their own actions in the same position.

For this prototype, I used the same scene background I used for the mockups, but in this case I created two cameras, one of them rotated 180 degrees to show the opposite view:

player0 (player 1 view)

player1 (player 2 view)

Since the UI should be unique for each player, I configured one canvas per camera and used culling masks so each camera renders only its own canvas.
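
A minimal sketch of that setup, assuming two UI layers named "Player1UI" and "Player2UI" exist in the project (the layer names and the component are made up for illustration), with each canvas set to render through its matching camera:

using UnityEngine;

public class TwoPlayerViewSetup : MonoBehaviour
{
    public Camera player1Camera;
    public Camera player2Camera;

    void Start()
    {
        // The second camera looks at the same 2D world, rotated 180 degrees.
        player2Camera.transform.rotation = Quaternion.Euler(0, 0, 180);

        // Each camera renders the shared world plus only its own UI layer.
        int world = LayerMask.GetMask("Default");
        player1Camera.cullingMask = world | LayerMask.GetMask("Player1UI");
        player2Camera.cullingMask = world | LayerMask.GetMask("Player2UI");
    }
}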

Again, this test was really simple and quick, I believe I spent around 30 minutes on it. The important thing is that now I know this is a possible (and probably the) solution for this problem; before the test I wasn't sure whether it was a hard problem or not.

Conclusions

One good thing about making prototypes is that you can take a lot of shortcuts or assume stuff, since the code and assets are not going to end up in the game; that gives you really fast iteration times as well as focus on one problem at a time. For example, for the mockups I put all the assets in one folder and used the Unity sprite packer, without spending time on which texture format to use for each mobile platform, or whether all the assets fit in one texture, or stuff like that.

Making quick prototypes, early, of the things you don't know how to solve gives you a better vision of the scope of the problems, and it is better to know that as soon as possible. Sometimes you get a lead on how to solve them and how expensive the solution could be, and that gives you a good idea of when you have to attack that problem, or whether you want to at all (for example, dropping a specific feature). If you can't figure out possible solutions even after making prototypes, then that feature is even harder than you thought.

Prototyping is cheap and it is fun because it provides a creative platform where the focus is on solving one problem without a lot of restrictions, which allows developing multiple solutions, and since it is a creative process, everyone in game development (game designers, programmers, artists, etc.) can participate.


Delegating responsibilities from the engine to the game

When building a platform for a game I tend to spend too much time thinking about solutions for every possible problem, in part because it is fun and a good exercise, and in part because I am not sure which of those problems could actually affect the game. The idea of this post is to share why I believe it is sometimes better to delegate problems and their solutions to the game instead of solving them in the engine.

In the case of the game state, I was trying to create an API on the platform side that the game could use to easily represent it. I started by having a way to collaborate with the game state by storing values in it, like this:

public interface GameState {
    void storeInt(string name, int number);
    void storeFloat(string name, float number);
}

In order to use it, a class on the game side has to implement an interface which allows it to contribute part of the game state data:

public class MyCustomObject : GameStateCollaborator
{
    public void Collaborate(GameState gameState){
        gameState.storeFloat("myHealth", 100.0f);
        gameState.storeInt("mySpeed", 5);
    }
}

It wasn't bad in the first tests, but when I tried to use it in a more complex situation the experience was a bit cumbersome, since I had more data to store. I even felt like I was trying to recreate a serialization system, and that wasn't the idea of this API.

Since I have no idea what the game wants to save, or even how it wants to save it, I changed the paradigm a bit. The GameState is now more of a concept without implementation; that part is going to be decided on the game side.

public interface GameState {
     
}

So after that change, the game has to implement GameState, and each game state collaborator will depend on that custom implementation, like this:

public class MyCustomGameState : GameState  
{
    public int superImportantValueForTheGame;
    public float anotherImportantValueForTheGame;
}

public class MyCustomObject : GameStateCollaborator
{
    public void Collaborate(GameState gameState)
    {
        var myCustomGameState = gameState as MyCustomGameState;
        myCustomGameState.anotherImportantValueForTheGame = 100.0f;
        myCustomGameState.superImportantValueForTheGame = 5;
    }
}

This way, I don't know or care how the game wants to store its game state; what I know is that there is a concept of GameState that is responsible for providing the information the platform needs to make features like replays or savegames work. For example, at some point I could have something like:

public interface GameStateSave 
{
    void Save(GameState gameState);
}

And let the game decide how to save its own game state, even though the engine is responsible for how and when that interface is used to perform some task.
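
For example, a hypothetical game-side implementation could look like this (the file name, format and class name are just for illustration, not part of the engine):

public class MyCustomGameStateSave : GameStateSave
{
    public void Save(GameState gameState)
    {
        var state = (MyCustomGameState) gameState;
        // The game decides the format; here it is just a string in a file.
        var data = state.superImportantValueForTheGame + ";" +
                   state.anotherImportantValueForTheGame;
        System.IO.File.WriteAllText("savegame.txt", data);
    }
}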

In the end, the platform/engine ends up being more like a framework, providing tools and life cycles so the user (the game) can easily do some work in a common way.


The story of the non-deterministic Replay

This is the story of how I discovered that my simplified replay system wasn't as deterministic as I believed because of an ugly bug; read on if you want to know where exactly it was.

While integrating the lockstep engine into the game I am working on, I decided to add a way to save and load replays, to be able to easily reproduce some bugs I was experiencing. After I had that done and working, it was pretty awesome to see that I could replay the same game multiple times (food for another blog post, by the way). However, I then thought it could be fun and easy to play them faster, so why not.

Since I have fixed-timestep logic, it should be pretty straightforward: simply multiply the time, and the fixed-timestep logic will do all the work while the game logic doesn't notice the change. I decided to give it a try and it worked… almost. When playing replays at higher speeds I noticed some visual differences, but I wasn't totally sure (it could have been the interpolation code).

To verify, I went back to the test project, where I had the moving box, and tested it there, but I needed some way to be sure. Since I already had a way to calculate checksums of the game state, I used that to verify the game states when playing replays at different speeds (from 2x to 16x).
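
Conceptually, the verification looks something like this (a hypothetical sketch, assuming the game state serializes to a string and the replay stores a checksum per frame; none of this is the actual engine API):

// Any stable hash works for this purpose; this one is just illustrative.
int Checksum(string serializedState)
{
    unchecked
    {
        int hash = 17;
        foreach (char c in serializedState)
            hash = hash * 31 + c;
        return hash;
    }
}

// Compare the current game state against the checksum stored in the
// replay for the same frame (replay is a hypothetical object here).
void ValidateFrame(int frame, string serializedState)
{
    if (replay.HasChecksum(frame) &&
        replay.GetChecksum(frame) != Checksum(serializedState))
    {
        Debug.LogError("Invalid game state at frame " + frame);
    }
}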

It failed, though it only failed to validate some frames, and the frames following an invalid one were not necessarily invalid too (this is something important to consider).

Image 1: one of the best tools in the world to check game states when replaying a game.

So I was right: I saw some differences, and now I could be sure something was happening. The thing was, with only the checksums I couldn't know what the real difference was. Next step: making something to detect it.

In order to do that, I changed the code to start saving (at least for debugging) the full game state, not only the checksum, and to report the differences between the game state stored in the replay and the current game state whenever checksum validation fails. It worked too; now I had the exact place where the differences were.
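
Since the whole game state is serialized into one string, finding the divergence can be as simple as this (an illustrative sketch, not the actual tool):

// When the checksums differ, walk both serialized states to find the
// first character where they diverge and log a small excerpt of each.
void ReportDifference(string storedState, string currentState)
{
    int i = 0;
    while (i < storedState.Length && i < currentState.Length &&
           storedState[i] == currentState[i])
        i++;
    int length = Mathf.Min(40, Mathf.Min(storedState.Length, currentState.Length) - i);
    Debug.LogError("States diverge at character " + i + ": '" +
        storedState.Substring(i, length) + "' vs '" +
        currentState.Substring(i, length) + "'");
}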

Image 2: it shows why serializing the whole game state into one string is the best thing to do in your life.

After testing it a bit, I noticed another curious thing: the game validation wasn't always failing given the same replay and the same speed. That gave me a hint that the problem was probably not related to the game code itself (the moving box).

So, if I played the replay at 1x, it validated properly. If I played the replay at 8x, it failed most of the time, but not always. It seemed there was something related to speed that I didn't understand yet.

I decided to test the same replay with Unity's timescale modified. My first test used 1x for the replay but 5x for the timescale, and validation failed; then the opposite, 10x for the replay but 0.1x for the timescale, and it worked well. So the problem seemed to be related to my accumulator logic inside the fixed-timestep logic?

Some test cycles later, it turned out that it was indeed a bug in one of the core classes of the engine!

The problem was in my class LockstepFixedUpdate. The first version was overriding the Update() method and performing the lockstep logic there; that worked OK as long as at most one fixed update was processed per Update(), but when a big delta time arrived, it only processed the lockstep logic once, for the first fixed update, and never again.

That means that if replay commands were to be processed at frame 3, we were at frame 1 and a big dt of 10 frames arrived, then the lockstep logic checked only at frame 1 and never again until all 10 frames were processed. This bug even bypassed the lockstep!

Since I had made a test to replicate the bug, it was really easy to fix: I changed it to process the lockstep logic on each fixed-step update, and it works fine now. I have high-speed replays!! YEAH!!
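
To illustrate the difference, here is a simplified sketch (not the actual LockstepFixedUpdate code; the names and structure are made up):

void Update(float deltaTime)
{
    accumulator += deltaTime;

    // Buggy version (simplified): the lockstep check ran once per Update(),
    // so one big deltaTime could advance many fixed steps with a single check:
    //
    //   if (lockstep.CanProcessFrame(currentFrame))
    //       while (accumulator >= fixedStep) { Step(); accumulator -= fixedStep; }

    // Fixed version: the lockstep logic runs once per fixed step, so the
    // commands queued for frame 3 are processed exactly at frame 3, even
    // when 10 frames' worth of time arrives at once.
    while (accumulator >= fixedStep)
    {
        lockstep.ProcessFrame(currentFrame); // runs queued commands for this frame
        Step();                              // advances the simulation one fixed step
        accumulator -= fixedStep;
        currentFrame++;
    }
}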

Conclusion

In the process of finding this bug I started to expand the engine's support and create better tools; this is really important if I want to build something solid on top of it.

The only way to detect issues as soon as possible is to iterate over the engine as soon as possible, and to do that, use cases are needed, and games provide the best use cases. In my case, I am using not only the game I am trying to make but also similar games as references when deciding what I want and how I want to test it: having replays, being able to play a replay at different speeds, being able to save replays, etc. Also, being able to replicate a bug in a small test case where you can iterate quickly to fix it is super useful.

Detecting (and having) problems like this in a small and simple game gives you an idea of the complexity of a medium-to-big game, with all its variables and difficulties; it is not something to underestimate. So when developers say they couldn't add multiplayer features to their game because it was really hard, it is not a lie.

I love all of this stuff, even though I understand it is not an easy path.

To complete this post, here is a video showing a prototype where I load and play a replay that was created by playing a two-player game over LAN, one player on my computer and the other on my phone:

The quote of the day is 'Fail as much as possible, as soon as possible, to avoid failing when it is too late'.

Hope you enjoyed the journey.
