During Bankin’ Bacon development we ended up using both Unity’s legacy Input System and the new Input System, and we wanted to share our experience with them.

We knew from the beginning we wanted to experiment with the new Input System but, since we had no experience with it and we only had three days, we started by using only the legacy one.

Legacy input system

After prototyping a bit and iterating over the game concept, we decided we wanted to control everything with Gamepads (using Xbox 360 Controllers) and that the player would be able to perform four actions: move (left stick), target (right stick), dash and attack.

For those actions, we created an input class that reads the Input and stores its state so it can be used later by our character class. The code looks something like this:

class UnitController : MonoBehaviour {

  // public so the character class can read the stored state
  public Vector2 movementDirection;
  public Vector2 fireDirection;

  public bool isFiring;
  public bool isDashing;

  public UnitControllerAsset inputAsset;

  void Update() {
    movementDirection.x = Input.GetAxis(inputAsset.moveHorizontalAxis);
    movementDirection.y = Input.GetAxis(inputAsset.moveVerticalAxis);
    ...
    isFiring = Input.GetButtonDown(inputAsset.fire1Button);
    ...
  }
}

The Player’s prefab can be configured to use a specific set of Input keys by creating a UnitControllerAsset asset and assigning it to the controller. The asset looks like this:

// the CreateAssetMenu attribute lets us create instances from the editor
// (the menu name here is illustrative)
[CreateAssetMenu(menuName = "BankinBacon/UnitControllerAsset")]
class UnitControllerAsset : ScriptableObject {
  public string moveHorizontalAxis;
  public string moveVerticalAxis;

  public string fireHorizontalAxis;
  public string fireVerticalAxis;

  public string fire1Button;
  public string fire2Button;
}

In order to perform actions, our character class checks the state of the UnitController values and acts accordingly. For example:

class UnitCharacter : MonoBehaviour {
  UnitController controller;

  void Update()
  {
    transform.position += controller.movementDirection * speed * Time.deltaTime;
    if (controller.isFiring && cooldownReady) {
      FireProjectile(controller.fireDirection);
    }
    ...
  }
}

Note: this is oversimplified code; the game’s actual code is super ugly.

From Unity’s InputManager side, we created action keys for each player and configured them using different joystick numbers:

This was hell to manage. I mean, it wasn’t that hard, but it was super easy to make mistakes and not know where. To simplify the task a bit, we normally modify the ProjectSettings/InputManager.asset file directly with a text editor so we can copy, paste and replace text.

Following this approach we quickly had something working for two players, and if we wanted more players we would just have to copy the actions and configure some prefabs.

Mac/Windows differences

Since life is not easy, buttons and axes are mapped differently between Windows and Mac (at least with the driver we were using for the Xbox 360 Controllers). To overcome this issue, we implemented a hack to support a different Input mapping per platform: we duplicate the action keys for each OS and read the right set at runtime. So, we end up having something like Player0_Fire1 for Windows and Player0_Fire1Mac for Mac (you can see that in the previous InputManager image). Here is an example of the hack code:

void Update() {
  if (Application.platform == RuntimePlatform.OSXPlayer || Application.platform == RuntimePlatform.OSXEditor)
  {
      fx = Input.GetAxis(_inputAsset.fireHorizontalAxis + "Mac");
      fy = Input.GetAxis(_inputAsset.fireVerticalAxis + "Mac");
      firing1 = Input.GetButtonDown(_inputAsset.fire1Button + "Mac");
      firing2 = Input.GetButtonDown(_inputAsset.fire2Button + "Mac");
  }
}

We are not responsible if you want to use this code and your computer explodes.

By the end of the second day of development we had our Gamepads working, and we were able to go from 2- to 4-player support by just adding the action mappings for the new players in the InputManager and creating some prefabs.

Even though that was working fine on the computer we were using for development, it didn’t work on our personal computers at home, and we didn’t know why.

New input system

Since we were worried this could happen to more people, and since we love livin’ on the edge (we used Unity 2019.1 for the game), we decided to spend the last day trying to fix our input issues by using the new Input System (whaaaaaaaat?).

We started by creating another project named LearnAboutNewInputSystem and importing the needed packages by following these installation steps. The idea was to iterate and learn the API in a safe context and, only after we managed to do what we needed, integrate it into the game.

Once we had the project ready, we created an Input Actions asset with Create > Input Actions and configured some actions to start testing. Here is the asset configuration:

We specified a continuous axis Action, named Movement, that receives input from the Gamepad’s left stick and from the keyboard WASD keys. In order to react to that Action, we created a GameObject with a PlayerInput MonoBehaviour and mapped the Action to our custom MonoBehaviour methods using Unity Events.

The PlayerInput inspector automatically shows Unity Events for each action you create in the Input Actions configuration asset:

Note: it has a bug that adds the same action multiple times each time it reloads code, or something like that.

And here is our code to handle the action event:

using UnityEngine;
using UnityEngine.InputSystem;

// needs to be a MonoBehaviour so it can be hooked up through Unity Events
public class MyControllerTest : MonoBehaviour {
  public void OnMovement(InputAction.CallbackContext context) {
    var axis = context.ReadValue<Vector2>();
    Debug.LogFormat("Moving to direction {0}", axis);
  }
}

That worked well; however, we started to see some possible problems. First, even though we received callbacks continuously for the Gamepad’s left stick, we only received callbacks for the keyboard when a key was pressed or released, not every frame as we expected. Second, we didn’t know how to identify different Gamepads, so with this test, each time a left stick on any connected Gamepad was moved, our callback was invoked.

Note: we didn’t know about the PlayerInputManager class while developing the game. We tried it now (while writing this blog post) but we ran into some problems there too.

While reading about the new Input System on Unity’s Forums, we found some people trying to do something similar; one dev suggested doing this and also checking the test cases for how to use the API. Following those recommendations, we managed to make our first version of multiple Gamepad support.

Here is the code:

using System.Linq;
using UnityEngine;
using UnityEngine.InputSystem;
using UnityEngine.InputSystem.Users;

public class MyPlayersManager : MonoBehaviour {
  InputUser[] _users;

  Gamepad[] _gamepads;

  void Start()
  {
    _users = new InputUser[4];
    _gamepads = new Gamepad[4];

    // a non-zero value enables the onUnpairedDeviceUsed notifications below
    InputUser.listenForUnpairedDeviceActivity = 4;

    InputUser.onChange += OnControlsChanged;
    InputUser.onUnpairedDeviceUsed += ListenForUnpairedGamepads;

    for (var i = 0; i < _users.Length; i++)
    {
      _users[i] = InputUser.CreateUserWithoutPairedDevices();
    }
  }

  void OnControlsChanged(InputUser user, InputUserChange change, InputDevice device)
  {
    if (change == InputUserChange.DevicePaired)
    {
      var playerId = _users.ToList().IndexOf(user);
      _gamepads[playerId] = device as Gamepad;
    } else if (change == InputUserChange.DeviceUnpaired)
    {
      var playerId = _users.ToList().IndexOf(user);
      _gamepads[playerId] = null;
    }
  }

  void ListenForUnpairedGamepads(InputControl control)
  {
    if (control.device is Gamepad)
    {
      for (var i = 0; i < _users.Length; i++)
      {
        // find a user without a paired device
        if (_users[i].pairedDevices.Count == 0)
        {
          // pair the new Gamepad device to that user
          _users[i] = InputUser.PerformPairingWithDevice(control.device, _users[i]);
          return;
        }
      }
    }
  }
}

What we do here is listen for any raw event, for example pressing a button, from unpaired Gamepads, and pair those Gamepads with users that have no paired devices yet.

On the Forums they also recommend creating a new instance of the Input Actions asset for each user. We tested that and it somewhat worked, but we realized we didn’t need it for the game, so we decided to just read the Gamepad values directly.

Integrating it in Bankin’ Bacon

To integrate it into the game and be able to use it in any scene, we created a singleton named UnitNewInputSingleton, implemented as a ScriptableObject and initialized the first time it is accessed. Each time we want to know the state of a Gamepad for a user, we add a dependency to the asset and use it directly from code.
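The singleton itself isn’t shown in this post; based on how it is used below, a minimal sketch could look like this (the method names match the usage in the next snippets, but the body is a reconstruction, not the game’s actual code):

using UnityEngine;
using UnityEngine.InputSystem;

public class UnitNewInputSingleton : ScriptableObject {

  // raw access to the paired Gamepads, indexed by player id
  // (filled by pairing logic like the MyPlayersManager shown before)
  public Gamepad[] Gamepads = new Gamepad[4];

  int _registeredPlayers;

  // returns a new player id for a controller that just joined
  public int RegisterPlayer() {
    return _registeredPlayers++;
  }

  // the Gamepad paired to this player, or null if none was paired yet
  public Gamepad GetGamepad(int playerId) {
    return playerId < Gamepads.Length ? Gamepads[playerId] : null;
  }
}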

To implement a new controller using the new Input System, we first created an abstract class for the UnitController and then created a new implementation that uses a reference to the UnitNewInputSingleton to ask for the raw data of the Player’s Gamepad. Here is the code of the new input:

using UnityEngine;
using UnityEngine.InputSystem;

public class UnitControllerNewInput : UnitControllerBaseInput {

  [SerializeField]
  private UnitNewInputSingleton _inputMaster;

  [SerializeField] 
  private UnitControllerInputAsset _keyboardControls;

  private Vector2 _moveDirection;
  private Vector2 _targetDirection;

  public override Vector3 MoveDirection => new Vector3(_moveDirection.x, 0, _moveDirection.y);
  public override Vector3 FireDirection => new Vector3(_targetDirection.x, 0, _targetDirection.y);

  public override bool IsFiring1 { get; set; }
  public override bool IsFiring2 { get; set; }

  private int _inputPlayerId;

  private void Start()
  {
    _inputPlayerId = _inputMaster.RegisterPlayer();
  }

  private void Update()
  {
    _moveDirection = Vector2.zero;
    _targetDirection = Vector2.zero;

    var gamepad = _inputMaster.GetGamepad(_inputPlayerId);

    if (gamepad != null)
    {
      _moveDirection = gamepad.leftStick.ReadValue();
      _targetDirection = gamepad.rightStick.ReadValue();
      // latch the firing flags until the character consumes and resets them
      IsFiring1 = gamepad.rightShoulder.wasPressedThisFrame || IsFiring1;
      IsFiring2 = gamepad.leftShoulder.wasPressedThisFrame || IsFiring2;
    }
  }
}

Since we had access to the raw input, in some scenes where we just wanted to know if any Gamepad button was pressed, we iterated over all the Gamepads. We used this to restart the game, for example:

var start = _inputMaster.Gamepads.Any(g => g != null && g.startButton.wasReleasedThisFrame);
var select = _inputMaster.Gamepads.Any(g => g != null && g.selectButton.wasReleasedThisFrame);

if (start)
{
    OnGameRestart();
} else if (select)
{
    SceneLists.Load().LoadMenu();
}

If you want to see more of our one-day solution’s code, check the game’s GitLab page.

Finally

Even though the legacy Input System works for quick prototyping, it is really limited for Gamepad support and painful to configure. It doesn’t even support remapping joystick actions through Unity’s default Launcher window.

The new Input System is already looking better, but we believe it still has a lot of issues to fix and is probably not ready for production. It also has some strange design decisions (in our humble opinion), like having split-screen stuff in the API and configuration scripts. The API should be clean and provide low-level control, and then there should be optional packages that do more stuff depending on the game.

We hope they keep working on it, improve it as much as possible and release a stable version soon.


We want to share a bit about the game we made for Ludum Dare Jam #44 last weekend: Bankin’ Bacon!


And now a bit of dev story by our friend Enzo.

Dev Story

Initially, when we heard the theme, Your life is currency, we were a bit bummed, since we were set on doing a roguelike shoot ’em up and weren’t sure how to fit the theme into the mechanics. However, after a few cups of coffee and a brainstorming session around the whiteboard, we came up with the idea that would flesh out to be Bankin’ Bacon.

Initial ideas on character concept.

We discussed using lifesteal mechanics, simply addressing the theme through art assets, using HP for upgrades (but adding a lot of upgrades would take time away from polishing), and more. Eventually we went with the idea of HP being your ammo and the price to pay for every skill.

Once the core idea behind the mechanics was settled, we needed a main character. We tossed around many ideas, like a gold golem being chased by dwarves or bankers throwing money stacks at each other, but chose to go with a lovable character that would fit the theme perfectly… a piggy bank that shoots its own coins.

Our characters are taking shape.

A few hours later our awesome artist had this bouncy fella ready to integrate.

So, now that the character was settled and the artist was working on the environment art, we had to fully define the game’s mechanics.

Since we have some game jams under our belt, we’ve learnt the hard way that the best way to address features during a jam is to keep the ideal game in mind while defining the minimum viable product (MVP) needed to call it a closed game. Then, start building the MVP and polish our way on top of that.

Terrain and colors concept.

Our main goal was to make a game that would deliver a fun couch multiplayer experience but instead of shooting like crazy you’d have to be frugal about it and make your shots count, after all… Your life is currency.

In order to make the players play frugally, we needed mechanics that would encourage that sort of play.

Planned mechanics that made it into the game :)

  • Shooting does self damage to the shooter.
  • Getting hit does more damage than shooting. (so shooting would make sense)
  • Players die if getting shot drops their HP below zero.
  • Dashing makes you invulnerable but costs HP.
  • Players with no coins won’t be able to shoot. (So you can’t die from shooting.)

Planned mechanics that didn’t make it into the game :(

  • Starting out with 50% max HP
  • Dashing into coins would steal the coin shot.
  • Player scale would vary according to HP. (so that the winning player would have a bigger hit box)
  • Player speed would vary according to HP. (So that the losing player would be faster)
  • Different coin values (i.e. quarters, nickels, pennies, etc.)

Why mention the mechanics that didn’t make it into the game? Well, to convey the whole “ideal game vs MVP” idea of jam development, and also to explain the reason behind adding a new mechanic: coin dropping.

Even though we managed to make the game encourage players to be frugal with shooting and dashing, we were also discouraging them from doing anything fun. We needed a way to keep coins around so players could keep playing, but regular power-up drops would inflate the game’s economy. Making about 80% of the spent coins drop around the level turned out to be the right way to go.

When it comes to level design, we started by trying out Unity’s ProBuilder, which came in handy for prototyping really complex levels with tunnels and nooks to hide in that would have turned the game into more of a hide ’n’ seek type of play. But two issues came up from doing that:

  • The camera needed for tunnels and the like was a bit hectic for this type of game and stretched the art assets, so we ended up switching to a fixed camera.
  • That sort of level would require an amount of art assets impossible to produce during a jam without sacrificing the art standards.

However, even though we didn’t use assets made in ProBuilder, it played a crucial role alongside ProGrids for testing out level sizes, prototyping and ultimately assembling the level with the final assets. After several iterations using complex designs, simple levels with few obstacles proved to be the way to go, leaving more time for polishing the environmental decoration.

In conclusion, the overall experience was more than satisfactory, leaving us with a somewhat polished game that could provide plenty of fun couch multiplayer experiences.

To finish up, we leave you a longer gameplay video, enjoy!

Thanks for reading!


Over the last few months, I was researching, in my spare time, some ideas for reusing code between Unity projects. I was mainly interested in using generic code from Iron Marines in multiple projects we were prototyping at Ironhide.

The target solution I had in mind was something like NPM or Maven, where packages are stored in a remote repository and you can depend on a version of them in any project by just adding a configuration.

Some time ago, Unity added the Unity Package Manager (UPM), which does exactly that: it automatically downloads a list of packages given a dependency configuration. I was really excited about this solution since it goes in the direction I wanted.

Note: to know more about dependency management, follow this link; even though it is about a specific dependency manager, it gives a general idea of how they work.

Workflow

So, a good workflow would be something like this.

Imagine there are two projects, project A and project B, where the first has some useful code we want to use in the second.

In order to do that, project A must be uploaded to the dependency system and tagged with a specific version to be identified, for example, version 0.0.1.

Then, to import that code in project B, we just add a dependency in the package dependency declaration.

{
  "dependencies": {
    "com.gemserk.projectA": "0.0.1"
  }
}

Suppose we keep working on project B while someone working on other stuff adds a new functionality to project A, and we want it. Since the new feature was released in version 0.0.5 of project A, we need to update the package dependency declaration to depend on it, like this:

{
  "dependencies": {
    "com.gemserk.projectA": "<strong>0.0.5</strong>"
  }
}

And that will download it and now project B can access the new feature. That is basically a common workflow.

Since we aren’t Unity, we can’t upload our code to their packages repository (yet?). However, there are other ways to declare dependencies stored elsewhere.

Depend on a project in the local filesystem

UPM supports depending on a project stored in the local filesystem like this:

{
  "dependencies": {
    "com.gemserk.projectA": "file:../../ProjectA/"
  }
}

For example, project A is stored in a sibling folder to project B.

/Workspace
  /ProjectB
    /Packages
      manifest.json
  /ProjectA

Note: There is a great Gist by LotteMakesStuff about this topic and more (thanks Rubén for the link).

This approach has the advantage of using local projects with any structure, stored anywhere (Git or not, more on that later). It even allows depending on a sub folder of project A, so project A could be a complete Unity project with lots of stuff (test scenes, testing assemblies, etc) but project B only depends on the shared code.

It needs all developers to use a similar folder structure for all the projects and to download all of them together to work. Transitive dependencies are not supported by UPM with this approach, so if project A depends on a project Zero, project B would never automatically find out.

It has some unexpected behaviors in code, at least in Visual Studio, when editing project A (adding new code, refactoring, etc.) through project B’s solution opened from Unity. That is probably a side effect of how they create the VS solution, or maybe just a bug.

It doesn’t support versioning. Each time project A is modified, project B will immediately see the change. This could be an issue if project A changed its API and we don’t want to (or can’t) update project B yet.

Depend on a Git project

Another option is to use a link to a Git project, like this:

"dependencies": {
  "com.gemserk.projectA": "https://github.com/gemserk/projectA.git",
}

Dependencies can be stored directly in Git projects (on GitHub or any other Git server). It supports depending on a specific commit, branch or tag to simulate versioning (for example, by appending #v0.0.1 to the URL), but changing versions must be done by hand since UPM doesn’t show the list of versions like it does with Unity packages.

Since it looks at the root folder, a different Git project is needed for each dependency. This is not necessarily a bad thing, but it makes it impossible to have one Git project with multiple folders, one per package, something that might be common in some cases, like we did with our LibGDX projects.

A bigger problem related to this is that project A should have only the code to be used by other projects, otherwise UPM will import everything.

It also lacks support for transitive dependencies since UPM doesn’t process the dependencies declaration of the Git project.

UPDATE: this is a great tutorial on using Git as a package provider.

Mixing both approaches with Git submodules

There is also a mix of the filesystem reference and Git submodules that overcomes the versioning disadvantage of the first approach.

For example, project A is downloaded as a submodule inside project B’s structure, pointing to a specific branch/tag/commit.

/ProjectB_Root
   /ProjectB
      /Packages/manifest.json
   /ProjectA (Git submodule, specific tag)

In this way, we have a specific version of project A in each project, and we can share code by pushing to that submodule and tagging versions. We still have the transitive dependency limitation, and other problems related to Git submodules (more on that later).

Using a packages repository

The last way is to use a local or remote package repository like Nexus, Artifactory, NPM or Verdaccio, and then configure it in the manifest.json:

{
  "registry": "https://unitypackages.gemserk.com",
  "dependencies": {
    "com.gemserk.projectA": "0.0.1"
  }
}

This approach is similar to using Unity’s packages, but a remote repository on a private server is needed to share code among developers. It is also possible to upload the code to a public server like NPM. Obviously, a private server needs maintenance, but I believe it is the long-term solution if you have private code you can’t share with the world.

At Gemserk we had our own repository using Nexus, and at some point we even uploaded some of our dependencies to Maven Central. We configured it when we started making games and probably touched it once or twice per year; maintenance wasn’t a problem at all.

I tried to use version ranges with UPM to see if I could declare a dependency on a package 1.0.x and have it automatically download the latest 1.0 (for example, 1.0.26), but I couldn’t make it work; I’m not sure if it isn’t supported yet or if I was doing it wrong. UPDATE: it is not supported yet, but it is on the long-term plan.

With this approach, project A is integrated like any Unity package and the code is accessible in the VS solution, but it can’t be easily modified through project B. Any change to project A must be made apart, a new version uploaded to the repository, and then project B’s dependency declaration updated. In some cases this shouldn’t be an issue but, for some development cycles, it could be.

Note: when we used Maven, there was a concept of SNAPSHOT, a work-in-progress version that is overwritten each time it is uploaded to the dependency system, so projects depending on it automatically use the latest version. That was super useful for development and the reason I tested version ranges with UPM.

UPDATE: I tested creating a package server using Verdaccio to see how UPM interacts with it and how hard it is to upload a package to the server. It turns out it is relatively simple: UPM shows the list of available versions and probably handles transitive dependencies (I didn’t test that yet). I followed this tutorial, in case you want to test it too.

After that initial research and considering some of the current limitations of the UPM, I started looking for other options as well.

Using Git Submodules without UPM

In this approach, the code is imported as a Git submodule inside project B itself.

/ProjectB
   /Assets
      /ProjectA (submodule)

Following this approach the code is more integrated: changes can be made and pushed to that submodule directly from project B. In case there are different versions, project B can also depend on a specific branch/tag/commit of that submodule.

Similar to the Git + UPM approach, the submodule should contain only the stuff meant to be reused, otherwise everything will be imported too, since submodules point to the root folder of the Git project. However, being integrated in the other project, it should be easier to edit in some way.

It is an interesting approach but it has some drawbacks as well, for example having each developer update each Git submodule manually, or the root folder problem.

Reusing code by hand

There is an ultimate form of reusing code: copy and paste it from project to project. Even though it is ugly and doesn’t scale at all, it might work when having to do some quick tests.

Conclusion

Whatever the chosen way of sharing code, the code must first be decoupled in a nice way, and that’s what I’m planning to write about in the next blog post.

That was my research for now; there are a lot of links to keep digging into around the UPM solution. I hope they keep improving it to support transitive dependencies for the filesystem and Git approaches, as well as version ranges.



As I said in the previous blog post, some time ago I started working on a new Fog of War / Vision System solution aiming for the following features:

  • Being able to render the fog of war of each player at any time, for replays and debug.
  • Being able to combine multiple players’ visions for alliances, spectator mode or watching replays.
  • Blocking vision by different terrain heights or other elements like bushes.
  • Optimized to support 50+ units at the same time on mobile devices at 60 FPS.
  • It should look similar to modern games like Starcraft 2 and League of Legends (in fact, SC2 is already eight years old, not sure if that is considered a modern game or not :P).

This is an example of what I want:

Example: fog of war in Starcraft 2

To simplify writing this article a bit, when I write unit I mean not only units but also structures or anything else that could affect the fog of war in the game.

Logic

First, there is a concept named UnitVision used to represent anything that reveals fog. Here is the data structure:

struct UnitVision
{
   // A bit mask representing a group of players inside 
   // the vision system.
   int players;

   // The range of the vision (in world coordinates)
   float range;
   
   // the position (in world coordinates)
   vector2 position;

   // used for blocking vision
   short terrainHeight;
}

Normally, a game will have one for each unit but there could be cases where a unit could have more (for example a large unit) or even none.

A bit mask is used to specify a group of players, so, for example, if player0 is 0001 and player1 is 0010, then 0011 is the group formed by player0 and player1. Since it is an int, it supports up to 32 players (one bit per player).

Most of the time the group will contain only one player, but there are situations, like a general effect or a cinematic, that need to be seen by all players, and one possible solution is to use a UnitVision with more than one player.

The terrainHeight field stores the current height of the unit and is used to decide whether vision is blocked. It will normally be the world terrain’s height at that position for a ground unit, but there are cases, like flying units or special abilities changing the unit’s height, that should be considered when calculating blocked vision. It is the game’s responsibility to update that field accordingly.
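For example, the game could update that field each frame with something like this sketch (Unit, isFlying and the grid coordinates are illustrative names, not part of the system):

// a sketch: ground units take the terrain's height, flying units use a
// height above any terrain level so nothing blocks their vision
void UpdateVisionHeight(ref UnitVision vision, Unit unit, Terrain terrain) {
  vision.terrainHeight = unit.isFlying
      ? short.MaxValue
      : terrain.GetHeight(unit.gridX, unit.gridY);
}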

There is another concept named VisionGrid that represents the vision for all the players. Here is the data structure:

struct VisionGrid
{
    // the width and height of the grid (needed to access the arrays)
    int width, height;

    // array of size width * height, each entry has an int with the 
    // bits representing which players have this entry in vision.
    int[] values;

    // similar to the values but it just stores if a player visited
    // that entry at some point in time.
    int[] visited;

    void SetVisible(i, j, players) {
        values[i + j * width] |= players;
        visited[i + j * width] |= players;
    }

    void Clear() {
        values.clear(0);
    }

    bool IsVisible(i, j, players) {
        return (values[i + j * width] & players) > 0;
    }

    bool WasVisible(i, j, players) {
        return (visited[i + j * width] & players) > 0;
    }
}

Note: arrays have a size of width * height.

The bigger the grid, the slower it is to calculate vision, but it also holds more information, which could be useful for units’ behaviors or to get better fog rendering. The smaller the grid, the opposite. A good balance must be defined from the beginning in order to build the game over that decision.

Here is an example of a grid over the game world:

Given a grid entry for a world position, the structure stores an int in the values array with the data of which players have that position inside vision. For example, if the entry has 0001 stored, only player0 sees that point; if it has 0011, then both player0 and player1 do.

This structure also stores, in the visited array, whether a player revealed fog at some point in the past, which is used mainly for rendering purposes (gray fog) but could also be used by the game logic (to check if a player already knows some information, for example).

The method IsVisible(i, j, players) will return true if any of the players in the bit mask has the position visible. The method WasVisible(i, j, players) is similar but will check the visited array.

So, for example, if player1 and player2 (0010 and 0100 in bits) are in an alliance, then when player2 wants to know if an enemy is visible to perform an attack, it can call the IsVisible method with the bit mask for both players, 0110.
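In code, an alliance is just the OR of the players’ bit masks; for example, for some grid entry (i, j):

const int player1 = 1 << 1; // 0010
const int player2 = 1 << 2; // 0100

var alliance = player1 | player2; // 0110

// true if any of the two allied players sees that entry
var enemyVisible = visionGrid.IsVisible(i, j, alliance);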

Calculating vision

Each time the vision grid is updated, the values array is cleared and recalculated.

Here is a pseudo code of the algorithm:

void CalculateVision()
{
   visionGrid.Clear()
   
   for each unitVision in world {
      for each gridEntry inside unitVision.range {
         if (not IsBlocked(gridEntry)) {
            // where set visible updates both the values and the
            // visited arrays.
            grid.SetVisible(gridEntry.i, gridEntry.j, 
                            unitVision.players)
         }
      }
   }
}

To iterate over the grid entries inside range, it first converts the vision’s position and range to grid coordinates, named gridPosition and gridRange, and then draws a filled circle of radius gridRange around gridPosition.
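A minimal sketch of that iteration, using a bounding box plus a distance check (the real implementation could use a midpoint-circle style algorithm instead; gridPosition and gridRange are assumed to be already converted to grid coordinates):

void DrawFilledCircle(VisionGrid grid, Vector2Int gridPosition, int gridRange, int players) {
  for (var j = Mathf.Max(gridPosition.y - gridRange, 0); j <= Mathf.Min(gridPosition.y + gridRange, grid.height - 1); j++) {
    for (var i = Mathf.Max(gridPosition.x - gridRange, 0); i <= Mathf.Min(gridPosition.x + gridRange, grid.width - 1); i++) {
      var dx = i - gridPosition.x;
      var dy = j - gridPosition.y;
      // skip entries outside the circle
      if (dx * dx + dy * dy > gridRange * gridRange)
        continue;
      if (!IsBlocked(i, j))
        grid.SetVisible(i, j, players);
    }
  }
}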

Blocked vision

In order to detect blocked vision, there is another grid of the same size with terrain’s height information. Here is its data structure:

struct Terrain {
    // the width and height of the grid (needed to access the arrays) 
    int width, height;

    // array of size width * height with the terrain level of each 
    // grid entry. 
    short[] heights;

    short GetHeight(i, j) {
       return heights[i + j * width];
    }
}

Here is an example of how the grid looks over the game:

Note: that image is handmade as an example, there might be some mistakes.

While iterating over the vision grid entries within a UnitVision’s range, to detect whether an entry is visible, the system checks that there are no obstacles between the entry and the vision’s center. To do that, it draws a line from the entry’s position to the center’s position.

If all the grid entries in the line are at the same height or below, then the entry is visible. Here is an example where the blue dot represents the entry being calculated and the white dots the line to the center.

If at least one entry in the line is at a greater height, then the line of sight is blocked. Here is an example where the blue dot represents the entry we want to test, white dots represent entries in the line at the same height, and red dots represent entries on higher ground.

Once it detects one entry above the vision’s height, it doesn’t need to keep drawing the line to the vision’s center.

Here is a pseudo algorithm:

bool IsBlocked()
{
   for each entry in line to unitVision.position {
      height = terrain.GetHeight(entry.position)
      if (height > unitVision.terrainHeight) {
         return true;
      }
   }
   return false;
}
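Here is a more concrete sketch of the same check, using a Bresenham-style line walk in grid coordinates (terrain is the Terrain grid defined above; the actual code may draw the line differently):

bool IsBlocked(Vector2Int entry, Vector2Int center, short visionHeight) {
  int dx = Mathf.Abs(center.x - entry.x), sx = entry.x < center.x ? 1 : -1;
  int dy = -Mathf.Abs(center.y - entry.y), sy = entry.y < center.y ? 1 : -1;
  int err = dx + dy;
  int x = entry.x, y = entry.y;

  while (true) {
    // stop as soon as any entry in the line is on higher ground
    if (terrain.GetHeight(x, y) > visionHeight)
      return true;
    if (x == center.x && y == center.y)
      break;
    var e2 = 2 * err;
    if (e2 >= dy) { err += dy; x += sx; }
    if (e2 <= dx) { err += dx; y += sy; }
  }
  return false;
}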

Optimizations

  • If an entry was already marked as visible while iterating over all unit visions then there is no need to recalculate it.
  • Reduce the size of the grid.
  • Update the fog less frequently (in Starcraft there is a delay of about 1 second, as I recently noticed while playing; it is an old game).

Rendering

To render the Fog of War, I first have a small texture of the same size as the grid, named FogTexture, to which I write a Color array of the same size using the Texture2D.SetPixels() method.

Each frame, I iterate over each VisionGrid entry and set the corresponding Color in the array using the values and visited arrays. Here is a pseudo algorithm:

void Update()
{
   for i, j in grid {
       pixel = i + j * width
       colors[pixel] = black
       if (visionGrid.IsVisible(i, j, activePlayers))
           colors[pixel] = white
       else if (visionGrid.WasVisible(i, j, activePlayers))
           colors[pixel] = grey // this is for previous vision
   }
   texture.SetPixels(colors)
   texture.Apply() // upload the modified pixels to the GPU
}

The activePlayers field contains a bit mask of players and is used to render the current fog of those players. It will normally contain just the main player during the game, but in situations like replay mode, for example, it can change at any time to render a different player’s vision.

In the case that two players are in an alliance, a bitmask for both players can be used to render their shared vision.

After filling the FogTexture, it is rendered in a RenderTexture using a Camera with a Post Processing filter used to apply some blur to make it look better. This RenderTexture is four times bigger in order to get a better result when applying the Post Processing effects.

Once I have the RenderTexture, I render it over the game with a custom shader that treats the image as an alpha mask (white is transparent and black is opaque, or red in this case, since I don’t need the other color channels), similar to what we did with Iron Marines.

Here is how it looks:

And here is how it looks in Unity’s Scene View:

The render process is something like this:

Easing

There are some cases when the fog texture changed dramatically from one frame to the other, for example when a new unit appears or when a unit moves to a higher ground.

For those cases, I added easing to the colors array, so each entry transitions in time from the previous state to the new one in order to minimize the change. It was really simple; it added a bit of performance cost when processing the texture pixels, but in the end it looked so much better that I preferred to pay that extra cost (it can be disabled at any time).
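The easing itself can be as simple as interpolating each pixel toward its new color every frame. A minimal sketch (targetColors and easingSpeed are illustrative names):

void EaseColors(Color[] colors, Color[] targetColors, float easingSpeed) {
  for (var pixel = 0; pixel < colors.Length; pixel++) {
    // move each pixel a fraction of the way toward its new state
    colors[pixel] = Color.Lerp(colors[pixel], targetColors[pixel], easingSpeed * Time.deltaTime);
  }
}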

At first I wasn’t sure about writing pixels directly to a texture since I thought it would be slow but, after testing on mobile devices, it turned out to be quite fast, so it shouldn’t be an issue.

Unit visibility

To know if a unit is visible or not, the system checks all the entries the unit covers (big units could occupy multiple entries); if at least one of them is visible, then the unit is visible. This check is useful to know if a unit can be attacked, for example.

Here is a pseudo code:

bool IsVisible(players, unit)
{ 
  // it is a unit from one of the players
  if ((unit.players & players) > 0)
    return true;

  // returns all the entries where the unit is contained
  entries = visionGrid.GetEntries(unit.position, unit.size)

  for (entry in entries) {
    if (visionGrid.IsVisible(entry, players)) 
      return true;
  }

  return false;
}

Which units are visible is tied to the fog being rendered, so we use the same activePlayers field to decide whether to show or hide a unit.

To avoid rendering hidden units, I followed an approach similar to what we did for Iron Marines using the GameObject’s layer: if the unit is visible, the default layer is set on its GameObject, and if it is not, a layer that is culled from the game camera is set instead.

void UpdateVisibles() { 
  for (unit in units) { 
    unit.gameObject.layer = IsVisible(activePlayers, unit) ? defaultLayer : hiddenLayer; 
  } 
}

Finally

This is how everything looks working together:

Conclusion

Once I simplified the world into a grid and started thinking in terms of textures, it became easier to apply different kinds of image algorithms, like drawing a filled circle or a line, which were really useful when optimizing. There are even more image operations that could be used for the game logic and rendering.

SC2 has a lot of information in terms of textures, not only the player’s vision, and they provide an API to access it that is being used for machine learning experiments.

I am still working on more features, and I plan to try some optimization experiments like using the C# Job System. I am really excited about that one, but I first have to transform my code to make it work. I would love to write about that experiment.

Using a blur effect for the fog texture has some drawbacks, like revealing a bit of higher ground when it shouldn’t. I want to research other image effects where the black color is not modified when blurring, though I’m not sure if that is possible or if it is the proper solution. One thing I do want to try is an upscaling technique like the one used in League of Legends when creating the fog texture, and then reduce the applied blur effect, all of this to try to minimize the issue.

After writing this blog post and having to create some images to explain the concepts, I believe it would be great to add more debug features, like showing the vision or terrain grid itself at any time, or showing a line from one point to another to show where the vision is blocked and why, among other stuff. That could be useful at some point.

This blog post was really hard to write since, even though I am familiar with the logic, it was hard to define the proper concepts to explain it clearly. In the end, I feel like I forgot to explain some things, but I can’t pin down exactly what.

As always, I really hope you enjoyed it, and it would be great to hear your feedback so I can improve this kind of blog post.

Thanks for reading!


For the last 3 years, I’ve been working on Iron Marines at Ironhide Game Studio, a real-time strategy game for mobile devices. During its development, we created a Fog of War solution that works pretty well for the game, but it lacks some of the common features other RTS games have, and how to improve on that is something I wanted to learn at some point in my life.

Recently, after reading a Riot Games Engineering blog post about Fog of War in League of Legends, I got motivated and started prototyping a new implementation.

In this blog post I will explain Iron Marines’ Fog of War solution in detail, and then I will write another blog post about the new solution and explain why I consider it better than the first one.

Fog of War in Strategy Games

It normally represents either missing information about the battle, for example not knowing the terrain yet, or outdated information, for example the old position of an enemy base. Player units and buildings provide vision that removes Fog during the game, revealing information about the terrain and the current location and state of the enemies.

Example: Dune 2 and its Fog of War representing the unknown territory (by the way, you can play Dune 2 online).

Example: Warcraft: Orcs and Humans' Fog of War (it seems you can play Warcraft online too).

The concept of Fog of War has been used in strategy games for more than 20 years now, which is a lot for video games.

Process

We started by researching other games and deciding what we wanted before implementing anything.

After that, we decided to target a solution similar to Starcraft’s (by the way, it is free to play now, just download Battle.net and create an account). In that game, units and buildings have a range of vision that provides vision to the Player. Unexplored territory is covered with full-opacity black fog, while previously explored territory is covered by half-opacity fog, revealing what the Player knows about it, information that doesn’t change during the game.

Enemy units and buildings are visible only while they are inside the Player’s vision, but buildings leave behind a last known location after they stop being visible. I believe the main reason is that buildings normally can’t move (with the exception of some Terran buildings), so it is logical to assume they will stay in that position after losing vision, and that might be vital information about the battle.

Iron Marines

Given those rules, we created mock images to see how we wanted it to look in our game before implementing anything.

Mock Image 1: Testing terrain with different kind of Fog in one of the stages of Iron Marines.

Mock Image 2: Testing now with enemy units to see when they should be visible or not.

We started by prototyping the logic to see if it worked for our game or not and how we should adapt it.

For that, we used an int matrix representing a discrete version of the game world, holding the Player’s vision. A matrix entry with value 0 means the Player has no vision at that position, and a value of 1 or greater means it has.

Image: in this matrix there are 3 visions, and one has greater range.

Units’ and buildings’ visions add 1 to the value of all entries representing world positions inside their vision range. Each time they move, we first subtract 1 around the previous position and then add 1 around the new one.
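A minimal sketch of that counting logic, using a square area for brevity instead of a circular range:

class VisionMatrix {
  public int width, height;
  public int[] values; // 0 = no vision, >0 = number of visions covering the entry

  public void AddVision(int cx, int cy, int range, int delta) {
    for (var j = Mathf.Max(cy - range, 0); j <= Mathf.Min(cy + range, height - 1); j++)
      for (var i = Mathf.Max(cx - range, 0); i <= Mathf.Min(cx + range, width - 1); i++)
        values[i + j * width] += delta; // +1 when entering, -1 when leaving
  }

  public bool IsVisible(int i, int j) {
    return values[i + j * width] > 0;
  }
}

Moving a unit is then AddVision(oldX, oldY, range, -1) at the old position followed by AddVision(newX, newY, range, +1) at the new one.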

We have one matrix per Player, which is used for showing or hiding enemy units and buildings and for auto-targeting abilities that can’t fire outside the Player’s vision.

To determine whether an enemy unit or building is visible, we first get the corresponding entry of the matrix by transforming its world position, and check if the stored value is greater than 0. If it isn’t, we change its GameObject’s layer to one that is culled from the main camera to avoid rendering it; we named that layer “hidden”. If it is visible, we change it back to the default layer, so it starts being rendered again.

Image: shows how enemy units are not rendered in the Game view. I explain later why buildings are rendered even outside the Player’s vision.

Visuals

We started by just rendering a black or grey quad over the game world for each matrix entry; here is an image showing how it looked (it is the only one I found in the chest of memories):

This allowed us to prototype and decide on some features we didn’t want. In particular, we avoided blocking vision with obstacles like mountains or trees, since we preferred to avoid the feeling of confinement, and we also don’t have multiple levels of terrain like other games do. I will talk more about that feature in the next blog post.

After we knew what we wanted and had tested it in the game for a while, we decided to start improving the visual solution.

The improved version consists of rendering a texture with the Fog of War over the entire game world, similar to what we did when we created the visual mocks.

For that, we created a GameObject with a MeshRenderer and scaled it to cover the game world. That mesh renders a texture named FogTexture, which contains the Fog information, using a Shader that treats pixel colors as an inverted alpha channel: from white as fully transparent to black as fully opaque.

Now, in order to fill the FogTexture, we created a separate Camera, named FogCamera, that renders into the texture using a RenderTexture. For each object that provides vision in the game world, we create a corresponding GameObject inside the FogCamera’s view by transforming its position accordingly and scaling it based on the vision’s range. We use a separate Unity Layer, culled from the other cameras, so that only the FogCamera renders those GameObjects.

To complete the process, each of those objects has a SpriteRenderer with a small white ellipse texture, rendering white pixels into the RenderTexture.

Note: we use an Ellipse instead of a Circle to simulate the game perspective.

Image: This is the texture used for each vision, it is a white Ellipse with transparency (I had to make the transparency opaque so the reader can see it).

Image: this is an example of the GameObjects and the FogCamera.

In order to make the FogTexture look smooth over the game, we applied a small blur in the FogCamera when rendering into the RenderTexture. We tested different blur shaders and configurations until we found one that worked well on multiple mobile devices. Here is how it looks:

And here is how the Fog looks in the game, without and with blur:

For the purpose of rendering previously revealed territory, we had to add a previous step to the process. In this step, we configured another camera, named PreviousFogCamera, also rendering into a RenderTexture, named PreviousVisionTexture, and we first render the visions there (using the same procedure). The main difference is that this camera is configured to not clear the buffer, using the “Don’t Clear” clear flag, so we keep the data from previous frames.
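In code, that camera setup could look something like this sketch (the names are ours; CameraClearFlags.Nothing corresponds to the “Don’t Clear” flag in the inspector, and the layer name is an assumption):

void SetupPreviousFogCamera(Camera previousFogCamera, RenderTexture previousVisionTexture) {
  previousFogCamera.targetTexture = previousVisionTexture;
  // "Don't Clear": keep the buffer between frames so revealed areas accumulate
  previousFogCamera.clearFlags = CameraClearFlags.Nothing;
  // only render the vision GameObjects
  previousFogCamera.cullingMask = LayerMask.GetMask("FogVision");
}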

After that, we render both the PreviousVisionTexture (in gray) and the vision GameObjects into the FogTexture using the FogCamera. The final result looks like this:

Image: it shows the revealed territory in the FogCamera.

Image: and here is an example of how the Fog with previous revealed territory looks in the game.

Buildings

Since buildings in Iron Marines are big and they don’t move, just like in Starcraft, we wanted to follow a similar solution.

In order to do that, we identified the buildings we wanted to show below the Fog by adding a Component configuring that they should be rendered when not inside the Player’s vision.

Then, there is a System that, when a GameObject with that Component enters the Player’s vision for the first time, creates another GameObject representing its last known state and configures it accordingly. That GameObject is automatically turned on when the building is no longer inside the Player’s vision and turned off while the building is inside it. If, for some reason, the building was destroyed while not inside vision, the GameObject doesn’t disappear until the Player discovers its new state.

We added a small easing when entering and leaving the Player’s vision to make it look a bit smoother. Here is a video showing it:

Conclusion

Our solution lacks some of the common Fog of War features, but it works perfectly for our game and looks really nice. It also performs pretty well on mobile devices, our main target, where a careless implementation could have affected the game negatively. We are really proud of and happy with what we achieved developing Iron Marines.

That was, in great part, how we implemented Fog of War for Iron Marines. I hope you liked both the solution and the article. In the next blog post I will talk more about the new solution which includes more features.

Thanks for reading!