How we used Entity Component System (ECS) approach at Gemserk - 2/2

So, after our first attempt at using ECS, when we started to develop mobile games and moved to the LibGDX framework, we decided to abandon our ComponentsEngine and start over.

We were still reading about ECS while creating our set of tools and code over LibGDX. At some point, developers from the Java community started Artemis, a lightweight ECS, and we decided to give it a try.

Artemis

This is a simplified core architecture diagram:

Note: we used a modified version of Artemis with some additions, like enabling/disabling an Entity.

In this pure approach, Entities and Components are just data, and they are related by identifiers, like these tables in a relational database:
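For illustration, the tables could look like this (the values are made up):

    Entities: id = 1, id = 2

    PositionComponent table: entityId=1 (x=0, y=0); entityId=2 (x=10, y=5)
    MovementComponent table: entityId=1 (speed=5.0)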

Here we have two entities, both with a PositionComponent and only one of them with a MovementComponent.

An example of these components in code:

public class PositionComponent extends Component {
    public float x, y;
}

public class MovementComponent extends Component {
    public float speed;
}

EntitySystems perform the game logic. They normally work on a subset of Components from an Entity, but they may also need the Entity itself in order to enable/disable/destroy it (if that is part of their logic).

An example of a System is LimitLinearVelocitySystem, used in several of our games to limit the physics velocity of an Entity with a PhysicsComponent and a LinearVelocityLimitComponent:

public void process(Entity e) {
    PhysicsComponent physicsComponent = 
                     Components.getPhysicsComponent(e);
    Body body = physicsComponent.getPhysics().getBody();

    LinearVelocityLimitComponent limitComponent = 
             e.getComponent(LinearVelocityLimitComponent.class);
    Vector2 linearVelocity = body.getLinearVelocity();

    float speed = linearVelocity.len();
    float maxSpeed = limitComponent.getLimit();

    if (speed > maxSpeed) {
        float factor = maxSpeed / speed;
        linearVelocity.mul(factor);
        body.setLinearVelocity(linearVelocity);
    }
}
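For context, this is roughly how such a System is wired up in Artemis; a sketch assuming the Aspect-based API (the exact constructor and aspect declaration vary between Artemis versions):

import com.artemis.Aspect;
import com.artemis.Entity;
import com.artemis.systems.EntityProcessingSystem;

public class LimitLinearVelocitySystem extends EntityProcessingSystem {

    public LimitLinearVelocitySystem() {
        // only entities having both components are processed
        super(Aspect.getAspectForAll(PhysicsComponent.class, LinearVelocityLimitComponent.class));
    }

    @Override
    public void process(Entity e) {
        // the method body shown above
    }
}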

There are some extra classes, like the TagManager, which allows assigning a unique string identifier to an Entity in order to find it later, and the GroupManager, which allows adding entities to groups identified by a name (which is closer to how tags are used everywhere else).

Even though it could look similar to our ComponentsEngine, where that engine's Properties correspond to the Components in this one and its Components to the Systems, there is an important difference: Systems are not part of an Entity, and they work horizontally over all entities with specific Components (data). So, in this approach, changing only the data of entities does change their behaviour in the game, so it is really data driven.

Since Entities and Components are just data, their instances in memory are easier to reuse. An Entity is just an id, so that part is obvious. A Component is a set of data that doesn't care much which Entity it belongs to, so after an Entity no longer needs it, it can be reused elsewhere, greatly improving memory usage and garbage collection.
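As a rough illustration of that reuse, here is a minimal, hypothetical component pool (not Artemis code; just the idea):

import java.util.ArrayDeque;

public class ComponentPool<T> {

    private final ArrayDeque<T> free = new ArrayDeque<T>();

    // returns a recycled component, or null if the caller must create a new one
    public T obtain() {
        return free.poll();
    }

    // called when an Entity no longer needs the component
    public void release(T component) {
        free.push(component);
    }
}

When an entity dies, its MovementComponent would be released to the pool, reset, and handed to the next entity that needs one.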

Artemis, and other pure ECS implementations, are really lightweight and performant frameworks. The real power comes from the Systems and Components built over them. In our case, we created a lot of Systems that we started to reuse between games, from SuperFlyingThing to VampireRunner and even in Clash of the Olympians.

Scripting

One of the most interesting ones is the Scripting framework we created over Artemis. It was really useful and gave us a lot of power to overcome some limitations of the pure approach, for example when we needed specific logic for one specific moment of the game, (probably) never again, and didn't want a dedicated System for that.

They work similarly to Unity MonoBehaviours and to the logic Components from our ComponentsEngine, and they also belong to the Entity lifecycle. One difference, however, is that they try to avoid storing data in their instances as much as possible, and instead store and read data from Components, like Systems do.

Here is an example Script from one of our games:

public class MovementScript extends ScriptJavaImpl {
	
  @Override
  public void update(World world, Entity e) {

    MovementComponent movementComponent = Components.getMovementComponent(e);
    SpatialComponent spatialComponent = Components.getSpatialComponent(e);
    ControllerComponent controllerComponent = Components.getControllerComponent(e);
		
    Controller controller = controllerComponent.controller;
    Movement movement = movementComponent.getMovement();
    Spatial spatial = spatialComponent.getSpatial();
		
    float rotationAngle = controllerComponent.rotationSpeed * GlobalTime.getDelta();
		
    Vector2 linearVelocity = movement.getLinearVelocity();
		
    if (controller.left) {
        spatial.setAngle(spatial.getAngle() + rotationAngle);
        linearVelocity.rotate(rotationAngle);
    } else if (controller.right) {
        spatial.setAngle(spatial.getAngle() - rotationAngle);
        linearVelocity.rotate(-rotationAngle);
    }
		
  }
	
}

Note: GlobalTime.getDelta() is similar to Unity's Time API.

One problem with the Scripting framework is that it is not always clear whether some logic should live in a Script or in a System, and that made reusability a bit harder. In the case of the example above, it obviously should be moved to a System.

Templates

Another useful thing we did over Artemis was using templates to build entities in a specific way, similar to what we had in ComponentsEngine.

We also used a concept of Parameters in templates in order to customize parts of them. As in ComponentsEngine, templates could apply other templates, and different templates could be applied to the same Entity.

Here is an example of an Entity template:

public void apply(Entity entity) {

    String id = parameters.get("id");
    String targetPortalId = parameters.get("targetPortalId");
    String spriteId = parameters.get("sprite", "PortalSprite");
    Spatial spatial = parameters.get("spatial");
    Script script = parameters.get("script", new PortalScript());

    Sprite sprite = resourceManager.getResourceValue(spriteId);

    entity.addComponent(new TagComponent(id));
    entity.addComponent(new SpriteComponent(sprite, Colors.darkBlue));
    entity.addComponent(new Components.PortalComponent(targetPortalId, spatial.getAngle()));
    entity.addComponent(new RenderableComponent(-5));
    entity.addComponent(new SpatialComponent(spatial));
    entity.addComponent(new ScriptComponent(script));

    Body body = bodyBuilder //
            .fixture(bodyBuilder.fixtureDefBuilder() //
                    .circleShape(spatial.getWidth() * 0.35f) //
                    .categoryBits(CategoryBits.ObstacleCategoryBits) //
                    .maskBits((short) (CategoryBits.AllCategoryBits & ~CategoryBits.ObstacleCategoryBits)) //
                    .sensor()) //
            .position(spatial.getX(), spatial.getY()) //
            .mass(1f) //
            .type(BodyType.StaticBody) //
            .userData(entity) //
            .build();

    entity.addComponent(new PhysicsComponent(new PhysicsImpl(body)));

}

That template configures a Portal entity in SuperFlyingThing, which teleports the main ship from one place to another.
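For illustration, applying such a template could look roughly like this (hypothetical wiring: the parameter names match the template above, but the exact API in our code differed):

Entity portal = world.createEntity();

// illustrative: a string-keyed parameters map the template reads from
parameters.put("id", "Portal1");
parameters.put("targetPortalId", "Portal2");
parameters.put("spatial", spatial); // position/size/angle for this portal

portalTemplate.apply(portal);
// depending on the Artemis version, the entity may need a refresh here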

Interacting with other systems

Sometimes you already have a solution for something, like the physics engine, and you want to integrate it with the ECS. The approach we found was to synchronize data from that system to the ECS and vice versa, sometimes replicating a bit of data to make it easier for the game to use.
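For example, a small System can copy each Box2D body transform into its Entity's SpatialComponent every frame, so the rest of the game reads plain ECS data. A sketch under those assumptions (the Spatial setters are illustrative):

import com.artemis.Aspect;
import com.artemis.Entity;
import com.artemis.systems.EntityProcessingSystem;
import com.badlogic.gdx.math.MathUtils;
import com.badlogic.gdx.math.Vector2;
import com.badlogic.gdx.physics.box2d.Body;

public class PhysicsToSpatialSystem extends EntityProcessingSystem {

    public PhysicsToSpatialSystem() {
        super(Aspect.getAspectForAll(PhysicsComponent.class, SpatialComponent.class));
    }

    @Override
    public void process(Entity e) {
        Body body = Components.getPhysicsComponent(e).getPhysics().getBody();
        Spatial spatial = Components.getSpatialComponent(e).getSpatial();

        // replicate the body transform into ECS data
        Vector2 position = body.getPosition();
        spatial.setPosition(position.x, position.y); // hypothetical setter
        spatial.setAngle(body.getAngle() * MathUtils.radiansToDegrees);
    }
}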

Finally

Even though we had to make some modifications and we know it could still be improved, we loved using a pure ECS the way we did.

We used it to build different kinds of games/prototypes and even a full game like Clash of the Olympians for mobile devices, and it worked pretty well.

What about now?

Right now we are not using any of this in Unity. However, we never lost interest nor faith in ECS, and in the last few years new solutions over Unity have started to appear that caught our attention. Entitas is one of them. The Unity team is working towards this path too, as shown in this video. Someone who didn't want to wait started implementing that solution in his own way. We've also watched some Unite and GDC talks about using this approach in big games, and some of them even have a Scripting layer too, which is awesome since, in some way, it validates that we weren't so wrong ;).

I am excited to try the ECS approach again in the near future. I believe it could be a really good foundation for multiplayer games, and that is something I'm really interested in doing at some point in my life.

Thanks for reading.


How we used Entity Component System (ECS) approach at Gemserk - 1/2

When we started Gemserk eight years ago, we didn't know which was the best way to make games. So before starting, we did some research. After reading some articles and presentations we were really interested in trying an Entity Component System (ECS) approach for our games. However, since we didn't find a clear guide or implementation at that point we had to create our own solution while exploring and understanding it. We named our engine ComponentsEngine.

Components Engine

The following image shows a simplified core architecture diagram:

Note: part of the design was inspired by a Flash ECS engine named PushButtonEngine.

An Entity is just a holder of state and logic. In an ECS, anything can be an Entity, from an enemy ship to the concept of a player or even a file path to a level definition. It depends a lot on the game you are making and how you want to structure it.

A Property is part of the state of an Entity, like the health or the position in the world, anything that means something for the state of the game. It can be accessed and modified from outside.

Components perform logic, updating one or more Properties to change the Entity's state. A Component could, for example, change the position of an Entity given a speed and a moving direction.

They normally communicate with each other either by modifying common Properties (when on the same Entity) or by sending and receiving Messages through a MessageDispatcher (when on the same or on different Entities); they just have to register a method to handle a message. In some way, this is pretty similar to using the SendMessage() method in Unity and having the proper methods in the MonoBehaviours that need to react to those messages.
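As an illustration, the message flow looked roughly like this (the @Handles annotation and the Message/Property types mirror the MovementComponent example below; the dispatch call is hypothetical):

public class HealthComponent {

   @EntityProperty
   Float currentHealth;

   // registered handler, invoked when a "bulletHit" message is dispatched
   @Handles
   public void bulletHit(Message message) {
      Property<Float> damage = message.getProperty("damage");
      currentHealth = currentHealth - damage.get();
   }

}

// elsewhere, another Component notifies the hit (hypothetical API):
// messageDispatcher.dispatch(new Message("bulletHit"));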

EntityTemplates are an easy way to define and build specific game entities: they just add Properties and Components (and more stuff) to make an Entity behave in one way or another.

For example, a ShipTemplate could add position, velocity and health properties and some components to perform movement and to process damage from bullet hits:

public void build() {
        // ... more stuff 
        property("position", new Vector2f(0, 0));
        property("direction", new Vector2f(1, 0));

        property("speed", 5.0f);

        property("currentHealth", 100.0f);
        property("maxHealth", 100.0f);

        component(new MovementComponent());
        component(new HealthComponent());
}

An example of the MovementComponent:

public class MovementComponent {

   @EntityProperty
   Vector2f position;

   @EntityProperty
   Vector2f direction;

   @EntityProperty
   Float speed;

   @Handles
   public void update(Message message) {
      Property<Float> dt = message.getProperty("deltaTime");
      // position += direction * speed * dt
      float delta = speed * dt.get();
      position.x += direction.x * delta;
      position.y += direction.y * delta;
   }

}

In some way, EntityTemplates are similar to Unity Prefabs or Unreal Engine Blueprints, representing a (not pure) Prototype pattern.

Some interesting stuff of our engine:

  • Since templates are applied to Entities, we can apply multiple templates to the same one. In this way, we could have common templates to add generic features like motion for example. So we could do something like OrcTemplate.apply(e) to add the orc properties and components and then MovableTemplate.apply(e), so now we have an Orc that moves.
  • Templates can apply other templates inside them. So we could do the same as before but inside OrcTemplate, we could apply MovableTemplate there. Or even use this to create specific templates, like OrcBossTemplate which is an Orc with a special ability.
  • Entities also have tags, which are defined when applying a template and are used to quickly identify entities of interest. For example, if we want to identify all bullets in the game, we could add the tag "Bullet" during Entity creation and then, when processing a special power, get all the bullets in the scene and make them explode. Note: in some ECS implementations a "flag" component is used for this purpose.
  • The Property abstraction is really powerful: it can be implemented in any way, for example as an expression property like "my current health is my speed * 2". We used that while prototyping (see the sketch below).
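A minimal sketch of how an expression property can satisfy the same abstraction (the Property interface here is hypothetical, mirroring the engine's Property idea):

public interface Property<T> {
    T get();
    void set(T value);
}

// "my current health is my speed * 2", evaluated on every read
public class ExpressionHealthProperty implements Property<Float> {

    private final Property<Float> speed;

    public ExpressionHealthProperty(Property<Float> speed) {
        this.speed = speed;
    }

    @Override
    public Float get() {
        return speed.get() * 2;
    }

    @Override
    public void set(Float value) {
        // derived value: writes are ignored in this sketch
    }
}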

Some bad stuff:

  • Execution speed wasn't good: we had a lot of layers in the middle, a lot of reflection, sending and receiving messages (even for the update method), a lot of boxing and unboxing, etc. It worked OK on desktop but wasn't a viable solution for mobile devices.
  • The logic started to be distributed all around and wasn't easy to reuse, and we started to have tons of special cases; when we couldn't reuse something, we simply copy-pasted it and changed it.
  • There was a lot of code in the Components to get/set properties instead of just doing the important logic.
  • Indirection in the Properties code was powerful, but we ended up with stuff like this (and we didn't like it since it was too much code overhead):
e.getProperty("key").value = e.getProperty("key").value + 1;
  • We didn't manage to have a good data driven approach, which is one of the best points of ECS. In order to have different behaviours, we were forced to add properties, components and even tags instead of just changing data.

Note: Some of these points can be improved but we never worked on that.

Even though we don't think the architecture is bad, it guided us to do stuff in a way we didn't like and that didn't scale. We feel that Unity does the same thing with its GameObjects and MonoBehaviours; all the examples in their documentation go in that direction.

That was our first try at the ECS approach. In case you are interested, ComponentsEngine and all the games we made with it are on Github, and even though they probably don't compile, they can be used as reference.

This post will continue with how we later transitioned to use a more pure approach with Artemis, when we were more dedicated to mobile games.

References

Evolve your hierarchy - Classic article about Entity Component System.

PushButtonEngine - Flash Entity Component System we used as reference when developing our ComponentsEngine.

Game architecture is different - A quick transformation from normal game architecture to ECS.

Slick2D - The game library we were using during ComponentsEngine development.

 


Assigning interface dependencies to MonoBehaviour fields in Unity Editor

In Unity, an Object's public fields of interface type are not serialized, which means it is not possible to configure them in the editor like you do with other dependencies such as MonoBehaviours.

One possible way to work around this problem is to create an abstract MonoBehaviour and depend on that instead:

 public abstract class ServiceBase : MonoBehaviour
 {
     public abstract void DoStuff();    
 }
 
 public class ServiceUser : MonoBehaviour
 {
     public ServiceBase service;
 
     void Update()
     {
         service.DoStuff ();
     }
 }

This solution has some limitations. One of them is that you can't derive from multiple base classes like you can when implementing interfaces. Another is that you can't easily switch from a MonoBehaviour implementation to a ScriptableObject implementation (or any other).

Working on Iron Marines and Dashy Ninja, I normally end up referencing a UnityEngine.Object and then, at runtime, trying to get the specific interface, like this:

 public interface IService 
 {
      void DoStuff();
 }
 
 public class CharacterExample : MonoBehaviour
 {
     public UnityEngine.Object objectWithInterface;
     IService _service;
 
     void Start ()
     {
         var referenceGameObject = objectWithInterface as GameObject;
         if (referenceGameObject != null) {
             _service = referenceGameObject.GetComponentInChildren<IService> ();
         } else {
             _service = objectWithInterface as IService;
         }
     }

     void Update()
     {
         _service.DoStuff ();
     }
 }
 

Then, in Unity Editor I can assign both GameObjects and ScriptableObjects (or any other Object):

That works pretty well, but I end up with the code for retrieving the interface duplicated all around.

To avoid that, I made two helper classes with the code to retrieve the interface from the proper location: InterfaceReference and InterfaceObject<T>.

InterfaceReference

 [Serializable]
 public class InterfaceReference
 {
     public UnityEngine.Object _object; 
     object _cachedGameObject;
 
     public T Get<T>() where T : class
     {
         if (_object == null)
             return _cachedGameObject as T;

         var go = _object as GameObject;
 
         if (go != null) {
             _cachedGameObject = go.GetComponentInChildren<T> ();
         } else {
             _cachedGameObject = _object;
         }
 
         return _cachedGameObject as T;
     }

     public void Set<T>(T t) where T : class
     {
         _cachedGameObject = t;
         _object = t as UnityEngine.Object;
     }
 }

Usage example:

 public class CharacterExample : MonoBehaviour
 {
     public InterfaceReference service;
 
     void Update()
     {
         // some logic
         service.Get<IService>().DoStuff();
     }
 }

The advantage of this class is that it can simply be used anywhere and is already serialized by Unity. The disadvantage is having to ask for the specific interface when calling the Get() method. If that happens in only one place, great, but if you have to do it a lot in the code it doesn't look so good. In that case, it could be better to have another field and assign it once in the Awake() or Start() methods.

InterfaceObject

 public class InterfaceObject<T> where T : class
 {
     public UnityEngine.Object _object;
     T _instance;
 
     public T Get() {
         if (_instance == null) {
             var go = _object as GameObject;
             if (go != null) {
                 _instance = go.GetComponentInChildren<T> ();
             } else {
                 _instance = _object as T;
             }
         }
         return _instance;
     }
 
     public void Set(T t) {
         _instance = t;
         _object = t as UnityEngine.Object;
     }
 }

Note: the Get() and Set() methods could be a C# Property also, which would be useful to allow the debugger to evaluate the property when inspecting it.

Usage example:

 [Serializable]
 public class IServiceObject : InterfaceObject<IService>
 {
     
 }
 
 public class CharacterExample : MonoBehaviour
 {
     public IServiceObject service;
 
     void Update()
     {
         // some logic
         service.Get().DoStuff();
     }
 }

The advantage of this class is that you can use the Get() method without having to cast to the specific interface. The disadvantage, however, is that it is not serializable because it is a generic class. To solve that, an empty subclass with a specific type must be created so Unity can serialize it, as shown in the previous example. If you don't have a lot of interfaces you want to configure, I believe this can be a good solution.

All of this is simple but useful code that helps when configuring stuff in the Unity Editor.

I am working on different experiments on how to improve Unity's usage for me. Here is a link to the project in case you are interested in taking a look.

Thanks for reading.


Playing with Starcraft 2 Editor to understand how a good RTS is made

When working on the Iron Marines engine at work, we did some research on other RTS games in order to learn how they did some stuff and why. In this post in particular, I want to share a bit of my research on the SC2 Editor, which helped a lot when making our own editor.

The objective was to see what a Game Designer could and couldn't do with the SC2 Editor, in order to understand some decisions about the editor and the engine itself.

Obviously, by taking a look at the game mods/maps available it is clear that you could build entire games over the SC2 engine, but I wanted to see the basics, how to define and control the game logic.

As a side note, I love RTS games since I was a child, I played a lot of Dune 2 and Warcraft 1. I remember playing with the editors of Command & Conquer and Warcraft 2 also, it was really cool, so much power 😉 and fun. With one of my brothers, each one had to make a map and the other had to play and beat it (we did the same with Doom and Duke Nukem 3d editors).

SC2 Editor

SC2 maps are built with Triggers, which are composed of Events, Conditions and Actions that define parts of the game logic. There are a lot of other elements as well, which I will talk about a bit after explaining the basics.

Here is an image of the SC2 Editor with an advanced map:

Trigger logic

The Triggers are where the general map logic is defined. They are triggered by Events and say which Actions should be performed if given Conditions are met. Even though behind the scenes the logic is C/C++ code calling functions with similar names, the Editor shows it in natural language, like "Is Any Unit of Player1 Alive?", which helps with quick reading and understanding.

This is an example of a Trigger logic of a SC2 campaign map:

Events

Events are what fire the Trigger logic; in other words, when an event happens, the logic is executed. Here is an example of an event triggered when the unit "SpecialMarine" enters the region "Region 001":

Conditions

Conditions are evaluated in order to execute the actions or not. Here is an example of a condition checking if unit "BadGuy" is alive or not:

Actions

Actions are executed when the event has happened and the conditions are met. They can be anything supported by the editor, from ordering a structure to build a unit to showing a mission objective update on screen, among other things.

This example shows an action that enqueues an attack order to unit "BadGuy" with unit "SpecialMarine" as target, replacing that unit's existing enqueued orders. Another action after that turns off the Trigger to avoid processing its logic again.

The idea with this approach is to build the logic in a descriptive way; the Game Designer has tools to fulfill what he needs in terms of game experience. For example, if he needs to make it hard to save a special unit when you reach its location, he sends a wave of enemies to that point.

I said before that the editor generates C/C++ code behind the scenes, so, for my example:

The code generated behind the scenes is this one:

Here is a screenshot of the example I did. The red guy is SpecialMarine (controlled by the player) and the blue one is BadGuy (controlled by the map logic); if you move your unit inside the blue region, BadGuy comes in and attacks SpecialMarine:

Even though it is really basic, download my example if you want to test it 😛 .

Parameters

In order to make the Triggers work, they need some values to check against; for example, Region1 is a previously defined region, or "Any Unit of Player1". Most of the functions for Events, Conditions and Actions have parameters of a given Type, and the Editor allows the user to pick an object of that Type from different sources: a function, a preset, a variable, a value or even custom code:

It shows picking a Unit from units in the map (created instances).

It shows Unit picking from different functions that return a Unit.

This allows the Game Designer to adapt, in part, the logic to what is happening in the game while keeping the main structure of the logic. For example, I need to make the structures of Player2 explode when any Unit of Player1 is in Region1; I don't care which unit, I only care that it belongs to Player1.

Game design helper elements

There are different elements that help the Game Designer when creating the map: Regions, Points, Paths and Unit Groups, among others. These elements are normally not visible to the Player but are really useful for the Game Designer to have more control over the logic.

As said before, the SC2 Editor is pretty complete; it allows you to do a lot of stuff, from creating custom cutscenes to overriding game data to create new units, abilities, and more, but that's food for another post.

Our Editor v0.1

Our first try at creating some kind of editor for our game wasn't so successful. Without the core of the game clearly defined, we tried to create an editor with a lot of the SC2 Editor features. We spent some days defining a lot of stuff in the abstract, but in the end we aimed too far for a first iteration.

So, after that, we decided to start small. We started by making a way to detect events over the core being defined at that point. An event could be, for example: "when units enter an area" or "when a resource spot is captured by a player".

Here are some of the events of one of our maps:

Note: Even though they are Events, we named them Triggers (dunno why), so AreaTrigger is an empty Trigger in terms of the SC2 Editor, with just an Event.

Events were the only thing in the editor; all the corresponding logic was done in code, in one class per map, which captured all events and checked conditions before taking some actions, normally sending enemies to attack some area.

Here is example code for some of the previously defined events:

It wasn't a bad solution but had some problems:

  • The actions were separated from the level design, which played against the iteration cycle (at some point our project needed between 10 and 15 seconds to compile in the Unity Editor).
  • Since it needed code to work, it required programming knowledge, and our team's Game Designers aren't so good with code.

Our Editor v0.2

The second (and current) version is more Game Designer friendly and tends to be more similar to the SC2 Editor. Most of the logic is defined in the editor within multiple triggers. Each Trigger is defined as a hierarchy of GameObjects, with specific components defining the Events, Conditions and Actions.

Here is an example of a map using the new system:

This declares, for example, a trigger logic that is activated by time; it has no conditions (so given the event it always executes), and it sends some enemies in sequence and deactivates itself at the end.

We also created a custom Editor window to help create the trigger hierarchy and to simplify looking for the engine's Events, Conditions and Actions. Here is part of the editor showing some of the elements we have:

All those buttons automatically create the corresponding GameObject hierarchy with the proper Components to make everything work according to plan. Since most of them need parameters, we are using Unity's built-in feature of linking elements given a type (a Component); so, for example, for the action of forcing a Capturable element to be captured by team Soldiers, we have:

Unity allows us to pick a Capturable element (CapturableScript in this case) from the scene. This simplifies the job of configuring the map logic a lot.

Some common conditions could be checking if a resource spot is controlled by a given player or if a structure is alive. Common actions could be sending a wave of enemy units to a given area or deactivating a trigger.

The base code is pretty simple: it mainly defines the API, while the real value of this solution is in the custom Events, Conditions and Actions.
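The shape of that base API is roughly the following (a sketch, written in Java for consistency with the older posts' code; our actual implementation was C# components on GameObjects, and all names are illustrative):

import java.util.List;

public interface TriggerEvent {
    boolean hasFired();
}

public interface Condition {
    boolean isMet();
}

public interface Action {
    void execute();
}

// a Trigger runs its Actions when its Event fired and all Conditions are met
public class Trigger {

    private final TriggerEvent event;
    private final List<Condition> conditions;
    private final List<Action> actions;

    private boolean enabled = true;

    public Trigger(TriggerEvent event, List<Condition> conditions, List<Action> actions) {
        this.event = event;
        this.conditions = conditions;
        this.actions = actions;
    }

    public void setEnabled(boolean enabled) {
        this.enabled = enabled;
    }

    public void update() {
        if (!enabled || !event.hasFired())
            return;
        for (Condition condition : conditions)
            if (!condition.isMet())
                return;
        for (Action action : actions)
            action.execute();
    }
}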

Pros

  • Visual, and more Game Designer friendly (it is easier for Programmers too).
  • Faster iteration speed: now we can change things directly in the Editor, even at runtime!
  • Easily extensible by adding more Events, Conditions and Actions, and transparent to the Game Designers since they are automatically shown in our Custom Editor.
  • Take advantage of Unity Editor for configuring stuff.
  • Easy to enable/disable some logic by turning the corresponding GameObject on/off, which is good for testing something or disabling one logic for a while (for example, during ingame cinematics).
  • More control for the Game Designers: they can test and prototype stuff without asking the programming team.
  • Simplified workflow for our ingame cinematics.
  • Compatible with our first version, both can run at the same time.

Cons

  • Merging the stage is harder now that it is serialized with the Unity scene; with code we didn't have merge problems, or at least they were easier to fix. One idea to simplify this is to break the logic into parts and use prefabs for those parts, but that breaks when having links with scene instances (which is a common case).
  • A lot of programming responsibility is transferred to the scripting team, which in this case is the Game Design team; that means possibly bad code (for example, duplicated logic), bugs (forgetting to turn off a trigger after processing its actions) and even performance problems.

Conclusion

When designing (and coding) a game, it is really important to have a good iteration cycle in each aspect of the game. Having switched to a more visual solution with all the elements at hand, avoiding code as much as we could, helped a lot with that goal.

Since what we ended up doing looks similar to a scripting engine, why didn't we go with a solution like uScript or similar in the first place? The real answer is that I didn't try other Unity scripting solutions in depth (not so happy with that); each time I tried them a bit they gave me the feeling they were too much for what we needed, and I was unsure how they would perform on mobile devices (never tested that). Also, I wasn't aware we would end up needing a scripting layer, so I preferred to evolve over our needs, from small to big.

Taking some time to research other games and play with the SC2 Editor helped me a lot when defining how our engine should work and why we should go in some direction. There are more aspects of our game that were influenced in some way by how other RTS games do things, which I may share in the future, who knows.

I love RTS games, did I mention that before?


A Simple Selection History Window For Unity

This is a small blog post about a Unity editor window we made at work to simplify our lives when using Unity.

In Unity, when you select a reference to an asset, it is focused in the Project window, losing the current selection and probably the folder you were working in. Since there is no navigation history, having to search for the previous selection most of the time is a pain in the ass.

As we were doing some stuff at work today, we were inspired (after the two millionth time we lost our selection), so I decided to make a simple Editor Window to store and show a history of selected objects. It took me like 30 minutes (plus one more hour at home).

Here are the results:

In case you are interested in using and/or improving it, here is the github project:

https://github.com/acoppes/unity-history-window

As always, hope you like it.


The story of the non deterministic Replay

This is the story of how I discovered that my simplified replay system wasn't as deterministic as I believed because of an ugly bug; read the post if you want to know where exactly it was.

While integrating the lockstep engine into the game I am working on, I decided to add a way to save and load replays, to be able to easily reproduce some bugs I was experiencing. After I had that done and working, it was pretty awesome to see I could replay the same game multiple times (food for another blog post, by the way). Then I thought it could be fun and easy to play them faster, why not.

Since I have fixed-timestep logic, it should be pretty straightforward: simply use a multiplied time, and then the fixed-timestep logic would do all the work and the game logic shouldn't notice the change. I decided to give it a try and it worked… almost. When playing replays at higher speeds I noticed some visual differences, but I wasn't totally sure (it could have been the interpolation code).

To verify, I went back to the test project, where I had the moving box, and tested it there, but I needed some way to be sure. Since I already have a way to calculate checksums of the game state, I used that to verify the game states when playing replays at different speeds (from 2x to 16x).

It failed: even though it only failed to validate some frames, the following frames were not necessarily invalid (this is something important to consider).

Image 1: It shows one of the best tools in the world to check game states when replaying a game.

So, I was right: I saw some differences, and I could be sure something was happening. The thing was, with only the checksums I couldn't know what the real difference was. Next step: making something to detect it.

To do that, I had to start saving (at least for debugging) the full game state, not only the checksum, and to check the differences between the game states stored in the replay and the current game state when checksum validation fails. It worked too; now I had the exact place where the differences were.

Image 2: It shows why serializing all the game state in one string is the best thing to do in your life.

After testing it a bit, I noticed another curious thing: the game validation wasn't always failing given the same replay and the same speed. That gave me a hint that the problem was probably not related to the game code itself (the moving box).

So, if I played the replay at 1x, it validated properly. If I played it at 8x, it failed most of the time, but not always. It seemed there was something related to speed that I didn't understand yet.

I decided to test the same replay with Unity's timescale modified. My first test was 1x for the replay but 5x for the timescale: validation failed. Then the opposite, 10x for the replay but 0.1x for the timescale, and it worked well. So could the problem be related to the accumulator logic inside my fixed-timestep code?

Some test cycles later, it turned out it was indeed a bug in one of the core classes of the engine!

The problem was in my LockstepFixedUpdate class. The first version overrode the Update() method and performed the lockstep logic there; that worked OK as long as at most one fixed update was processed, but when a big delta time arrived, it processed the lockstep logic only for the first fixed update and never again.

That means that if replay commands were to be processed in frame 3 and we are in frame 1 when a big dt of 10 frames arrives, the lockstep logic checks only in frame 1 and never again until all 10 frames are processed. This bug even bypasses the lockstep!

Since I had made a test to replicate the bug, it was really easy to fix: I changed it to process the lockstep logic on each fixed-step update, and it works fine now. I have high speed replays!! YEAH!!
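The fix amounts to checking the lockstep logic inside the accumulator loop instead of once per Update() call. A minimal sketch (in Java for consistency with the rest of the blog's code; the real engine is C#, and all names are illustrative):

public class LockstepFixedUpdate {

    private final float fixedDelta = 1f / 30f; // assumed step size
    private float accumulator;
    private int frame;

    public void update(float delta) {
        accumulator += delta;
        // the buggy version ran the lockstep check here, once per update() call
        while (accumulator >= fixedDelta) {
            processLockstep(frame); // fixed version: once per fixed step
            fixedUpdate(fixedDelta);
            accumulator -= fixedDelta;
            frame++;
        }
    }

    private void processLockstep(int frame) {
        // check for commands scheduled for this frame, process them, etc.
    }

    private void fixedUpdate(float fixedDelta) {
        // deterministic game logic step
    }
}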

Conclusion

In the process of finding this bug I started to expand the engine support and create better tools; this is really important if I want to build something solid over it.

The only way to detect issues as soon as possible is to iterate over the engine as soon as possible, and for that, use cases are needed; games provide the best use cases. In my case, I am using not only the game I am trying to make but also other similar games as references when deciding what I want and how I want to test it: for example, having replays, being able to play a replay at different speeds, being able to save replays, etc. Also, being able to replicate a bug in a small test case where you can iterate quickly to fix it is super useful.

Detecting (and having) problems like this in a small and simple game gives an idea of the complexity of a medium to big game, with all its variables and difficulties; it is not something to underestimate. So when developers say they couldn't add multiplayer features to their game because it was really hard to do, it is not a lie.

I love all of this stuff, even though I understand it is not an easy path.

To complete this post, here is a video showing a prototype of how I load and play a replay which was created by playing with two players over LAN: one was my computer and the other was my phone:

The quote of the day is 'Fail as much as possible, as soon as possible to avoid failing when it is too late'.

Hope you enjoyed the journey.


Building 2d animations using Inkscape and Synfig

In this blog post we want to share a method to animate Inkscape SVG objects using Synfig Studio, trying to follow an approach similar to the Building 2d sprites from 3d models using Blender blog post.

A small introduction about Inkscape

Inkscape is one of the best open source, multi platform and free tools to work with vector graphics using the open standard SVG.

After some time using Inkscape, I have learned how to do a lot of things and feel great using it. However, it lacks some features which would make it a great tool, for example, a way to animate objects by interpolating between their different states, defining key frames and using a time line, among others.

It has some ways to create interpolations of an object between two different states, but it is unusable since it doesn't work with groups; if you have a complex object made of a group of several other objects, you have to interpolate all of them, and if you modify one of the key frames, you have to interpolate everything again.

Synfig comes into action

Synfig Studio is a free and open-source 2D animation tool that works with vector graphics as well. It lets you create nice animations using a time line and key frames, and lets you easily export the animation. However, it uses its own format, so you can't directly import an SVG. Luckily, the format is open and there are already some ways to transform from SVG to Synfig.

In particular, I tried an Inkscape extension named svg2sif which lets you save files in the Synfig format and seems to work fine (the extension's page explains how to install it). I don't know the limitations of the svg2sif extension, so use it with caution; don't expect everything to work fine.

Now that we have the method defined, let's explain it by showing an example.

Creating an object in Inkscape

We start by creating an Inkscape object to be animated later. For this mini tutorial I created a black creature named Bor...ahem! Gishus Maximus:

Modelling Gishus Maximus using Inkscape

Here is the SVG if you are interested in it; sadly, WordPress doesn't support SVG files as media files.

With the model defined, we have to save it in the Synfig format using the extension, so go to "Save a Copy...", select the .sif format (added by the svg2sif extension), and save it.

Animating the object in Synfig

Now that we have the Synfig file, we open it and voilà, we can animate it. However, there is a bug, probably in the svg2sif extension, where the time line is missing. To fix it, we have to create a new document and copy the shape from the one exported by Inkscape into the new one.

The next step is to use your super animation skills and animate the object. In my case, I created some kind of eating animation by making a mouth, opening it slowly and then closing it fast:

Animating Gishus Maximus using Synfig

Here is the Synfig file with the animation if you are interested in it.

To export it, use the "Show the Render Settings Dialog" button, configure how many frames per second you want, among other things, and then export using the Render button. You can export to different formats, for example, a list of separate PNG files (one per animation frame) or an animated GIF. However, you can't configure some of the formats, and the exported file wasn't what I wanted, so I preferred to export a list of PNG files and then use ImageMagick's convert tool to create the animated GIF:

Finally, I have a time lapse of how I applied the method if you want to watch it:

Extra section: Importing the animation in your game

After we have the separate PNG files for the animation, we can create a sprite sheet or use other tools to create files that can be easily imported by a game framework. For this example, I used a Gimp plug-in named Sprite Tape to import all the separate PNG files and create a sprite sheet:

If you are a LibGDX user and want to use the Texture Packer, you can create a folder, copy the PNG files renaming them to animationname_01, animationname_02, etc., and let Texture Packer automatically import it.

Conclusions

One problem with this method is that you can't easily modify your objects in Inkscape and then automatically import them into Synfig, updating the current animation to work with them. So, once you move to Synfig you have to stay there to avoid a lot of duplicated work. This could be avoided if Inkscape provided a good animation extension.

Synfig Studio is a great tool but not the best, of course; it is not intuitive (like Gimp, Blender and others) and it has some bugs that make it explode for no reason. On the other hand, it is open source, free and multi platform, and the best part is that it works well for what we need right now 😉

This method allows us to animate vector graphics, which is great since it is a way for programmers like us to animate our programmer art 😀

Finally, I am not an animation expert at all, so this blog post could be based on some wrong assumptions. If you are one, feel free to correct me and share your opinions.

As always, hope you like the post.


Testing Android controls on libGDX PC application using libGDX remote

When testing our games, sometimes we only need to test the game controls on Android devices, not the whole game, and deploying to the device is always slower than running on the PC.

The LibGDX library provides a way to send your Android device's input data through the network, and one of the possible uses is to control your LibGDX PC application with it.

Here is a demo video using the remote input, made by libGDX team:


(note: magic starts after minute 2)

In the case of Super Flying Thing, we need it when testing and debugging controls that cannot be tested directly on PC, like the TiltController (we need device orientation data). Recently, we started using it also to test and customize other controls like the ClassicController (despite a bit of lag in some cases).

Sadly, my camera is also my Android phone, so I can't make a video of how we are using it.

One interesting point is that we could use multiple Android devices (and probably other devices like iPod/iPhone/iPad/others) as multiple wireless controllers for the PC, which, for example, could be fun for a local multiplayer game.

Another interesting point is that you could implement your own remote input server to read the device input data sent by the GDX Remote Android application, so you wouldn't be forced to use LibGDX in your server application.


Building 2d sprites from 3d models using Blender

In this post I want to share the process used to create the ship sprite sheet of Super Flying Thing (SFT from now on). I will assume you have basic experience with Blender or similar software.

Model and texture

The first step is to get or create the model you want for the sprite sheet. In my case, the ship was created from scratch, following this excellent tutorial (a bit boring to watch, but great content). Of course, as I am not an artist and I only dedicated three hours to it, my model sucks a bit compared to the one in the tutorial.

Set the camera

Now that you have a model, you have to configure the camera for the Blender rendering process. The idea is to point it at the model and bring it closer until the model fits the camera view port. The next image shows an example:

In the case of SFT, I configured the view port to be 64x64.

Create the animation

Now you have to create the animation you want to render. For SFT, I wanted the animation of the ship rotating 360 degrees. As the rotation speed of the ship is approximately 300 degrees per second and I was using a 30 FPS animation, I wanted matching frames for the complete rotation: in this case, 36 frames (10 degrees per frame). So I created one key frame each for 0, 90, 180, 270 and 360 degrees. The last one is only used to complete the interpolation and is not included in the final animation.

ship animation key frames

Be sure to configure the animation to use linear interpolation so the frames match the ship's angle, unless you want otherwise. To do that, open the Graph Editor and, from the Key menu, change the Interpolation Mode to Linear.

Finally, export the animation using Render animation in Blender; it will create each frame of the animation in your temp folder. Be aware you will get a better result using an anti-aliasing algorithm, for example Catmull-Rom; thanks void256 for the tip.

Here is the difference:


The left image is not using anti-aliasing at all; the right one is using Catmull-Rom with 16 samples per pixel.

And here is the animation:

The process is tool independent, so you could use, for example, 3D Studio if you want. You only need a good artist, as we do.

As always, hope you like the post and it could be of help.


Animation4j - Synchronizer

In this post I will talk a bit more about the animation4j project, particularly the Synchronizer class.

When working with transitions, we may want to declare them in a seamless way. In a previous post I introduced the Transition class, which lets you define a transition of an object from one state to another, but it requires manual synchronization from the transition's values to your own object's values. In retrospect, that doesn't seem the best way to achieve seamless usage.

Meet the Synchronizer

For this reason, animation4j provides a class named Synchronizer which handles all the synchronization internally for you. To create a new transition, you declare something like this:

	// create a synchronizer in some part of your initialization code
	Synchronizer synchronizer = new Synchronizer();
	
	// create a transition in some part of your game logic, for example when the user presses something
	Vector2f position = new Vector2f(100, 100);
	synchronizer.transition(position, Transitions.transitionBuilder().end(new Vector2f(200, 200)).time(500));

	// this will perform the synchronization, setting the current value of the transition on my object;
	// normally this call will be in an update() method of your game
	synchronizer.synchronize(delta);

	// so now my object will be updated with the new values
	System.out.println(position);

note: I used the Vector2f class in the example because it is a well known class; all libraries have a Vector implementation.

In the previous example, I create a transition of a Vector2f class from (100,100) to (200, 200) in 500ms.

To register a new transition to be synchronized, you call the transition() method, and each time the synchronize(delta) method is called, Synchronizer performs synchronization of all registered transitions. The registration method comes in two flavors. One modifies the object you pass, but it only works if you are using a mutable object. The other works over a field of an object using reflection, calling the public set method to modify the internal field. It works for both mutable and immutable objects because it sets a new value each time; however, one pending improvement is to modify the current field value, instead of creating a new one each time, when the field class is mutable.

The Synchronizer class can be used in a static way through the Synchronizers class, which holds an internal singleton instance to do all the work. That is really useful when you want to use the synchronization stuff wherever you want. However, a restriction is that it updates all registered transitions, and that could be a problem if you have different states in your game; for example, with a paused game state which keeps rendering a playing game state, you don't want the transitions started in the playing state to go on, so in that case you could use different instances for each game state.
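For illustration, the static facade would be used like this (a sketch assuming Synchronizers mirrors the Synchronizer methods shown above):

	// same calls as before, but through the static facade and its internal singleton
	Synchronizers.transition(position, Transitions.transitionBuilder().end(new Vector2f(200, 200)).time(500));

	// in your game state's update() method
	Synchronizers.synchronize(delta);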

TransitionBuilder is your friend

Another interesting addition to the library is the TransitionBuilder class provided by the Transitions class. In previous versions, there were different methods to create a Transition using the Transitions factory class: one specifying only speed, another specifying speed and end value, another specifying speed and the interpolation functions, and the list goes on. To improve this, the TransitionBuilder class was created. It lets you specify the parameters you want when building a Transition, for example:

	Transition transition = Transitions.transitionBuilder().start(v1)
		.end(v2)
		.time(5000)
		.functions(InterpolationFunctions.easeIn(), InterpolationFunctions.easeOut())
		.build();

The previous code creates a transition from value v1 to value v2 in 5000ms, using the declared interpolation functions to interpolate between the internal values of v1 and v2 (as explained in a previous post).

These classes simplify working with animation4j a lot, but there is still plenty of work to do to improve usage and internal performance.
