# Our solution to handle multiple screen sizes in Android – Part three

In the previous posts of this series we talked about our solution to handle multiple screen sizes for game menus, in particular we showed the main menu of the game Clash of the Olympians. In this post we are going to talk about what we did inside the game itself. As a side note, the solution we used here is simple and specific to this game; we hope it helps as an example, but don't expect a silver bullet.

### Scaling to match the physics world

As we use Box2D in Clash of the Olympians, the first step was to choose a proper scale between Box2D bodies and our assets. The basic approach was to consider that 1m (meter, in the MKS system) was 32px, so our target resolution of 800x480 could show 25m x 15m. We picked that scale because it gives pretty numbers both in terms of the game area and in terms of our assets; for example, a character 64px tall is 2m tall in the world. In particular, Achilles is approximately 60px tall, which is equivalent to 1.875m using our scale, a pretty reasonable height for that character.
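The conversion itself can be captured in a couple of helpers. This is just a minimal sketch; the class, constant and method names are ours, not from the game's code:

```java
public class WorldScale {

    // 1 meter in the physics world equals 32 pixels in our assets
    public static final float PIXELS_PER_METER = 32f;

    public static float toMeters(float pixels) {
        return pixels / PIXELS_PER_METER;
    }

    public static float toPixels(float meters) {
        return meters * PIXELS_PER_METER;
    }
}
```

With these helpers, `toMeters(800)` gives the 25m of visible width and `toMeters(60)` gives the 1.875m height mentioned above.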

The image shows the relation between screen size in pixels (800x480 in this case) and the game world in meters.

### Defining a virtual area to show

We previously said that we could show 25m x 15m. In fact, the height is not that important in Clash of the Olympians, since the gameplay mainly depends on the horizontal distance. If we had an imaginary device with a resolution of 800x400 (really wide, an aspect ratio of 2) we would show 12.5m of height; we can assume that as long as we show at least that height the game balance is not affected at all (enemies are never spawned too high). In terms of horizontal distance, however, we always want to show the same area across all devices to avoid changing the game balance (for example, if you could see less area you couldn't react in time to some waves). That is why we decided to always show 25m of width.

The image shows how we still show the same game world width of 25m on a 800x600 device.

### Scaling the world back to match the screen size

Finally, in order to show this virtual area of 25m x H (with H >= 12.5m), we have to calculate the proper scale to set our game camera in each device. For example, in the case of having a Nexus 7 (1280x720 resolution device) the scale to show 25m of horizontal size is 51.2x since we know that 1280 / scale = 25, then 1280 / 25 = 51.2. In the case of a Samsung Galaxy Y (480x320 resolution device) the scale would be 19.2x since 480 / 25 = 19.2. Translating this inside the game would be something as easy as:

`camera.scale = screen.width / 25`
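As a standalone sketch of the same calculation (the class and method names are ours), the scale for any screen width follows directly:

```java
public class CameraScale {

    // we always want to show 25 meters of horizontal distance
    public static final float VISIBLE_WIDTH_IN_METERS = 25f;

    public static float scaleFor(float screenWidthInPixels) {
        return screenWidthInPixels / VISIBLE_WIDTH_IN_METERS;
    }
}
```

`scaleFor(1280)` gives 51.2 and `scaleFor(480)` gives 19.2, matching the examples above.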

### Final thoughts

This is not a general solution; it depends a lot on the game we were making and on the assumptions we could make, such as that the game height doesn't affect the balance.

Even though the solution is specific and not as cool as the ones in the previous posts, we hope it can be of help when making your own game.


# Our solution to handle multiple screen sizes in Android – Part two

Continuing with the previous blog post, in this post we are going to talk about the code behind the theory. It consists of three concepts: the VirtualViewport, the OrthographicCameraWithVirtualViewport and the MultipleVirtualViewportBuilder.

### VirtualViewport

It defines a virtual area where the game stuff is contained and provides a way to get the real width and height to use with a camera in order to always show the virtual area. Here is the code of this class:

```java
public class VirtualViewport {

    float virtualWidth;
    float virtualHeight;

    public float getVirtualWidth() {
        return virtualWidth;
    }

    public float getVirtualHeight() {
        return virtualHeight;
    }

    public VirtualViewport(float virtualWidth, float virtualHeight) {
        this(virtualWidth, virtualHeight, false);
    }

    public VirtualViewport(float virtualWidth, float virtualHeight, boolean shrink) {
        this.virtualWidth = virtualWidth;
        this.virtualHeight = virtualHeight;
    }

    public float getWidth() {
        return getWidth(Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
    }

    public float getHeight() {
        return getHeight(Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
    }

    /**
     * Returns the viewport width that lets the whole virtual viewport be shown on the screen.
     *
     * @param screenWidth
     *            The screen width.
     * @param screenHeight
     *            The screen height.
     */
    public float getWidth(float screenWidth, float screenHeight) {
        float virtualAspect = virtualWidth / virtualHeight;
        float aspect = screenWidth / screenHeight;
        if (aspect > virtualAspect || (Math.abs(aspect - virtualAspect) < 0.01f)) {
            return virtualHeight * aspect;
        } else {
            return virtualWidth;
        }
    }

    /**
     * Returns the viewport height that lets the whole virtual viewport be shown on the screen.
     *
     * @param screenWidth
     *            The screen width.
     * @param screenHeight
     *            The screen height.
     */
    public float getHeight(float screenWidth, float screenHeight) {
        float virtualAspect = virtualWidth / virtualHeight;
        float aspect = screenWidth / screenHeight;
        if (aspect > virtualAspect || (Math.abs(aspect - virtualAspect) < 0.01f)) {
            return virtualHeight;
        } else {
            return virtualWidth / aspect;
        }
    }

}
```

So, if we have a virtual area of 640x480 and want to show it on a screen of 800x480, we can follow the next steps to get the proper values to use as the camera viewport for that screen:

```java
VirtualViewport virtualViewport = new VirtualViewport(640, 480);
float realViewportWidth = virtualViewport.getWidth(800, 480);
float realViewportHeight = virtualViewport.getHeight(800, 480);
// now set the camera viewport values
camera.setViewportFor(realViewportWidth, realViewportHeight);
```
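To check the numbers without LibGDX, here is the same aspect-ratio math from VirtualViewport inlined as a standalone sketch (the class and method names are ours):

```java
public class ViewportMath {

    // same decision as VirtualViewport.getWidth: if the screen is wider than
    // the virtual area, extend the width; otherwise keep the virtual width
    public static float viewportWidth(float vw, float vh, float sw, float sh) {
        float virtualAspect = vw / vh;
        float aspect = sw / sh;
        return (aspect > virtualAspect || Math.abs(aspect - virtualAspect) < 0.01f)
                ? vh * aspect : vw;
    }

    // same decision as VirtualViewport.getHeight
    public static float viewportHeight(float vw, float vh, float sw, float sh) {
        float virtualAspect = vw / vh;
        float aspect = sw / sh;
        return (aspect > virtualAspect || Math.abs(aspect - virtualAspect) < 0.01f)
                ? vh : vw / aspect;
    }
}
```

For the 640x480 virtual area on an 800x480 screen this gives a camera viewport of 800x480: the wider screen simply sees extra width while the whole virtual area stays visible.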

### OrthographicCameraWithVirtualViewport

To simplify things when using the LibGDX library, we created a subclass of LibGDX's OrthographicCamera with specific behavior to update the camera viewport using the VirtualViewport values. Here is its code:

```java
public class OrthographicCameraWithVirtualViewport extends OrthographicCamera {

    Vector3 tmp = new Vector3();
    Vector2 origin = new Vector2();
    VirtualViewport virtualViewport;

    public void setVirtualViewport(VirtualViewport virtualViewport) {
        this.virtualViewport = virtualViewport;
    }

    public OrthographicCameraWithVirtualViewport(VirtualViewport virtualViewport) {
        this(virtualViewport, 0f, 0f);
    }

    public OrthographicCameraWithVirtualViewport(VirtualViewport virtualViewport, float cx, float cy) {
        this.virtualViewport = virtualViewport;
        this.origin.set(cx, cy);
    }

    public void setPosition(float x, float y) {
        position.set(x - viewportWidth * origin.x, y - viewportHeight * origin.y, 0f);
    }

    @Override
    public void update() {
        float left = zoom * -viewportWidth / 2 + virtualViewport.getVirtualWidth() * origin.x;
        float right = zoom * viewportWidth / 2 + virtualViewport.getVirtualWidth() * origin.x;
        float top = zoom * viewportHeight / 2 + virtualViewport.getVirtualHeight() * origin.y;
        float bottom = zoom * -viewportHeight / 2 + virtualViewport.getVirtualHeight() * origin.y;

        projection.setToOrtho(left, right, bottom, top, Math.abs(near), Math.abs(far));
        combined.set(projection);
        Matrix4.mul(combined.val, view.val);
        invProjectionView.set(combined);
        Matrix4.inv(invProjectionView.val);
        frustum.update(invProjectionView);
    }

    /**
     * This must be called in ApplicationListener.resize() in order to correctly update the camera viewport.
     */
    public void updateViewport() {
        setToOrtho(false, virtualViewport.getWidth(), virtualViewport.getHeight());
    }
}
```

### MultipleVirtualViewportBuilder

This class allows us to build a proper VirtualViewport given the minimum and maximum areas we want to support, performing the logic we explained in the previous post. For example, if we have a minimum area of 800x480 and a maximum area of 854x600, then given a device of 480x320 (3:2) it will return a VirtualViewport of 854x570: a resolution that contains the minimum area, fits inside the maximum area, and has the same aspect ratio as 480x320.

```java
public class MultipleVirtualViewportBuilder {

    private final float minWidth;
    private final float minHeight;
    private final float maxWidth;
    private final float maxHeight;

    public MultipleVirtualViewportBuilder(float minWidth, float minHeight, float maxWidth, float maxHeight) {
        this.minWidth = minWidth;
        this.minHeight = minHeight;
        this.maxWidth = maxWidth;
        this.maxHeight = maxHeight;
    }

    public VirtualViewport getVirtualViewport(float width, float height) {
        if (width >= minWidth && width <= maxWidth && height >= minHeight && height <= maxHeight)
            return new VirtualViewport(width, height, true);

        float aspect = width / height;

        float scaleForMinSize = minWidth / width;
        float scaleForMaxSize = maxWidth / width;

        float virtualViewportWidth = width * scaleForMaxSize;
        float virtualViewportHeight = virtualViewportWidth / aspect;

        if (insideBounds(virtualViewportWidth, virtualViewportHeight))
            return new VirtualViewport(virtualViewportWidth, virtualViewportHeight, false);

        virtualViewportWidth = width * scaleForMinSize;
        virtualViewportHeight = virtualViewportWidth / aspect;

        if (insideBounds(virtualViewportWidth, virtualViewportHeight))
            return new VirtualViewport(virtualViewportWidth, virtualViewportHeight, false);

        return new VirtualViewport(minWidth, minHeight, true);
    }

    private boolean insideBounds(float width, float height) {
        if (width < minWidth || width > maxWidth)
            return false;
        if (height < minHeight || height > maxHeight)
            return false;
        return true;
    }

}
```

In case the aspect ratio is not supported, it will return the minimum area.
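The decision logic can be checked in isolation. The following standalone sketch mirrors the builder's math (the array return and all names are ours): scale the device resolution to match the maximum width first, fall back to the minimum width, and finally give up with the minimum area.

```java
public class ViewportFit {

    // returns { virtualWidth, virtualHeight } for a device of w x h
    public static float[] fit(float w, float h,
                              float minW, float minH, float maxW, float maxH) {
        float aspect = w / h;
        // try matching the maximum width
        float vw = maxW, vh = maxW / aspect;
        if (inside(vw, vh, minW, minH, maxW, maxH)) return new float[] { vw, vh };
        // fall back to the minimum width
        vw = minW; vh = minW / aspect;
        if (inside(vw, vh, minW, minH, maxW, maxH)) return new float[] { vw, vh };
        // unsupported aspect ratio: use the minimum area
        return new float[] { minW, minH };
    }

    static boolean inside(float w, float h, float minW, float minH, float maxW, float maxH) {
        return w >= minW && w <= maxW && h >= minH && h <= maxH;
    }
}
```

For the 480x320 (3:2) example this returns roughly 854x569, the 854x570 mentioned above; for a 480x360 (4:3) device it returns 800x600.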

### Floating elements

As we explained in the previous post, there are some cases where we need stuff that should always stay at a fixed position on the screen, for example the audio and music buttons in Clash of the Olympians. To achieve that, we make the position of those buttons depend on the VirtualViewport. The next section, where we explain how to use everything together, includes an example of a floating element.

### Using the code together

Finally, here is an example showing how to use these concepts in a LibGDX application:

```java
public class VirtualViewportExampleMain extends com.badlogic.gdx.Game {

    private OrthographicCameraWithVirtualViewport camera;

    // extra stuff for the example
    private SpriteBatch spriteBatch;
    private Sprite minimumAreaSprite;
    private Sprite maximumAreaSprite;
    private Sprite floatingButtonSprite;
    private BitmapFont font;

    private MultipleVirtualViewportBuilder multipleVirtualViewportBuilder;

    @Override
    public void create() {
        multipleVirtualViewportBuilder = new MultipleVirtualViewportBuilder(800, 480, 854, 600);
        VirtualViewport virtualViewport = multipleVirtualViewportBuilder.getVirtualViewport(Gdx.graphics.getWidth(), Gdx.graphics.getHeight());

        camera = new OrthographicCameraWithVirtualViewport(virtualViewport);
        // centers the camera at 0, 0 (the center of the virtual viewport)
        camera.position.set(0f, 0f, 0f);

        // extra code
        spriteBatch = new SpriteBatch();

        Pixmap pixmap = new Pixmap(64, 64, Format.RGBA8888);
        pixmap.setColor(Color.WHITE);
        pixmap.fillRectangle(0, 0, 64, 64);

        minimumAreaSprite = new Sprite(new Texture(pixmap));
        minimumAreaSprite.setPosition(-400, -240);
        minimumAreaSprite.setSize(800, 480);
        minimumAreaSprite.setColor(0f, 1f, 0f, 1f);

        maximumAreaSprite = new Sprite(new Texture(pixmap));
        maximumAreaSprite.setPosition(-427, -300);
        maximumAreaSprite.setSize(854, 600);
        maximumAreaSprite.setColor(1f, 1f, 0f, 1f);

        floatingButtonSprite = new Sprite(new Texture(pixmap));
        floatingButtonSprite.setPosition(virtualViewport.getVirtualWidth() * 0.5f - 80, virtualViewport.getVirtualHeight() * 0.5f - 80);
        floatingButtonSprite.setSize(64, 64);
        floatingButtonSprite.setColor(1f, 1f, 1f, 1f);

        font = new BitmapFont();
        font.setColor(Color.BLACK);
    }

    @Override
    public void resize(int width, int height) {
        super.resize(width, height);

        VirtualViewport virtualViewport = multipleVirtualViewportBuilder.getVirtualViewport(Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
        camera.setVirtualViewport(virtualViewport);

        camera.updateViewport();
        // centers the camera at 0, 0 (the center of the virtual viewport)
        camera.position.set(0f, 0f, 0f);

        // relocate floating stuff
        floatingButtonSprite.setPosition(virtualViewport.getVirtualWidth() * 0.5f - 80, virtualViewport.getVirtualHeight() * 0.5f - 80);
    }

    @Override
    public void render() {
        super.render();
        Gdx.gl.glClearColor(1f, 0f, 0f, 1f);
        Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
        camera.update();

        // render stuff...
        spriteBatch.setProjectionMatrix(camera.combined);
        spriteBatch.begin();
        maximumAreaSprite.draw(spriteBatch);
        minimumAreaSprite.draw(spriteBatch);
        floatingButtonSprite.draw(spriteBatch);
        font.draw(spriteBatch, String.format("%1$sx%2$s", Gdx.graphics.getWidth(), Gdx.graphics.getHeight()), -20, 0);
        spriteBatch.end();
    }

    public static void main(String[] args) {
        LwjglApplicationConfiguration config = new LwjglApplicationConfiguration();

        config.title = VirtualViewportExampleMain.class.getName();
        config.width = 800;
        config.height = 480;
        config.fullscreen = false;
        config.useGL20 = true;
        config.useCPUSynch = true;
        config.forceExit = true;
        config.vSyncEnabled = true;

        new LwjglApplication(new VirtualViewportExampleMain(), config);
    }

}
```

In the example there are three colors: green represents the minimum supported area, yellow the maximum supported area, and red the area outside. If we see red, it means that aspect ratio is not supported. There is also a floating element, colored white, which is always relocated to the top right corner of the screen, unless we are on an unsupported aspect ratio, in which case it is located in the top right corner of the green area.

The next video shows the example in action:

UPDATE: you can download the source code to run on Eclipse from here.

### Conclusion

In these two blog posts we explained, in a simplified way, how we managed to support different aspect ratios and resolutions for Clash of the Olympians. This technique could be an acceptable way of handling different screen sizes for a wide range of games, and it is not hard to use.

As always, we hope you liked it and that it proves useful when developing your games. Opinions and suggestions are welcome in the comments 🙂 and please share it if you think other people could benefit from this code.


# Our solution to handle multiple screen sizes in Android - Part one

Developing games for multiple devices is not an easy task. Given the variety of devices, one of the most common problems is having to handle multiple screen sizes, which means different resolutions and aspect ratios.

In this blog post we want to share what we did to minimize this problem when making Ironhide's Clash of the Olympians for Android.

In the next sections we are going to show some common ways of handling the multiple screens problem and then our way.

### Stretching the content

One common approach when developing a game is making the game for a fixed resolution, for example, making the game for 800x480.

Based on that, you can have the next layout in one of your game's screens:

Main screen of Clash of the Olympians in a 800x480 device.

Then, to support other screen sizes the idea is to stretch the content to the other device screen:

Main screen on a 800x600 device, stretched from 800x480.

The main problem is that the aspect ratio is affected and that is visually unacceptable.

### Stretching + keeping aspect ratio

To solve part of the previous problem, one common technique is to stretch while keeping the correct aspect ratio, adding dead space at the borders of the screen so the real game area keeps the same aspect ratio on different devices. For example:

Main screen in a 800x600 device with borders.

Main screen in a 854x480 device with borders.

This is an easy way to attack this multiple screen size problem, you can even create some nice borders instead of the black borders shown in the previous image to improve how it looks.

However, in some cases this is not acceptable either since it doesn't look so good or it feels like the game wasn't made for that device.

### Our solution: Using a Virtual Viewport

Our approach consists of adapting what is shown in the game screen area to the device screen size.

First, we define a range of aspect ratios we want to support. In the case of Clash we defined 4:3 (800x600) and 16:9 (854x480) as our border-case aspect ratios, so all aspect ratios between those two should be supported.

Given those two aspect ratios, we defined our maximum area as 854x600 and our minimum area as 800x480 (the union and intersection of 800x600 and 854x480, respectively). The idea is to cover the maximum area with stuff, but the important stuff (buttons, information, etc.) should always be inside the minimum area.

The red rectangle shows the minimum area while the blue rectangle shows the maximum area.

Then, given a device resolution we calculate an area that matches the device aspect ratio and is included in the virtual area. For example, given a device with a resolution of 816x544 (4:3), this is what is shown:

The green rectangle shows the matching area for 816x544.

This is how the main screen is shown in a 816x544 device.

If the device resolution is bigger than the maximum area or smaller than the minimum area we defined, for example a screen of 480x320 (3:2), what we do is calculate the aspect ratio and find a corresponding match for that aspect ratio inside the area we defined. In this example, one match is 800x534, since it has a 3:2 aspect ratio and fits inside our virtual area. We then scale down to fit the screen.
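As a sketch of that calculation (the names are ours), the matching height for a chosen width follows directly from the device's aspect ratio:

```java
public class AspectMatch {

    // height that pairs with the given width while keeping the device's aspect ratio
    public static float heightFor(float width, float deviceWidth, float deviceHeight) {
        return width / (deviceWidth / deviceHeight);
    }
}
```

`heightFor(800, 480, 320)` gives approximately 533.33, which rounds to the 800x534 match used above.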

The green rectangle shows the calculated area for a resolution of 800x534 (matching the aspect of the 480x320 device).

This is what is shown of the main screen in a 480x320 device (click to enlarge the image).

### Floating elements

For some elements of the game, such as buttons, keeping a fixed world position across different screen sizes doesn't look good, so we make them floating elements. That means they are always at the same screen position; the next images show an example with the main screen buttons:

Main screen's buttons distribution for a 854x480 device.

Main screen's buttons distribution for a 800x600 device. As you can see, buttons are relocated to match the screen size.

Finally, we want to show a video of this multiple screen sizes auto adjustment in real time:

Adjusting the game to the screen size in real time.

### Some limitations

As we scale up/down in some cases to match the corresponding screen, some blur could be perceived on some devices, since we are using linear filtering and the final position of the elements after the camera transformations may not be at integer positions. This problem is minimized with higher-density devices and assets.

Layouts could also change between different devices; for example, the layout for a phone could be different from the layout for a tablet.

Text is a special case: simply downscaling rendered text is not a correct solution, since it could become unreadable. You may have to re-layout text on lower-resolution devices to show it bigger and keep it readable.

### Conclusion

If you design your game screens following this approach, it is not that hard to support multiple screen sizes in an acceptable way. However, there are still a lot of details to take care of, like the problems we talked about in the previous section.

In the next part of this blog post we will show some code based on LibGDX for those interested in how we implemented all this.

Thanks for reading and hope you liked it.


# Drawing a projectile trajectory like Angry Birds using LibGDX

We had to implement a projectile trajectory like Angry Birds for our current game and we wanted to share a bit how we did it.

### Introduction

In Angry Birds, the trajectory is drawn after you fire a bird, showing its path to help you decide the next shot. Knowing the trajectory of the current projectile wasn't strictly needed in that version of the game, since the slingshot tells you, in part, where the current bird is going.

In Angry Birds Space, they changed this to show the trajectory of the current bird, because the game mechanics changed: birds can fly differently depending on the gravity of the planets, so the slingshot no longer tells you the real direction. That was the right change to help the player with the new rules.

We wanted to test how drawing a trajectory, like Angry Birds Space does for the next shot, could help the player.

### Calculating the trajectory

The first step is to calculate the position function f(t) for the projectile trajectory. In our case, projectiles behave normally (there are no mini planets), so the formula is the standard equation of projectile motion:

x(t) = x0 + vx · t
y(t) = y0 + vy · t + ½ · g · t²

We found an implementation of the equation on Stack Overflow; here is the code:

```java
class ProjectileEquation {

    public float gravity;
    public Vector2 startVelocity = new Vector2();
    public Vector2 startPoint = new Vector2();

    public float getX(float t) {
        return startVelocity.x * t + startPoint.x;
    }

    public float getY(float t) {
        return 0.5f * gravity * t * t + startVelocity.y * t + startPoint.y;
    }

}
```

With that class we have an easy way to calculate x and y coordinates given the time.
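For a quick check of the equation without LibGDX's Vector2, here is a plain-Java version with hypothetical sample values (gravity is negative because y points up; none of these numbers are from the game):

```java
public class Projectile {

    // sample values for illustration only
    public float gravity = -10f;
    public float startX = 0f, startY = 0f;
    public float velocityX = 10f, velocityY = 10f;

    public float getX(float t) {
        return velocityX * t + startX;
    }

    public float getY(float t) {
        return 0.5f * gravity * t * t + velocityY * t + startY;
    }
}
```

With these values the projectile peaks at t = 1 (y = 5) and lands back at y = 0 at t = 2, which is easy to verify by hand.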

### Drawing it to the screen

If we follow a similar approach of Angry Birds, we can draw colored points for the projectile trajectory.

In our case, we created a LibGDX Actor dedicated to drawing the trajectory of the projectile. It first calculates the trajectory using the previous class, and then renders it by drawing a Sprite at each point of the trajectory using SpriteBatch's draw method. Here is the code:

```java
public static class Controller {

    public float power = 50f;
    public float angle = 0f;

}

public static class TrajectoryActor extends Actor {

    private Controller controller;
    private ProjectileEquation projectileEquation;
    private Sprite trajectorySprite;

    public int trajectoryPointCount = 30;
    public float timeSeparation = 1f;

    public TrajectoryActor(Controller controller, float gravity, Sprite trajectorySprite) {
        this.controller = controller;
        this.trajectorySprite = trajectorySprite;
        this.projectileEquation = new ProjectileEquation();
        this.projectileEquation.gravity = gravity;
    }

    @Override
    public void act(float delta) {
        super.act(delta);
        projectileEquation.startVelocity.set(controller.power, 0f);
        projectileEquation.startVelocity.rotate(controller.angle);
    }

    @Override
    public void draw(SpriteBatch batch, float parentAlpha) {
        float t = 0f;
        float width = this.width;
        float height = this.height;

        float timeSeparation = this.timeSeparation;

        for (int i = 0; i < trajectoryPointCount; i++) {
            float x = this.x + projectileEquation.getX(t);
            float y = this.y + projectileEquation.getY(t);

            batch.setColor(this.color);
            batch.draw(trajectorySprite, x, y, width, height);

            t += timeSeparation;
        }
    }

    @Override
    public Actor hit(float x, float y) {
        return null;
    }

}
```

The idea behind the Controller class is to be able to modify the values from outside the actor, using a class shared between different parts of the code.

### Further improvements

To make it look nicer, one possible improvement is to decrease the size of the trajectory points and reduce their opacity along the path.

To do that, we draw each point of the trajectory with a bit less alpha in the color and a smaller size, by changing the width and height when calling SpriteBatch.draw().
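A sketch of that per-point fade (the helper names are ours; the resulting alpha and size would be fed into the color and the width/height passed to SpriteBatch.draw):

```java
public class TrajectoryFade {

    // alpha goes from 1 at the first point down toward 0 at the last
    public static float alphaFor(int pointIndex, int pointCount) {
        return 1f - (float) pointIndex / pointCount;
    }

    // shrink the point size with the same factor
    public static float sizeFor(int pointIndex, int pointCount, float baseSize) {
        return baseSize * alphaFor(pointIndex, pointCount);
    }
}
```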

We also added a fade-in transition to show the trajectory instead of making it appear instantly; that works great too, but it is only in the game itself.

Another possible improvement, depending on the game you are making, is to separate the points by a fixed horizontal distance. To do that, we have to depend on x instead of t, so we added a method to the ProjectileEquation class that, given a fixed horizontal distance, returns the corresponding time step t that maintains that distance between points. Here is the code:

```java
public float getTForGivenX(float x) {
    return (x - startPoint.x) / startVelocity.x;
}
```

Now we can change the draw method of the TrajectoryActor to compute the time separation before starting to draw the points:

```java
float fixedHorizontalDistance = 10f;
timeSeparation = projectileEquation.getTForGivenX(fixedHorizontalDistance);
```

I'm not sure which is the best option between using x or t as the main variable; as I said before, I suppose it depends on the game you are making.
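As a quick sanity check of the fixed-distance idea, here is the same math as getTForGivenX, standalone and with startPoint.x = 0:

```java
public class FixedSpacing {

    // time between points so they are 'distance' units apart horizontally
    public static float timeForDistance(float distance, float velocityX) {
        return distance / velocityX;
    }
}
```

With a horizontal velocity of 50 and a spacing of 10 units, points end up 0.2 seconds apart.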

Here is a video showing the results:

If you want to see it working you can test the webstart of the prototypes project, or you can go to the code and see the dirty stuff.

### Conclusion

Drawing a trajectory is not hard if you know the correct formula, and it looks nice. It could also be used to help the player, maybe as part of the basic gameplay or maybe as a power-up.

Hope you like it.


# Area triggers using Box2D, Artemis and SVG paths

As we explained in previous posts, we are using Inkscape to design the levels of some of our games, in particular, our current project. In this post we want to share how we are making area triggers using Box2D sensor bodies, Artemis and SVG paths.

### What is an area trigger

By area trigger we mean an area that triggers something, an event for example, when an entity/game object enters it, in order to perform custom logic such as ending the game or showing a message. Some game engines provide this kind of feature, for example Unity3d with its Collider class and events like OnTriggerEnter.

### Building an area trigger in Inkscape

Basically, we use SVG paths with custom XML data to define the area trigger; the game level loader later parses them to create the corresponding game entities. The following screenshot shows an example of an area defined using Inkscape:

Right now, we export two values with the SVG path: the event we want to fire, identified by the XML attribute named eventId, and extra data for that event, identified by the XML attribute eventData. For example, for our current game we use the eventId `showTutorial` with the text we want to show to the player in the eventData attribute, like `"Welcome to the training grounds"`. The following example shows the XML data added to the SVG path:

```xml
<!-- illustrative example; the actual path data is omitted -->
<path
   d="..."
   eventId="showTutorial"
   eventData="Welcome to the training grounds" />
```

The exported data may depend on your framework or game, so you should export whatever data you need instead.
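As an illustration of the loading side, here is a sketch of how a level loader might read those custom attributes back, using the JDK's DOM parser. The attribute names match the ones above; everything else (class name, structure) is our own, not the game's actual loader:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Element;

public class TriggerLoader {

    // returns { eventId, eventData } read from a single SVG path element
    public static String[] readTrigger(String svgPathXml) {
        try {
            Element path = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(svgPathXml.getBytes(StandardCharsets.UTF_8)))
                    .getDocumentElement();
            return new String[] { path.getAttribute("eventId"), path.getAttribute("eventData") };
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

A real loader would walk the whole SVG document and build one trigger entity per annotated path; this just shows that the custom attributes survive the round trip.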

### Defining the area trigger inside the game

Inside the game, we have to define an entity/game object for the area trigger. In our current game, that entity is composed of a Box2D sensor body, with a shape built from the SVG path, and a Script with the logic to perform when the main character collides with it.

We use sensor bodies because they detect collisions without reacting to them, that is, without changing the angular and linear velocities of the bodies involved. As we explained in a previous post, we use custom builders to simplify building Box2D bodies and fixtures. Our current body declaration looks like this:

```java
Body body = bodyBuilder //
        .fixture(bodyBuilder.fixtureDefBuilder() //
                .polygonShape(vertices) // the vertices from the SVG path
                .categoryBits(Collisions.Triggers) // the collision category of this body
                .sensor() //
        ) //
        .position(0f, 0f) //
        .type(BodyType.StaticBody) //
        .angle(0f) //
        .userData(entity) //
        .build();
```

The previous code depends on specific stuff of the current game but it could be modified to be reused in other projects.

As we explained in another previous post, we are using a basic scripting framework over Artemis. Our current script to detect the collision looks like this:

```java
public static class TriggerWhenShipOverScript extends ScriptJavaImpl {

    private final String eventId;
    private final String eventData;

    EventManager eventManager;

    public TriggerWhenShipOverScript(String eventId, String eventData) {
        this.eventId = eventId;
        this.eventData = eventData;
    }

    @Override
    public void update(World world, Entity e) {
        PhysicsComponent physicsComponent = Components.getPhysicsComponent(e);
        Contacts contacts = physicsComponent.getContact();

        if (contacts.isInContact()) {
            eventManager.submit(eventId, eventData);
            e.delete();
        }
    }
}
```

In the current game, we are using this as a way to communicate with the player by showing messages from time to time, for example in a basic tutorial implementation. The next video shows an example of it working inside the game:

### Conclusion

The idea of this post is to share a common technique for triggering events when a game object enters an area, and the technique is not framework dependent: you could apply it using your own physics framework instead of Box2D and Artemis, a custom level file format instead of SVG, and the editor of your choice instead of Inkscape.


# Building 2d animations using Inkscape and Synfig

In this blog post we want to share a method to animate Inkscape SVG objects using Synfig Studio, trying to follow a similar approach to the Building 2d sprites from 3d models using Blender blog post.

### A small introduction about Inkscape

Inkscape is one of the best open-source, multi-platform and free tools to work with vector graphics using the open SVG standard.

After some time using Inkscape, I have learned how to do a lot of things and feel great using it. However, it lacks some features that would make it an even better tool, for example a way to animate objects by interpolating between different states, defining key frames on a timeline, among others.

It has some ways to create interpolations of an object between two different states, but they are hard to use in practice since they don't work with groups: if you have a complex object made of a group of several other objects, you have to interpolate each of them, and if you modify one of the key frames you have to interpolate everything again.

### Synfig comes into action

Synfig Studio is a free and open-source 2D animation tool, it works with vector graphics as well. It lets you create nice animations using a time line and key frames and lets you easily export the animation. However, it uses its own format, so you can't directly import an SVG. Luckily, the format is open and there are already some ways to transform from SVG to Synfig.

In particular, I tried an Inkscape extension named svg2sif, which lets you save files in Synfig format and seems to work fine (the page of the extension explains how to install it). I don't know the possible limitations of the svg2sif extension, so use it with caution and don't expect everything to work.

Now that we have the method defined, we will explain it by showing an example.

### Creating an object in Inkscape

We start by creating an Inkscape object to be animated later. For this mini tutorial I created a black creature named Bor...ahem! Gishus Maximus:

Here is the SVG if you are interested in it; sadly, WordPress doesn't support SVG files as media files.

With the model defined, we have to save it in Synfig format using the extension: go to "Save a Copy...", select the .sif format (added by the svg2sif extension), and save it.

### Animating the object in Synfig

Now that we have the Synfig file, we open it and voilà, we can animate it. However, there is a bug, probably in the svg2sif extension, where the timeline is missing. To fix it, we have to create a new document and copy the shape from the file exported by Inkscape into the new one.

The next step is to use your super animation skills and animate the object. In my case, I created a kind of eating animation by making a mouth, opening it slowly and then closing it fast:

Here is the Synfig file with the animation if you are interested in it.

To export the animation, use the "Show the Render Settings Dialog" button, configure how many frames per second you want, among other things, and then export using the Render button. You can export to different formats, for example a list of separate PNG files (one per animation frame) or an animated GIF. However, some of the formats can't be configured and the exported file was not what I wanted, so I preferred to export a list of PNG files and then use the convert tool to create the animated GIF:

Finally, here is a time-lapse of how I applied the method, if you want to watch it:

### Extra section: Importing the animation in your game

After we have the separate PNG files of the animation, we can create a sprite sheet or use other tools to create files that can be easily imported by a game framework. For this example, I used a Gimp plug-in named Sprite Tape to import all the separate PNG files and create a sprite sheet:

If you are a LibGDX user and want to use the Texture Packer, you can create a folder, copy the PNG files into it renaming them to animationname_01, animationname_02, etc., and let Texture Packer import them automatically.

### Conclusions

One problem with this method is that you can't easily modify your objects in Inkscape and then automatically import them into Synfig and update the current animation to work with them. So, once you have moved to Synfig you have to stay there to avoid a lot of duplicated work. This could be avoided if Inkscape provided a good animation extension.

Synfig Studio is a great tool but not the best, of course; like Gimp, Blender and others, it is not very intuitive, and it has some bugs that make it crash without reason. On the other hand, it is open source, free and multi-platform, and the best part is that it works well for what we need right now 😉

This method allows us to animate vector graphics, which is great since it is a way for programmers like us to animate their programmer art 😀

Finally, I am not an animation expert at all, so this blog post could be based on some wrong assumptions. If you are one, feel free to correct me and share your opinions.

As always, hope you like the post.


# Implementing transitions between screens

Using transitions between game screens is a great way to provide smoothness between screen changes, for example, fading out one screen and then fading in the next one. The next video shows an example of those effects in our Vampire Runner game.

In this post, we will show a possible implementation of transitions between screens using LibGDX; however, the code should be independent enough to be easily ported to other frameworks.

Although we implemented it using our own concept of GameState, we will use the LibGDX Screen concept in this post to make it easier to understand.

### Implementation

The implementation is based on the concept of a TransitionEffect. A TransitionEffect holds the render logic of one of the effects of the transition being performed.

```
abstract class TransitionEffect {

	// returns a value between 0 and 1 representing the level of completion of the transition
	protected float getAlpha() { .. }

	void update(float delta) { .. }

	abstract void render(Screen current, Screen next);

	boolean isFinished() { .. }

	TransitionEffect(float duration) { .. }
}
```
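As a rough sketch of how the stubbed methods could be filled in (assuming a simple linear, time-based notion of completion — our real implementation may differ), the timing logic alone looks like this:

```java
// Framework-free sketch of the TransitionEffect timing logic; the
// render(Screen, Screen) method is omitted so the snippet stands alone.
abstract class TransitionEffect {

	private final float duration; // total duration in seconds
	private float time = 0f;      // elapsed time in seconds

	TransitionEffect(float duration) {
		this.duration = duration;
	}

	// returns a value between 0 and 1 representing the level of completion
	protected float getAlpha() {
		return Math.min(time / duration, 1f);
	}

	void update(float delta) {
		time += delta;
	}

	boolean isFinished() {
		return time >= duration;
	}
}
```

With this in place, a fade effect can simply use getAlpha() as the opacity of the quad it draws over the screen.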

An implementation example of a TransitionEffect is FadeOutTransitionEffect, which performs a fade-out effect:

```
class FadeOutTransitionEffect extends TransitionEffect {

	Color color = new Color();

	@Override
	public void render(Screen current, Screen next) {
		current.render();
		color.set(0f, 0f, 0f, getAlpha());
		// draw a quad over the screen using the color
	}
}
```

Then, in order to perform a transition between Screens, we need a custom Screen with the logic to render each transition effect and to set the next Screen when the transition is over. This is a possible implementation:

```
class TransitionScreen implements Screen {
	Game game;

	Screen current;
	Screen next;

	int currentTransitionEffect;
	ArrayList<TransitionEffect> transitionEffects;

	TransitionScreen(Game game, Screen current, Screen next, ArrayList<TransitionEffect> transitionEffects) {
		this.game = game;
		this.current = current;
		this.next = next;
		this.transitionEffects = transitionEffects;
		this.currentTransitionEffect = 0;
	}

	public void render(float delta) {
		if (currentTransitionEffect >= transitionEffects.size()) {
			game.setScreen(next);
			return;
		}

		transitionEffects.get(currentTransitionEffect).update(delta);
		transitionEffects.get(currentTransitionEffect).render(current, next);

		if (transitionEffects.get(currentTransitionEffect).isFinished())
			currentTransitionEffect++;
	}
}
```
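The effect-chaining logic (run each effect until it reports it is finished, then move on to the next) can be exercised without any rendering. Here is a framework-free sketch, where DummyEffect is a hypothetical stand-in for a TransitionEffect:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical stand-in for a TransitionEffect: finishes after a fixed duration.
class DummyEffect {
	private final float duration;
	private float time = 0f;

	DummyEffect(float duration) { this.duration = duration; }

	void update(float delta) { time += delta; }

	boolean isFinished() { return time >= duration; }
}

// The effect-chaining logic from a transition screen's render method, isolated:
class EffectChain {
	private final List<DummyEffect> effects;
	private int current = 0;

	EffectChain(List<DummyEffect> effects) { this.effects = effects; }

	// advances the current effect; returns true once every effect has finished
	// (the moment the transition screen would switch to the next Screen)
	boolean update(float delta) {
		if (current >= effects.size())
			return true;

		effects.get(current).update(delta);

		if (effects.get(current).isFinished())
			current++;

		return current >= effects.size();
	}
}
```

Note this sketch reports completion as soon as the last effect finishes, while the Screen version above performs the switch on the following render call; either way, the effects run strictly one after another.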

Finally, each time we want to perform a transition between two screens, we have to create a new TransitionScreen with the current and next Screens and a collection of the effects we want. For example:

```
Screen current = game.getScreen();
Screen next = new HighscoresScreen();

ArrayList<TransitionEffect> effects = new ArrayList<TransitionEffect>();
// fade out the current screen, then fade in the next one
effects.add(new FadeOutTransitionEffect(1f));
effects.add(new FadeInTransitionEffect(1f));

Screen transitionScreen = new TransitionScreen(game, current, next, effects);

game.setScreen(transitionScreen);
```

As we mentioned before, we use our own concepts in our implementation. If you want to see our code, take a look at the classes ApplicationListenerGameStateBasedImpl, GameState and GameStateTransitionImpl (do not expect the best code in the world).

### Conclusion

Adding transitions between game screens gives users a feeling of smoothness, and we believe it is worth the effort.

Also, we like that the current design lets you implement different effects for the transitions; we only showed fade out and fade in as examples because they are really simple to implement and they are the only ones we are using in our games.

As always, hope you like the post.


# Toasting with LibGDX Scene2D and Animation4j

For our latest Vampire Runner update we switched to LibGDX scene2d instead of the Android GUI. The main reason for the change is that we wanted a common GUI API for Android and PC, and sadly we can't achieve that with the Android API. With LibGDX scene2d we can code once and run on both platforms.

In particular, the toast feature of the Android API was really interesting to have, and we want to share how we implemented it using LibGDX scene2d.

### Toasting

A toast is defined as a scene2d Window that shows some text and disappears after a while. This is pseudo code to give an idea of how to create that toast window:

```
Actor toast(String text, float time, Skin skin) {
	Window window = new Window(skin);
	...
	window.action(new Action() {
		act(float delta) {
			// update the animation
			// if the animation is finished, remove the window from the stage
		}
	});
	...
	return window;
}
```

To animate the toast, we create a TimelineAnimation using animation4j, defining that the window should move from outside the screen to inside the screen, wait some time, and then go out of the screen again. The code looks like this:

```
TimelineAnimation toastAnimation = Builders.animation( //
		Builders.timeline() //
				.value(Builders.timelineValue(window, Scene2dConverters.actorPositionTypeConverter) //
						.keyFrame(0f, new float[] { window.x, outsideY }) //
						.keyFrame(1f, new float[] { window.x, insideY }) //
						.keyFrame(4f, new float[] { window.x, insideY }) //
						.keyFrame(5f, new float[] { window.x, outsideY }) //
				) //
		) //
		.started(true) //
		.delay(0f) //
		.speed(5f / time) //
		.build();
```

That code creates a new animation which modifies the position of the Window each time its update() method is called.
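Under the hood, this kind of timeline animation boils down to linearly interpolating a value between key frames. The following is a simplified, framework-free sketch of that idea (not animation4j's actual code), interpolating a single float such as the window's y position:

```java
// Simplified key frame interpolation: times[] must be increasing, and
// values[i] is the value at times[i]. Between frames we interpolate linearly.
class Timeline {
	private final float[] times;
	private final float[] values;

	Timeline(float[] times, float[] values) {
		this.times = times;
		this.values = values;
	}

	float valueAt(float t) {
		if (t <= times[0])
			return values[0];
		for (int i = 0; i < times.length - 1; i++) {
			if (t <= times[i + 1]) {
				float alpha = (t - times[i]) / (times[i + 1] - times[i]);
				return values[i] + alpha * (values[i + 1] - values[i]);
			}
		}
		return values[values.length - 1]; // past the last key frame
	}
}
```

With the key frames from the snippet above — outside at t=0, inside from t=1 to t=4, outside again at t=5 — the window slides in, holds, and slides out.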

Of course, you can animate the Window using LibGDX custom Actions or another animation framework like Universal Tween Engine, that is up to you.

If you want to see the code itself, take a look at the Actor factory named Actors in our commons-gdx repository on GitHub.

In our subclass of Game, we added an empty Stage updated in each render() method, and a toast(String) method which creates a toast as explained before, using a default Skin and time.

```
MyGame extends Game {

	Stage stage;
	float defaultTime;
	Skin defaultSkin;

	render() {
		// all our game update and render logic
		...
		stage.act(delta);
		stage.draw();
	}

	toast(String text) {
		stage.addActor(toast(text, defaultTime, defaultSkin));
	}
}
```

So, if we want to toast about something, we only have to call game.toast("something") and voilà.

To see a running example, you can run the Gui.Scene2dToastPrototype from our prototypes webstart (recommended), or watch the next video:

### Conclusion

Despite still being a bit incomplete and buggy, the scene2d API is quite easy to use and it is great if you want to do simple stuff.

Using scene2d is great for our simple GUI needs because we can quickly test all the stuff on PC. In Vampire Runner we are using scene2d for the feedback dialog, the new-version-available dialog and the change username screen.

An interesting thing to keep in mind when using the scene2d API is that you can make your own Skin to achieve a more integrated look and feel.

As always, hope you like the post and that it could be of help.


# How we use Box2D with Artemis

As you may know from our previous posts or from your personal knowledge (obviously), Box2D is a 2D physics engine and Artemis is an entity system framework. Box2D is used to add physics behavior to games, although it can also be used just to detect collisions (that means no dynamic behavior). In this post, we want to share a bit of how we are using both frameworks together.

### Introduction

The main idea is to react to physics events, like two bodies colliding, to perform some game logic. For example, whenever the main character ship touches an asteroid, it explodes.

When you use Artemis, the game logic is done in an Artemis System, or in a Script if you use our customization. The ideal situation would be to be able to check in your game logic which entities are in contact. In order to make that work, you have to find a way to link a Box2D contact with an Artemis Entity and vice versa.

### Our solution

The first thing we do is, for each Artemis Entity we want to have physics behavior, we add a PhysicsComponent containing the Box2D Body of the Entity and a Contacts instance where all the Box2D contacts for that Body are stored. Also, in order to get the Entity from the Body, we set the Body's userData to point to the Entity.

The Contacts concept gives us useful methods to get information about contacts and the API looks like this:

```
getContactCount() : int - returns the number of contacts
getContact(index: int) : Contact - returns the contact information
```

And our Contact concept API, returned by the Contacts getContact() method, looks like this:

```
getMyFixture() : Fixture - returns the fixture in contact of the Contacts owner Entity.
getOtherFixture() : Fixture - returns the fixture of the other Entity.
getNormal() : Vector2 - returns the normal of the contact.
```

(note: we decided to make a deep copy of the contact information, since that is recommended in the Box2D manual if you use a ContactListener)
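To illustrate, here is a minimal, hypothetical sketch of what such a Contacts container could look like. Our real class stores Box2D fixtures and normals; here plain Strings stand in for the fixtures so the snippet is self-contained:

```java
import java.util.ArrayList;
import java.util.List;

// Deep copy of a single Box2D contact: the data is copied out because Box2D
// recycles its own contact objects after the listener callback returns.
class ContactInfo {
	final String myFixture;    // stand-in for com.badlogic.gdx.physics.box2d.Fixture
	final String otherFixture;
	final float normalX, normalY;

	ContactInfo(String myFixture, String otherFixture, float normalX, float normalY) {
		this.myFixture = myFixture;
		this.otherFixture = otherFixture;
		this.normalX = normalX;
		this.normalY = normalY;
	}
}

// Per-entity contact list, filled and drained by the ContactListener.
class Contacts {
	private final List<ContactInfo> contacts = new ArrayList<ContactInfo>();

	void addContact(ContactInfo contact) { contacts.add(contact); }

	void removeContact(ContactInfo contact) { contacts.remove(contact); }

	boolean isInContact() { return !contacts.isEmpty(); }

	int getContactCount() { return contacts.size(); }

	ContactInfo getContact(int index) { return contacts.get(index); }
}
```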

Then, we have a ContactListener (named PhysicsListener) which, whenever a contact is reported (begin or end), gets the bodies from the contact, gets the entities from each body's userData, and then adds or removes the contact data to/from each Entity's PhysicsComponent using its Contacts instance.

(note: we decided to use a custom ContactListener since it is recommended in the Box2D manual)

Finally, in each Artemis System or Script, we use the Entity's PhysicsComponent to get the contact data and proceed to do the logic we want, for example, destroy the character, enable some special ability, etc.

Here is an example of how we use it inside a Script from our Leave me Alone game:

```
public void update(World world, Entity e) {
	PhysicsComponent physicsComponent = Components.getPhysicsComponent(e);

	Contacts contacts = physicsComponent.getContacts();

	if (!contacts.isInContact())
		return;

	boolean shouldExplode = false;

	for (int i = 0; i < contacts.getContactCount(); i++) {
		Contact contact = contacts.getContact(i);
		Entity otherEntity = (Entity) contact.getOtherFixture().getBody().getUserData();

		GroupComponent groupComponent = Components.getGroupComponent(otherEntity);

		if (groupComponent == null)
			continue;

		if (groupComponent.group.equals(Groups.EnemyCharacter)) {
			shouldExplode = true;
			break;
		}
	}

	if (shouldExplode)
		eventManager.dispatch(Events.MainExploded, e);
}
```

If you use Box2D and are starting to use Artemis, or vice versa, hope this post could help you. Otherwise, I hope you like it anyway.

Also, if you use Artemis with Box2D in another way, it would be great to hear your point of view.

Thanks.


# Modifying textures using LibGDX Pixmap at runtime - Explained

We have previously shown a bit of how we were using the LibGDX Pixmap to modify textures at runtime here and here, for a game prototype we were working on. In this post I want to share more detail about how we do that. The objective was to make destructible terrain like in Worms 2.

### Introduction

When you work with OpenGL textures, you can't directly modify their pixels whenever you want, since they live in the OpenGL context. To modify them you have to upload an array of bytes using glTexImage2D or glTexSubImage2D. The problem is that you have to maintain, on the application side, an array of bytes representing the modifications you want to make.

To simplify working with byte arrays representing images, LibGDX provides a useful class named Pixmap, which is a map of pixels kept in local memory with methods that call into a native library to perform all modifications with better performance.

### Moving data from Pixmap to OpenGL Texture

In our prototypes, we wanted to remove part of the terrain whenever a missile touched it, like a Worms 2 explosion. That means we needed some way to detect collisions between the missile and the terrain, and then a way to remove pixels from a texture.

We simplified the first problem by getting the color of the pixel at the missile's position and checking whether it was transparent or not. A more correct solution would be using a bitmap mask to check collisions between pixels, but we wanted to keep things simple for now.

For the second problem, given the explosion radius of the missile, we used the Pixmap fillCircle method, previously setting the color to (0,0,0,0) (fully transparent) and disabling Pixmap blending so those pixels are overwritten.
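The erase step boils down to overwriting every pixel inside a circle with a fully transparent value, which is what fillCircle does for us with blending disabled. A framework-free sketch of the same idea over a plain alpha mask:

```java
// Erases a circle from an alpha mask: every pixel inside the radius is set
// to 0 (fully transparent), ignoring any blending, much like fillCircle
// with Pixmap blending disabled.
class AlphaMask {
	final byte[] alpha;
	final int width, height;

	AlphaMask(int width, int height) {
		this.width = width;
		this.height = height;
		this.alpha = new byte[width * height];
		java.util.Arrays.fill(alpha, (byte) 255); // start fully opaque
	}

	void eraseCircle(int cx, int cy, int radius) {
		for (int y = Math.max(0, cy - radius); y <= Math.min(height - 1, cy + radius); y++) {
			for (int x = Math.max(0, cx - radius); x <= Math.min(width - 1, cx + radius); x++) {
				int dx = x - cx, dy = y - cy;
				if (dx * dx + dy * dy <= radius * radius)
					alpha[y * width + x] = 0;
			}
		}
	}

	boolean isTransparent(int x, int y) {
		return alpha[y * width + x] == 0;
	}
}
```

The isTransparent() check is also the essence of the simplified collision test mentioned above: sample the pixel at the missile's position and see whether it is still solid.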

But that only modified the pixmap data; we still needed to modify the OpenGL texture. To do that, we called glTexImage2D using the bytes of the pixmap as the new texture data, and that worked correctly.

### Transforming from world coordinates to Pixmap coordinates

One problem when working with pixmaps is that we have to map world coordinates (the position of the missile, for example) to coordinates inside the Pixmap.

This image shows the coordinate system of the Pixmap, it goes from 0 to width in x and 0 to height in y.

This image shows how we normally need to move, rotate and resize the Pixmap in a game.

To solve this, we use a LibGDX Sprite to maintain the Pixmap transformation, so we can easily move, rotate and scale it. Then, we can use that information to project a world coordinate to a Pixmap coordinate by applying the inverse transform; here is the code:

```
public void project(Vector2 position, float x, float y) {
	position.set(x, y);

	float centerX = sprite.getX() + sprite.getOriginX();
	float centerY = sprite.getY() + sprite.getOriginY();

	// apply the inverse transform: translate, rotate, then scale
	position.sub(centerX, centerY);
	position.rotate(-sprite.getRotation());

	float scaleX = pixmap.getWidth() / sprite.getWidth();
	float scaleY = pixmap.getHeight() / sprite.getHeight();

	position.x *= scaleX;
	position.y *= scaleY;

	position.add( //
			pixmap.getWidth() * 0.5f, //
			-pixmap.getHeight() * 0.5f //
	);

	position.y *= -1f;
}
```
```

(note: this is a first version at least; it could have bugs and could also be improved)

To simplify our work with all this stuff, we created a class named PixmapHelper which manages a Pixmap, a Texture and a Sprite, so we can move the Sprite wherever we want, and if we modify the pixmap through the PixmapHelper, the Texture is automatically updated and hence the Sprite as well (since it uses the Texture internally).

The next video shows how we tested the previous work in a prototype where we simulated cluster bombs (similar to Worms 2):

### Some adjustments to improve performance

Instead of always working with a full-size Pixmap, modifying it and then moving it to the OpenGL texture, we created smaller Pixmaps of fixed sizes: 32x32, 64x64, etc. Then, each time we needed to make an explosion, we used the best-fitting Pixmap for that explosion and called glTexSubImage2D instead of glTexImage2D to avoid updating untouched pixels. One limitation of this modification is that we have to create and maintain several fixed-size pixmaps, depending on the modification sizes. Our current largest quad is 256x256 (almost never used).
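Picking the best-fitting Pixmap can be as simple as choosing the smallest fixed size that covers the explosion's diameter. A small sketch of that selection (the 32-to-256 range matches the sizes mentioned above; the helper name is ours):

```java
class PixmapSizes {
	// Returns the smallest fixed pixmap size (32, 64, 128 or 256) that can
	// hold a circle of the given radius; falls back to the largest size.
	static int bestPixmapSize(int radius) {
		int diameter = radius * 2;
		for (int size = 32; size <= 256; size *= 2)
			if (size >= diameter)
				return size;
		return 256;
	}
}
```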

Then, we changed the PixmapHelper erase method to store each modification instead of performing it immediately, and we added an update method which performs all pending modifications together. This improvement allows us to call the PixmapHelper update method whenever we want, maybe once every three game updates or something like that.
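The deferred-modification idea can be sketched as a simple queue: erase() only records the operation, and update() flushes the whole batch at once. In the real PixmapHelper the flush is where the pixmap writes and the texture upload happen; here a Consumer stands in for that work:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Records erase operations and applies them all in one update() call, so the
// expensive texture refresh happens once per flush instead of once per erase.
class DeferredEraser {
	static final class Erase {
		final float x, y, radius;

		Erase(float x, float y, float radius) {
			this.x = x;
			this.y = y;
			this.radius = radius;
		}
	}

	private final List<Erase> pending = new ArrayList<Erase>();

	void erase(float x, float y, float radius) {
		pending.add(new Erase(x, y, radius));
	}

	// applies every pending operation and clears the queue;
	// returns how many operations were flushed
	int update(Consumer<Erase> apply) {
		for (Erase e : pending)
			apply.accept(e);
		int applied = pending.size();
		pending.clear();
		return applied;
	}
}
```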

### Conclusion

Despite using the LibGDX Pixmap for better performance, moving data to and from the OpenGL context is not a cheap operation; on Android devices this could mean some pauses when refreshing the modified textures with the new data. However, there is a lot of room for performance improvement. Some ideas are to work with a pixmap with fewer bits per pixel instead of RGBA8888 and use it as the collision context and as the mask of the real image (even using shaders), among others.

Finally, the technique looks really nice and we believe it could be used without problems in a simple game, but it is not ready yet to handle a bigger game like Worms 2.

Hope you like the technique, and if you use it, maybe you could also share your findings.

P.S.: In case you were wondering: yes, I love Worms 2.
