Our solution to handle multiple screen sizes in Android – Part two

February 13th, 2013

Continuing the previous blog post, in this post we are going to talk about the code behind the theory. It consists of three concepts: the VirtualViewport, the OrthographicCameraWithVirtualViewport and the MultipleVirtualViewportBuilder.

VirtualViewport

It defines a virtual area that contains the game content and provides a way to get the real width and height to use with a camera so that the virtual area is always shown. Here is the code of this class:

public class VirtualViewport {

	float virtualWidth;
	float virtualHeight;

	public float getVirtualWidth() {
		return virtualWidth;
	}

	public float getVirtualHeight() {
		return virtualHeight;
	}

	public VirtualViewport(float virtualWidth, float virtualHeight) {
		this(virtualWidth, virtualHeight, false);
	}

	public VirtualViewport(float virtualWidth, float virtualHeight, boolean shrink) {
		this.virtualWidth = virtualWidth;
		this.virtualHeight = virtualHeight;
	}

	public float getWidth() {
		return getWidth(Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
	}

	public float getHeight() {
		return getHeight(Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
	}

	/**
	 * Returns the viewport width needed so that the whole virtual viewport is shown on the screen.
	 * 
	 * @param screenWidth
	 *            The screen width.
	 * @param screenHeight
	 *            The screen height.
	 */
	public float getWidth(float screenWidth, float screenHeight) {
		float virtualAspect = virtualWidth / virtualHeight;
		float aspect = screenWidth / screenHeight;
		if (aspect > virtualAspect || (Math.abs(aspect - virtualAspect) < 0.01f)) {
			return virtualHeight * aspect;
		} else {
			return virtualWidth;
		}
	}

	/**
	 * Returns the viewport height needed so that the whole virtual viewport is shown on the screen.
	 * 
	 * @param screenWidth
	 *            The screen width.
	 * @param screenHeight
	 *            The screen height.
	 */
	public float getHeight(float screenWidth, float screenHeight) {
		float virtualAspect = virtualWidth / virtualHeight;
		float aspect = screenWidth / screenHeight;
		if (aspect > virtualAspect || (Math.abs(aspect - virtualAspect) < 0.01f)) {
			return virtualHeight;
		} else {
			return virtualWidth / aspect;
		}
	}

}

So, if we have a virtual area of 640x480 and want to show it on a screen of 800x480, we can follow these steps to get the proper values to use as the camera viewport for that screen:

VirtualViewport virtualViewport = new VirtualViewport(640, 480);
float realViewportWidth = virtualViewport.getWidth(800, 480);
float realViewportHeight = virtualViewport.getHeight(800, 480);
// now set the camera viewport values
camera.viewportWidth = realViewportWidth;
camera.viewportHeight = realViewportHeight;

OrthographicCameraWithVirtualViewport

In order to simplify the work when using the LibGDX library, we created a subclass of LibGDX's OrthographicCamera with specific behavior to update the camera viewport using the VirtualViewport values. Here is its code:

public class OrthographicCameraWithVirtualViewport extends OrthographicCamera {

	Vector3 tmp = new Vector3();
	Vector2 origin = new Vector2();
	VirtualViewport virtualViewport;
	
	public void setVirtualViewport(VirtualViewport virtualViewport) {
		this.virtualViewport = virtualViewport;
	}

	public OrthographicCameraWithVirtualViewport(VirtualViewport virtualViewport) {
		this(virtualViewport, 0f, 0f);
	}

	public OrthographicCameraWithVirtualViewport(VirtualViewport virtualViewport, float cx, float cy) {
		this.virtualViewport = virtualViewport;
		this.origin.set(cx, cy);
	}

	public void setPosition(float x, float y) {
		position.set(x - viewportWidth * origin.x, y - viewportHeight * origin.y, 0f);
	}

	@Override
	public void update() {
		float left = zoom * -viewportWidth / 2 + virtualViewport.getVirtualWidth() * origin.x;
		float right = zoom * viewportWidth / 2 + virtualViewport.getVirtualWidth() * origin.x;
		float top = zoom * viewportHeight / 2 + virtualViewport.getVirtualHeight() * origin.y;
		float bottom = zoom * -viewportHeight / 2 + virtualViewport.getVirtualHeight() * origin.y;

		projection.setToOrtho(left, right, bottom, top, Math.abs(near), Math.abs(far));
		view.setToLookAt(position, tmp.set(position).add(direction), up);
		combined.set(projection);
		Matrix4.mul(combined.val, view.val);
		invProjectionView.set(combined);
		Matrix4.inv(invProjectionView.val);
		frustum.update(invProjectionView);
	}

	/**
	 * This must be called in ApplicationListener.resize() in order to correctly update the camera viewport. 
	 */
	public void updateViewport() {
		setToOrtho(false, virtualViewport.getWidth(), virtualViewport.getHeight());
	}
}

MultipleVirtualViewportBuilder

This class allows us to build a suitable VirtualViewport given the minimum and maximum areas we want to support, performing the logic we explained in the previous post. For example, if we have a minimum area of 800x480 and a maximum area of 854x600, then, given a device of 480x320 (3:2), it will return a VirtualViewport of roughly 854x570: a resolution that contains the minimum area, fits inside the maximum area and has the same aspect ratio as 480x320.

public class MultipleVirtualViewportBuilder {

	private final float minWidth;
	private final float minHeight;
	private final float maxWidth;
	private final float maxHeight;

	public MultipleVirtualViewportBuilder(float minWidth, float minHeight, float maxWidth, float maxHeight) {
		this.minWidth = minWidth;
		this.minHeight = minHeight;
		this.maxWidth = maxWidth;
		this.maxHeight = maxHeight;
	}

	public VirtualViewport getVirtualViewport(float width, float height) {
		if (width >= minWidth && width <= maxWidth && height >= minHeight && height <= maxHeight)
			return new VirtualViewport(width, height, true);

		float aspect = width / height;

		float scaleForMinSize = minWidth / width;
		float scaleForMaxSize = maxWidth / width;

		float virtualViewportWidth = width * scaleForMaxSize;
		float virtualViewportHeight = virtualViewportWidth / aspect;

		if (insideBounds(virtualViewportWidth, virtualViewportHeight))
			return new VirtualViewport(virtualViewportWidth, virtualViewportHeight, false);

		virtualViewportWidth = width * scaleForMinSize;
		virtualViewportHeight = virtualViewportWidth / aspect;

		if (insideBounds(virtualViewportWidth, virtualViewportHeight))
			return new VirtualViewport(virtualViewportWidth, virtualViewportHeight, false);
		
		return new VirtualViewport(minWidth, minHeight, true);
	}
	
	private boolean insideBounds(float width, float height) {
		if (width < minWidth || width > maxWidth)
			return false;
		if (height < minHeight || height > maxHeight)
			return false;
		return true;
	}

}

In case the aspect ratio is not supported, it will return the minimum area.
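
For example, a small usage sketch reproducing the case described above:

	MultipleVirtualViewportBuilder builder = new MultipleVirtualViewportBuilder(800, 480, 854, 600);
	// a 480x320 (3:2) device
	VirtualViewport virtualViewport = builder.getVirtualViewport(480, 320);
	// virtualViewport is roughly 854x570: it contains the minimum area,
	// fits inside the maximum area and keeps the 3:2 aspect ratio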

Floating elements

As we explained in the previous post, there are some cases where we need stuff that should always be at a fixed position on the screen, for example, the audio and music buttons in Clash of the Olympians. In order to do that, we make the position of those buttons depend on the VirtualViewport. The next section, where we explain how to use everything together, includes an example of a floating element.

Using the code together

Finally, here is an example showing how to use these concepts in a LibGDX application:

public class VirtualViewportExampleMain extends com.badlogic.gdx.Game {

	private OrthographicCameraWithVirtualViewport camera;
	
	// extra stuff for the example
	private SpriteBatch spriteBatch;
	private Sprite minimumAreaSprite;
	private Sprite maximumAreaSprite;
	private Sprite floatingButtonSprite;
	private BitmapFont font;

	private MultipleVirtualViewportBuilder multipleVirtualViewportBuilder;

	@Override
	public void create() {
		multipleVirtualViewportBuilder = new MultipleVirtualViewportBuilder(800, 480, 854, 600);
		VirtualViewport virtualViewport = multipleVirtualViewportBuilder.getVirtualViewport(Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
		
		camera = new OrthographicCameraWithVirtualViewport(virtualViewport);
		// centers the camera at 0, 0 (the center of the virtual viewport)
		camera.position.set(0f, 0f, 0f);
		
		// extra code
		spriteBatch = new SpriteBatch();
		
		Pixmap pixmap = new Pixmap(64, 64, Format.RGBA8888);
		pixmap.setColor(Color.WHITE);
		pixmap.fillRectangle(0, 0, 64, 64);
		
		minimumAreaSprite = new Sprite(new Texture(pixmap));
		minimumAreaSprite.setPosition(-400, -240);
		minimumAreaSprite.setSize(800, 480);
		minimumAreaSprite.setColor(0f, 1f, 0f, 1f);
		
		maximumAreaSprite = new Sprite(new Texture(pixmap));
		maximumAreaSprite.setPosition(-427, -300);
		maximumAreaSprite.setSize(854, 600);
		maximumAreaSprite.setColor(1f, 1f, 0f, 1f);
		
		floatingButtonSprite = new Sprite(new Texture(pixmap));
		floatingButtonSprite.setPosition(virtualViewport.getVirtualWidth() * 0.5f - 80, virtualViewport.getVirtualHeight() * 0.5f - 80);
		floatingButtonSprite.setSize(64, 64);
		floatingButtonSprite.setColor(1f, 1f, 1f, 1f);
		
		font = new BitmapFont();
		font.setColor(Color.BLACK);
	}
	
	@Override
	public void resize(int width, int height) {
		super.resize(width, height);
		
		VirtualViewport virtualViewport = multipleVirtualViewportBuilder.getVirtualViewport(Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
		camera.setVirtualViewport(virtualViewport);
		
		camera.updateViewport();
		// centers the camera at 0, 0 (the center of the virtual viewport)
		camera.position.set(0f, 0f, 0f);
		
		// relocate floating stuff
		floatingButtonSprite.setPosition(virtualViewport.getVirtualWidth() * 0.5f - 80, virtualViewport.getVirtualHeight() * 0.5f - 80);
	}
	
	@Override
	public void render() {
		super.render();
		Gdx.gl.glClearColor(1f, 0f, 0f, 1f);
		Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
		camera.update();
		
		// render stuff...
		spriteBatch.setProjectionMatrix(camera.combined);
		spriteBatch.begin();
		maximumAreaSprite.draw(spriteBatch);
		minimumAreaSprite.draw(spriteBatch);
		floatingButtonSprite.draw(spriteBatch);
		font.draw(spriteBatch, String.format("%1$sx%2$s", Gdx.graphics.getWidth(), Gdx.graphics.getHeight()), -20, 0);
		spriteBatch.end();
	}

	public static void main(String[] args) {
		LwjglApplicationConfiguration config = new LwjglApplicationConfiguration();

		config.title = VirtualViewportExampleMain.class.getName();
		config.width = 800;
		config.height = 480;
		config.fullscreen = false;
		config.useGL20 = true;
		config.useCPUSynch = true;
		config.forceExit = true;
		config.vSyncEnabled = true;

		new LwjglApplication(new VirtualViewportExampleMain(), config);
	}

}

In the example there are three colors: green represents the minimum supported area, yellow the maximum supported area and red the area outside. If we see red, it means that the aspect ratio is not supported. There is also a white floating element which is always relocated to the top right corner of the screen, unless we are on an unsupported aspect ratio, in which case it is located in the top right corner of the green area.

The next video shows the example in action:

UPDATE: you can download the source code to run on Eclipse from here.

Conclusion

In these two blog posts we explained, in a simplified way, how we managed to support different aspect ratios and resolutions for Clash of the Olympians. The technique could be used as an acceptable way of handling different screen sizes for a wide range of games, and it is not hard to use.

As always, we hope you liked it and that it is useful when developing your games. Opinions and suggestions are always welcome in the comments :) and feel free to share it if you think other people could benefit from this code.

Thanks for reading.


Our solution to handle multiple screen sizes in Android - Part one

January 22nd, 2013

Developing games for multiple devices is not an easy task. Given the variety of devices, one of the most common problems is having to handle multiple screen sizes, which means different resolutions and aspect ratios.

In this blog post we want to share what we did to minimize this problem when making Ironhide's Clash of the Olympians for Android.

In the next sections we are going to show some common ways of handling the multiple screens problem and then our way.

Stretching the content

One common approach when developing a game is making it for a fixed resolution, for example 800x480.

Based on that, you could have the following layout in one of your game's screens:


Main screen of Clash of the Olympians in a 800x480 device.

Then, to support other screen sizes, the idea is to stretch the content to the other device's screen:


Main screen on a 800x600 device, stretched from 800x480.

The main problem is that the aspect ratio is affected and that is visually unacceptable.

Stretching + keeping aspect ratio

To solve part of the previous problem, one common technique is stretching but keeping the correct aspect ratio by adding dead space to the borders of the screen so the real game area aspect ratio is the same on different devices. For example:


Main screen in a 800x600 device with borders.


Main screen in a 854x480 device with borders.

This is an easy way to attack the multiple screen size problem; you can even create some nice borders instead of the black borders shown in the previous image to improve how it looks.

However, in some cases this is not acceptable either since it doesn't look so good or it feels like the game wasn't made for that device.

Our solution: Using a Virtual Viewport

Our approach consists of adapting what is shown in the game screen area to the device screen size.

First, we define a range of aspect ratios we want to support. For example, in the case of Clash of the Olympians we defined 4:3 (800x600) and 16:9 (854x480) as our border case aspect ratios, so all aspect ratios between those two should be supported.

Given those two aspect ratios, we defined our maximum area as 854x600 and our minimum area as 800x480 (the union and intersection between 800x600 and 854x480, respectively). The idea is to cover the maximum area with stuff, but the important stuff (buttons, information, etc.) should always be included in the minimum area.
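
In code, those two areas are simply the component-wise maximum and minimum of the border case resolutions; a minimal sketch:

	// border case aspect ratios: 4:3 (800x600) and 16:9 (854x480)
	float maxWidth = Math.max(800, 854);  // 854
	float maxHeight = Math.max(600, 480); // 600 -> maximum area 854x600 (union)
	float minWidth = Math.min(800, 854);  // 800
	float minHeight = Math.min(600, 480); // 480 -> minimum area 800x480 (intersection)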


The red rectangle shows the minimum area while the blue rectangle shows the maximum area.

Then, given a device resolution we calculate an area that matches the device aspect ratio and is included in the virtual area. For example, given a device with a resolution of 816x544 (4:3), this is what is shown:


The green rectangle shows the matching area for 816x544.


This is how the main screen is shown in a 816x544 device.

If the device resolution is bigger than the maximum area or smaller than the minimum area we defined, for example a screen of 480x320 (3:2), what we do is calculate the aspect ratio and find a corresponding match for it within the area we defined. In this example, one match could be 800x534, since it has a 3:2 aspect ratio and it is inside our virtual area. Then we scale down to fit the screen.


The green rectangle shows the calculated area for a resolution of 800x534 (matching the aspect of the 480x320 device).


This is what is shown of the main screen in a 480x320 device (click to enlarge the image).

Floating elements

For some elements of the game, such as buttons, maintaining their fixed world position for different screen sizes doesn't look good, so what we do is make them floating elements. That means they are always at the same screen position; the next images show an example with the main screen buttons:


Main screen's buttons distribution for a 854x480 device.


Main screen's buttons distribution for a 800x600 device. As you can see, buttons are relocated to match the screen size.

Finally, we want to show a video of this multiple screen sizes auto adjustment in real time:


Adjusting the game to the screen size in real time.

Some limitations

As we are scaling up or down in some cases to match the corresponding screen, some blur could be perceived on some devices, since we are using linear filtering and the final positions of the elements after the camera transformations may not be integer positions. This problem is minimized with higher density devices and assets.

Layouts could change between different devices; for example, the layout for a phone could be different from the layout for a tablet device.

Text is a special case: when rendering text, just downscaling it is not a correct solution since it could become unreadable. You may have to re-layout text for lower resolution devices to show it bigger and keep it readable.

Conclusion

If you design your game screens following this approach, it is not so hard to support multiple screen sizes in an acceptable way. However, there are still a lot of details to take care of, like the problems we talked about in the previous section.

In the next part of this blog post we will show some code based on LibGDX for those interested in how we implemented all this.

Thanks for reading and hope you liked it.


Clash of the Olympians for Android

January 5th, 2013

For approximately the last eight months we have been working with Ironhide Game Studio on an Android port of their game Clash of the Olympians, originally made for Flash. We are happy to announce that it was released on Google Play on December 6th.

It is not a direct port since it has new features like bonuses for making combos during the game, new enemy behaviors, a hero room to see your score when you finish the game and multiple save slots. Also, the game mechanics changed a bit since they were adapted to touch devices and the game was rebalanced to match the new controls.

If you haven't already, go and get it on Google Play:

https://play.google.com/store/apps/details?id=com.ironhide.games.clashoftheolympians

QR code:

Hope you enjoy it.


Decoupling game logic from input handling logic

August 23rd, 2012

In this post we want to share how we are decoupling our game logic from the input handling as we explained briefly in a previous post about different controls we tested for Super Flying Thing.

Introduction

There are different ways to handle the input in a game. Basically, you could have a framework that provides a way to define event handlers for each input event, or to poll input values from the API. LibGDX provides both, so it is up to you to decide what is best for your game. We prefer to poll for input values for the game logic itself.

When starting to make games, you may feel tempted to add the input handling logic to one or more base concepts of your game; for example, if you were making Angry Birds you would probably add it to the Slingshot class to detect when to fire a bird. That is not totally bad if you are making a quick prototype, but it is not recommended in the long term because it makes it harder to add or switch between different control implementations.

Abstracting the input

To improve this scenario a bit in our games, we are using an intermediary class named Controller. That class provides values that are friendlier and related to the game concepts. A possible Controller class for our example could be:

class SlingshotController {
	boolean charging;
	Vector2 direction;
}

Now, we could process the input handling in one part of the code and update a common Controller instance shared between it and the game logic. Something like this:

class SlingshotMouseControllerLogic extends InputListener {

	Slingshot slingshot;
	SlingshotController controller;

	public boolean touchDown (InputEvent event, float x, float y, int pointer, int button) {
		Vector2 slingshotPosition = slingshot.getPosition();
		controller.charging = slingshotPosition.isNear(x, y);
		return controller.charging;
	}

	public void touchUp (InputEvent event, float x, float y, int pointer, int button) {
		if (!controller.charging) 
			return;
		controller.charging = false;
	}

	public void touchDragged (InputEvent event, float x, float y, int pointer) {
		if (!controller.charging) 
			return;
		Vector2 slingshotPosition = slingshot.getPosition();
		controller.direction.set(slingshotPosition);
		controller.direction.sub(x, y);
	}
}

Or if you are polling the input:

class SlingshotMouseControllerLogic implements Updateable {

	Slingshot slingshot;
	SlingshotController controller;

	boolean touchWasPressed = false;

	public void update(float delta) {
		boolean currentTouchPressed = Gdx.input.isTouched();

		Vector2 slingshotPosition = slingshot.getPosition();
		float x = Gdx.input.getX();
		float y = Gdx.input.getY();

		if (!touchWasPressed && currentTouchPressed) {
			controller.charging = slingshotPosition.isNear(x, y);
			touchWasPressed = true;
		} 

		if (touchWasPressed && !currentTouchPressed) {
			controller.charging = false;
			touchWasPressed = false;
		}

		if (!controller.charging)
			return;

		controller.direction.set(slingshotPosition);
		controller.direction.sub(x,y);	
	}
	
}

Now, the Slingshot implementation will look something like this:

class Slingshot { 

	// multiple fields
	// render logic 

	SlingshotController controller;
	boolean wasCharging = false;

	void update() {
		if (controller.charging && !wasCharging) {
			// starts to draw stuff based on the state we are now charging...
			
			// charges a bird in the slingshot.

			wasCharging = true;
		} else if (!controller.charging && wasCharging) {
			// stops drawing the slingshot as charging

			// fires the bird!!
	
			wasCharging = false;
		}
		// more stuff...
	}
}

As you can see, the game concept Slingshot doesn't know anything about input anymore, and we could switch to the keyboard, an Xbox 360 controller, etc., and the game logic would not notice the change.
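
For example, a minimal sketch of a keyboard-based controller logic that updates the same SlingshotController (the key bindings and rotation speed are just assumptions for illustration):

class SlingshotKeyboardControllerLogic implements Updateable {

	SlingshotController controller;

	public void update(float delta) {
		// charge while SPACE is held, firing happens when it is released
		controller.charging = Gdx.input.isKeyPressed(Keys.SPACE);

		// aim with the arrow keys
		if (Gdx.input.isKeyPressed(Keys.LEFT))
			controller.direction.rotate(90f * delta);
		if (Gdx.input.isKeyPressed(Keys.RIGHT))
			controller.direction.rotate(-90f * delta);
	}
}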

Conclusion

Decoupling your game logic from the input by abstracting it into a class is a good way to keep your game logic depending mainly on game concepts, making it easier to understand and improving its design. It is also a good way to create several controls for the game (input, AI, network, recorded input, etc.) while the game logic doesn't even notice the change.

This post covers a really simple concept, abstraction. It is nothing new, and probably most game developers are already doing this, but we wanted to share it anyway; maybe it is helpful for someone.

We tried to use simple and direct code in this post to make it easier to understand. However, in our games, since we use an entity system framework, we do it a bit differently, using components, scripts and systems instead of direct classes for concepts like the Slingshot class presented in this post, but that's food for another blog post.

Finally, we use another abstraction layer over the framework's input handling which provides a better, simplified API to poll information from; that's why we prefer to poll values. Food for another post as well.

Hope you like it, as always.


Drawing a projectile trajectory like Angry Birds using LibGDX

July 3rd, 2012

We had to implement a projectile trajectory like Angry Birds for our current game and we wanted to share a bit how we did it.

Introduction

In Angry Birds, the trajectory is drawn after you fire a bird, showing its path to help you decide the next shot. Knowing the trajectory of the current projectile wasn't really needed in that version of the game since you have the slingshot, and that tells you, in part, where the current bird is going.

In Angry Birds Space, they changed to showing the trajectory of the current bird because they changed the game mechanics: birds now fly differently depending on the gravity of the planets, so the slingshot doesn't tell you the real direction anymore. That was the correct change to help the player with the new rules.

We wanted to test how drawing a trajectory, like Angry Birds Space does for the next shot, could help the player.

Calculating the trajectory

The first step is to calculate the function f(t) for the projectile trajectory. In our case, projectiles have a normal behavior (there are no mini planets), so the formula is the standard one for projectile motion:

x(t) = x0 + v0x * t
y(t) = y0 + v0y * t + (1/2) * g * t^2

We found an implementation of the equation on Stack Overflow; here is the code:

class ProjectileEquation {

	public float gravity;
	public Vector2 startVelocity = new Vector2();
	public Vector2 startPoint = new Vector2();

	public float getX(float t) {
		return startVelocity.x * t + startPoint.x;
	}

	public float getY(float t) {
		return 0.5f * gravity * t * t + startVelocity.y * t + startPoint.y;
	}

}

With that class we have an easy way to calculate x and y coordinates given the time.
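
For example, a small usage sketch (the gravity and velocity values are just illustrative):

	ProjectileEquation equation = new ProjectileEquation();
	equation.gravity = -10f;
	equation.startPoint.set(0f, 0f);
	equation.startVelocity.set(20f, 20f);

	// position of the projectile 1.5 seconds after launch
	float x = equation.getX(1.5f);
	float y = equation.getY(1.5f);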

Drawing it to the screen

If we follow a similar approach to Angry Birds, we can draw colored points for the projectile trajectory.

In our case, we created a LibGDX Actor dedicated to drawing the trajectory of the projectile. It first calculates the trajectory using the previous class and then renders it by drawing a Sprite at each point of the trajectory using the SpriteBatch's draw method. Here is the code:

public static class Controller  {
	
	public float power = 50f;
	public float angle = 0f;
	
}

public static class TrajectoryActor extends Actor {

	private Controller controller;
	private ProjectileEquation projectileEquation;
	private Sprite trajectorySprite;

	public int trajectoryPointCount = 30;
	public float timeSeparation = 1f;

	public TrajectoryActor(Controller controller, float gravity, Sprite trajectorySprite) {
		this.controller = controller;
		this.trajectorySprite = trajectorySprite;
		this.projectileEquation = new ProjectileEquation();
		this.projectileEquation.gravity = gravity;
	}

	@Override
	public void act(float delta) {
		super.act(delta);
		projectileEquation.startVelocity.set(controller.power, 0f);
		projectileEquation.startVelocity.rotate(controller.angle);
	}

	@Override
	public void draw(SpriteBatch batch, float parentAlpha) {
		float t = 0f;
		float width = this.width;
		float height = this.height;

		float timeSeparation = this.timeSeparation;
		
		for (int i = 0; i < trajectoryPointCount; i++) {
			float x = this.x + projectileEquation.getX(t);
			float y = this.y + projectileEquation.getY(t);

			batch.setColor(this.color);
			batch.draw(trajectorySprite, x, y, width, height);

			t += timeSeparation;
		}
	}

	@Override
	public Actor hit(float x, float y) {
		return null;
	}

}

The idea of using the Controller class is to be able to modify the values from outside the actor by using a class shared between different parts of the code.

Further improvements

To make it look nicer, one possible addition is to decrease the size of the trajectory points and to reduce their opacity.

In order to do that, we draw each point of the trajectory with a bit less alpha in the color and a bit smaller, by changing the width and height passed to spriteBatch.draw(), as sketched below.
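
A possible sketch of the modified loop inside draw(), where the reduction factors are just illustrative values:

	float alpha = 1f;
	float pointSize = width;

	for (int i = 0; i < trajectoryPointCount; i++) {
		float x = this.x + projectileEquation.getX(t);
		float y = this.y + projectileEquation.getY(t);

		batch.setColor(color.r, color.g, color.b, alpha);
		batch.draw(trajectorySprite, x, y, pointSize, pointSize);

		alpha *= 0.95f;     // each point a bit more transparent than the previous one
		pointSize *= 0.97f; // and a bit smaller
		t += timeSeparation;
	}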

We also added a fade in transition to show the trajectory instead of making it instantly appear, and that works great too, but that is in the game.

Another possible improvement, though it depends on the game you are making, is to separate the points using a fixed distance. In order to do that, we have to depend on x and not on t. So we added a method to the ProjectileEquation class that, given a fixed horizontal distance, returns the corresponding t in order to maintain that distance between points; here is the code:

	public float getTForGivenX(float x) {
		return (x - startPoint.x) / startVelocity.x;
	}

Now we can change the draw method of the TrajectoryActor to compute the time separation from that fixed distance, before starting to draw the points:

	float fixedHorizontalDistance = 10f;
	timeSeparation = projectileEquation.getTForGivenX(fixedHorizontalDistance);

I am not sure which is the best option between using x or t as the main variable; as I said before, I suppose it depends on the game you are making.

Here is a video showing the results:

If you want to see it working you can test the webstart of the prototypes project, or you can go to the code and see the dirty stuff.

Conclusion

Drawing a trajectory is not hard if you know the correct formula, and it looks nice. It could also be used to help the players, maybe as part of the basic gameplay or maybe as a powerup.

Hope you like it.


Android and Desktop games internationalization using Java and LibGDX

May 23rd, 2012

Recently, we had to add multiple language support for a game we are developing. As you may know, Java provides classes to simplify the task of making your application available in multiple languages. In this post we want to share a bit of our experience using Java localization classes in a LibGDX application to provide multiple language support for both Android and desktop platforms.

Quick introduction

Java provides a class named ResourceBundle which provides a way to store resources (mainly strings) for a given locale, so you can ask for a string identified by a key and it will return the text depending on the current locale. You can read the article Java Internationalization: Localization with ResourceBundles if you want to know more about how to use Java classes for internationalization. The rest of the post assumes you know something about the Locale and ResourceBundle classes.

Why we don't use Android resources

Android also provides a way to support multiple locale resources, but it depends on the Android API, so we prefer to use the Java API instead, which should work on all platforms.

Our experience when using Java internationalization on Android

When letting ResourceBundle automatically load resource bundles from properties files, Java expects them to be in ISO-8859-1 encoding. However, it seems Android behaves differently and expects another encoding by default. So, when resource bundles are automatically loaded in Android from an ISO-8859-1 properties file with special characters, they are loaded incorrectly.

The first try

The first solution we tried was to call the ResourceBundle.getBundle() method with a custom Control implementation which creates PropertyResourceBundles using an InputStreamReader with the correct encoding. Here is a code example:

public class EncodingControl extends Control {

	String encoding;

	public EncodingControl(String encoding) {
		this.encoding = encoding;
	}

	@Override
	public ResourceBundle newBundle(String baseName, Locale locale, String format, ClassLoader loader, boolean reload) 
			throws IllegalAccessException, InstantiationException, IOException {
		String bundleName = toBundleName(baseName, locale);
		String resourceName = toResourceName(bundleName, "properties");
		ResourceBundle bundle = null;
		InputStream inputStream = null;
		try {
			inputStream = loader.getResourceAsStream(resourceName);
			bundle = new PropertyResourceBundle(new InputStreamReader(inputStream, encoding));
		} finally {
			if (inputStream != null)
				inputStream.close();
		}
		return bundle;
	}
}
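
With that Control in place, a bundle can then be loaded like this:

	ResourceBundle bundle = ResourceBundle.getBundle("messages", 
		new EncodingControl("ISO-8859-1"));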

After that, we customized the Control class to work with LibGDX FileHandle in order to place the properties files in the assets folder. Here is the final code for our Control implementation:

public class GdxFileControl extends Control {

	private String encoding;
	private FileType fileType;

	public GdxFileControl(String encoding, FileType fileType) {
		this.encoding = encoding;
		this.fileType = fileType;
	}
	
	public ResourceBundle newBundle(String baseName, Locale locale, String format, ClassLoader loader, boolean reload) 
			throws IllegalAccessException, InstantiationException, IOException {
		// The below is a copy of the default implementation.
		String bundleName = toBundleName(baseName, locale);
		String resourceName = toResourceName(bundleName, "properties");
		ResourceBundle bundle = null;
		FileHandle fileHandle = Gdx.files.getFileHandle(resourceName, fileType);
		if (fileHandle.exists()) {
			InputStream stream = null;
			try {
				stream = fileHandle.read();
				// Only this line is changed, to make it read the properties file with the given encoding.
				bundle = new PropertyResourceBundle(new InputStreamReader(stream, encoding));
			} finally {
				if (stream != null)
					stream.close();
			}
		}
		return bundle;
	}
}

And that can be called in this way:

	ResourceBundle.getBundle("messages", 
		new GdxFileControl("ISO-8859-1", FileType.Internal))

That worked really well until we discovered that the Android API sucks and doesn't support ResourceBundle.Control before API level 9, which means our solution only works for users with Android 2.3+. That's a problem since we want to support 2.0+, so we had to think of another way to solve this.

The second try

After some tests, we discovered that if we construct a PropertyResourceBundle using an InputStream, the expected encoding is ISO-8859-1 for both desktop and Android. That means that, if we use that specific PropertyResourceBundle constructor, we don't have to force the encoding. So, the new solution consists of building a PropertyResourceBundle for each locale and configuring the hierarchy ourselves by setting their parent ResourceBundle. Here is an example of what we do now:

	FileHandle rootFileHandle = Gdx.files.internal("data/messages.properties");
	FileHandle spanishFileHandle = Gdx.files.internal("data/messages_es.properties");
	ResourceBundle rootResourceBundle = new PropertyResourceBundle(rootFileHandle.read());
	ResourceBundle spanishResourceBundle = new PropertyResourceBundle(spanishFileHandle.read()) {{
		setParent(rootResourceBundle);
	}};

After that we created a map of ResourceBundles for each Locale we support, so we can call something like:

	ResourceBundle resourceBundle = getResourceBundle(new Locale("es"));
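
A minimal sketch of how that per-locale map could be built and queried (the helper is ours for illustration, not the actual API of the project):

	Map<Locale, ResourceBundle> resourceBundles = new HashMap<Locale, ResourceBundle>();
	resourceBundles.put(Locale.ROOT, rootResourceBundle);
	resourceBundles.put(new Locale("es"), spanishResourceBundle);

	ResourceBundle getResourceBundle(Locale locale) {
		ResourceBundle resourceBundle = resourceBundles.get(locale);
		return resourceBundle != null ? resourceBundle : resourceBundles.get(Locale.ROOT);
	}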

The good part is that this solution works well for both Android and desktop regardless of the Android API level (PropertyResourceBundle seems to be supported from API level 1). The bad part is that we lost the ResourceBundle logic that automatically builds the hierarchy of resources, and we have to do that manually now.

UPDATE: The class we use for this is available in our commons-gdx project, in the resources module, under the name ResourceBundleResourceBuilder.

Conclusion

Supporting multiple languages in an application is a way to tell users from all around the world that you care about them. Translating text to several languages is not cheap at all; however, Java provides a good framework to simplify the job if you decide to support internationalization.

And as a side conclusion: never assume all the Java classes you are using are implemented for the minimum Android API you are targeting.


Area triggers using Box2D, Artemis and SVG paths

March 22nd, 2012

As we explained in previous posts, we are using Inkscape to design the levels of some of our games, in particular, our current project. In this post we want to share how we are making area triggers using Box2D sensor bodies, Artemis and SVG paths.

What is an area trigger

When we say area trigger, we mean something (an event, for example) that should be triggered when an entity/game object enters an area, in order to perform custom logic such as ending the game or showing a message. Some game engines provide this kind of thing, for example Unity3d with its Collider class and events like OnTriggerEnter.

Building an area trigger in Inkscape

Basically, we use SVG paths with custom XML data to define the area trigger, which is later parsed by the game level loader to create the corresponding game entities. The following screenshot shows an example of an area defined using Inkscape:

Right now, we export two values with the SVG path: the event we want to fire, identified by the XML attribute named eventId, and extra data for that event, identified by the XML attribute eventData. For example, for our current game we use the eventId showTutorial with the text we want to show to the player in the eventData attribute, like "Welcome to the training grounds". The following example shows the XML data added to the SVG path:
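
The snippet below is a sketch: the eventId and eventData attributes are taken from the description above, while the path data and the exact attribute syntax are placeholders.

	<path
		d="m 10,10 200,0 0,100 -200,0 z"
		eventId="showTutorial"
		eventData="Welcome to the training grounds" />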

  

The exported data may depend on your framework or game, so you should export whatever data you need instead.

Defining the area trigger inside the game

Inside the game, we have to define an entity/game object for the area trigger. In the case of our current game, that entity is composed of a Box2D sensor body, with a shape built from the SVG path, and a Script with the logic to perform when the main character collides with it.

We use sensor bodies because they detect collisions but don't react to them by changing their angular and linear velocities. As we explained in a previous post, we are using our custom builders to help when building Box2D bodies and fixtures. Our current body declaration looks like this:

  
Body body = bodyBuilder //
	.fixture(bodyBuilder.fixtureDefBuilder() //
		.polygonShape(vertices) // the vertices from the SVG path
		.categoryBits(Collisions.Triggers) // the collision category of this body
		.maskBits(Collisions.MainCharacter) // the collision mask
		.sensor() //
	) //
	.position(0f, 0f) //
	.type(BodyType.StaticBody) //
	.angle(0f) //
	.userData(entity) //
	.build();

The previous code depends on specific stuff of the current game but it could be modified to be reused in other projects.

As we explained in another previous post, we are using a basic scripting framework over Artemis. Our current script to detect the collision looks like this:

 
public static class TriggerWhenShipOverScript extends ScriptJavaImpl {
	
	private final String eventId;
	private final String eventData;
	
	EventManager eventManager;

	public TriggerWhenShipOverScript(String eventId, String eventData) {
		this.eventId = eventId;
		this.eventData = eventData;
	}

	@Override
	public void update(World world, Entity e) {
		PhysicsComponent physicsComponent = Components.getPhysicsComponent(e);
		Contacts contacts = physicsComponent.getContact();
		
		if (contacts.isInContact()) {
			eventManager.submit(eventId, eventData);
			e.delete();
		}
	}
}

For the current game, we are testing this as a way to communicate with the player by showing messages from time to time, for example in a basic tutorial implementation. The next video shows an example of this working inside the game:

Conclusion

The idea of the post is to share a common technique for triggering events when a game object enters an area, which is not framework dependent. You could use the same technique with your own framework instead of Box2D and Artemis, a custom level file format instead of SVG, and the editor of your choice instead of Inkscape.



Building 2d animations using Inkscape and Synfig

March 16th, 2012

In this blog post we want to share a method to animate Inkscape SVG objects using Synfig Studio, trying to follow a similar approach to the Building 2d sprites from 3d models using Blender blog post.

A small introduction about Inkscape

Inkscape is one of the best open source, multi platform and free tools to work with vector graphics using the open standard SVG.

After some time using Inkscape, I have learned how to do a lot of things and feel comfortable using it. However, it lacks some features which would make it an even greater tool, for example, a way to animate objects by interpolating between different states, defining key frames and using a timeline, among others.

It has some ways to create interpolations of objects between two different states, but they are hard to use since they don't work with groups: if you have a complex object made of a group of several other objects, you have to interpolate all of them. And if you modify one of the key frames, you have to interpolate everything again.

Synfig comes into action

Synfig Studio is a free and open-source 2D animation tool which works with vector graphics as well. It lets you create nice animations using a timeline and key frames and easily export them. However, it uses its own format, so you can't directly import an SVG. Luckily, the format is open and there are already some ways to convert from SVG to Synfig.

In particular I tried an Inkscape extension named svg2sif which lets you save files in Synfig format, and it seems to work fine (the extension's page explains how to install it). I don't know the possible limitations of the svg2sif extension, so use it with caution and don't expect everything to work.

Now that we have the method defined, we will explain it by showing an example.

Creating an object in Inkscape

We start by creating an Inkscape object to be animated later. For this mini tutorial I created a black creature named Bor...ahem! Gishus Maximus:

Modelling Gishus Maximus using Inkscape

Here is the SVG if you are interested in it; sadly, WordPress doesn't support SVG files as media files.

With the model defined, we have to save it in Synfig format using the extension: go to "Save a Copy...", select the .sif format (added by the svg2sif extension), and save it.

Animating the object in Synfig

Now that we have the Synfig file, we open it and voilà, we can animate it. However, there is a bug, probably in the svg2sif extension, and the timeline is missing. To fix it, we have to create a new document and copy the shape from the one exported by Inkscape into the new one.

The next step is to use your super animation skill and animate the object. In my case I created some kind of eating animation by making a mouth, opening it slowly and then closing it fast:

Animating Gishus Maximus using Synfig

Here is the Synfig file with the animation if you are interested on it.

To export it, use the "Show the Render Settings Dialog" button to configure how many frames per second you want, among other things, and then export it using the Render button. You can export to different formats, for example a list of separate PNG files (one for each animation frame) or an animated GIF. However, you can't configure some of the formats and the exported file wasn't what I wanted, so I preferred to export a list of PNG files and then use the convert tool to create the animated GIF:

Finally, I have a time lapse of how I applied the method if you want to watch it:

Extra section: Importing the animation in your game

After we have the separate PNG files for the animation, we can create a sprite sheet or use other tools to create files that can be easily imported by a game framework. For this example, I used a Gimp plug-in named Sprite Tape to import all the separate PNG files and create a sprite sheet:

If you are a LibGDX user and want to use the Texture Packer, you can create a folder and copy the PNG files, changing their names to animationname_01, animationname_02, etc., and let Texture Packer import them automatically.

Conclusions

One problem with this method is that you can't easily modify your objects in Inkscape, automatically import them into Synfig and update the current animation to work with them. So, once you have moved to Synfig, you have to stay there to avoid a lot of duplicated work. This could be avoided if Inkscape provided a good animation extension.

Synfig Studio is a great tool but not the best, of course; it is not intuitive (like Gimp, Blender and others) and it has some bugs that make it crash without reason. On the other hand, it is open source, free and multi-platform, and the best part is that it works well for what we need right now 😉

This method allows us to animate vector graphics, which is great since it is a way for programmers like us to animate their programmer art 😀

Finally, I am not an animation expert at all, so this blog post could be based on some wrong assumptions. So, if you are one, feel free to correct me and share your opinions.

As always, hope you like the post.


Implementing transitions between screens

March 5th, 2012

Using transitions between game screens is a great way to provide smoothness between screen changes, for example, fading out one screen and then fading in the next one. The next video shows an example of those effects in our Vampire Runner game.

In this post, we will show a possible implementation of transitions between screens using LibGDX; however, the code should be independent enough to be easily ported to other frameworks.

Although we implemented it using our own concept of GameState, we will use the LibGDX Screen concept in this post to make it easier to understand.

Implementation

The implementation is based on the concept of TransitionEffect. A TransitionEffect holds the render logic of one of the effects of the transition being performed.

class TransitionEffect {

	// returns a value between 0 and 1 representing the level of completion of the transition.
	protected float getAlpha() { .. }

	void update(float delta) { .. } 

	void render(Screen current, Screen next);

	boolean isFinished() { .. }

	TransitionEffect(float duration) { .. }
}
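
A minimal sketch of how the elided parts of TransitionEffect could be implemented, assuming getAlpha() simply reports the elapsed time over the configured duration:

	// inside TransitionEffect, a possible implementation of the elided parts
	float duration;
	float time;

	TransitionEffect(float duration) {
		this.duration = duration;
	}

	protected float getAlpha() {
		return Math.min(time / duration, 1f);
	}

	void update(float delta) {
		time += delta;
	}

	boolean isFinished() {
		return time >= duration;
	}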

An implementation example of a TransitionEffect is a FadeOutTransitionEffect to perform a fade out effect:

class FadeOutTransitionEffect extends TransitionEffect {

	Color color = new Color();

	@Override
	public void render(Screen current, Screen next) {
		current.render();
		color.set(0f, 0f, 0f, getAlpha());
		// draw a quad over the screen using the color
	}

}
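
A FadeInTransitionEffect (used later in the usage example) can be implemented analogously: it renders the next screen and fades the dark quad out as the transition completes. A sketch following the same convention as above:

class FadeInTransitionEffect extends TransitionEffect {

	Color color = new Color();

	FadeInTransitionEffect(float duration) {
		super(duration);
	}

	@Override
	public void render(Screen current, Screen next) {
		next.render();
		color.set(0f, 0f, 0f, 1f - getAlpha());
		// draw a quad over the screen using the color
	}

}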

Then, in order to perform a transition between Screens, we need a custom Screen with the logic to update and render each transition effect and to set the next Screen when the transition is over. This is a possible implementation:

class TransitionScreen implements Screen {
	Game game;

	Screen current;
	Screen next;

	int currentTransitionEffect;
	ArrayList<TransitionEffect> transitionEffects;

	TransitionScreen(Game game, Screen current, Screen next, ArrayList<TransitionEffect> transitionEffects) {
		this.current = current;
		this.next = next;
		this.transitionEffects = transitionEffects;
		this.currentTransitionEffect = 0;
		this.game = game;
	}

	void render() {
		if (currentTransitionEffect >= transitionEffects.size()) {
			game.setScreen(next);
			return;
		}

		transitionEffects.get(currentTransitionEffect).update(getDelta());
		transitionEffects.get(currentTransitionEffect).render(current, next);

		if (transitionEffects.get(currentTransitionEffect).isFinished())
			currentTransitionEffect++;
	}
}

Finally, each time we want to perform a transition between two screens, we have to create a new TransitionScreen with the current and next Screens and a collection of effects we want. For example:

	Screen current = game.getScreen();
	Screen next = new HighscoresScreen();

	ArrayList<TransitionEffect> effects = new ArrayList<TransitionEffect>();

	effects.add(new FadeOutTransitionEffect(1f));
	effects.add(new FadeInTransitionEffect(1f));

	Screen transitionScreen = new TransitionScreen(game, current, next, effects);

	game.setScreen(transitionScreen);

As we mentioned before, we use our own concepts in our implementation. If you want to see our code, take a look at the classes ApplicationListenerGameStateBasedImpl, GameState and GameStateTransitionImpl (do not expect the best code in the world).

Conclusion

Adding transitions between the game screens gives users a feeling of smoothness, and we believe it is worth the effort.

Also, we like that the current design lets you implement different effects for the transitions; we only showed fade out and fade in as examples because they are really simple to implement and they are the only ones we are using in our games.

As always, hope you like the post.


Toasting with LibGDX Scene2D and Animation4j

March 4th, 2012

For our latest Vampire Runner update we changed to using LibGDX scene2d instead of the Android GUI. The main reason for the change is that we wanted to use a common GUI API for Android and PC, and sadly we can't do that using the Android API. With LibGDX scene2d we can code once and run on both platforms.

In particular, the toast feature of the Android API was really interesting to have and we want to share how we implemented it using LibGDX scene2d.

Toasting

A toast is defined as a scene2d Window that shows some text and disappears after a while. This is pseudocode to give an idea of how to create that toast window:

Actor toast(String text, float time, Skin skin) {
	Window window = new Window(skin);
	window.add(new Label(text, skin));
	...
	window.action(new Action() {
		act(float delta) {
			// update the animation
			// if the animation is finished, we remove the window from the stage.
		}
	});
	...
	return window;
}

To animate the toast, we create a TimelineAnimation using animation4j defining that the window should move from outside the screen to inside the screen, wait some time and then go out of the screen again. The code looks like this:

TimelineAnimation toastAnimation = Builders.animation( //
	Builders.timeline() //
		.value(Builders.timelineValue(window, Scene2dConverters.actorPositionTypeConverter) //
			.keyFrame(0f, new float[] { window.x, outsideY }) //
			.keyFrame(1f, new float[] { window.x, insideY }) //
			.keyFrame(4f, new float[] { window.x, insideY }) //
			.keyFrame(5f, new float[] { window.x, outsideY }) //
		) //
	) //
	.started(true) //
	.delay(0f) //
	.speed(5f / time) //
	.build();

That code creates a new animation which modifies the position of the Window each time the update() method is called.

Of course, you can animate the Window using LibGDX custom Actions or another animation framework like Universal Tween Engine, that is up to you.

If you want to see the code itself, you can look at the Actor factory named Actors in our commons-gdx GitHub repository.

In our subclass of Game, we added an empty Stage updated on each render() call, and a toast(String) method which creates a toast as explained before using a default Skin and time.

class MyGame extends Game {

	Stage stage;
	float defaultTime;
	Skin defaultSkin;

	render() {
		// all our game update and render logic
		...
		stage.act(delta);
		stage.draw();
	}

	toast(String text) {
		stage.addActor(Actors.toast(text, defaultTime, defaultSkin));
	}
}

So, if we want to toast about something, we only have to call game.toast("something") and voilà.

To see a running example of this, you can run the Gui.Scene2dToastPrototype from our prototypes webstart (recommended), or watch the next video:

Conclusion

Despite still being a bit incomplete and buggy, the scene2d API is fairly easy to use and it is great if you want to do simple stuff.

Using scene2d is great for our simple GUI needs because we can quickly test everything on PC. In Vampire Runner we are using scene2d for the feedback dialog, the new version available dialog and the change username screen.

An interesting thing to keep in mind when using the scene2d API is that you can make your own Skin to achieve a more integrated look and feel.

As always, we hope you like the post and that it can be of help.
