Basics

In this guide we discuss both the 3D target preparation process and the SDK usage. In particular, we describe how to create a 3D target, how to tune the tracker in order to get optimal results with the actual target, how to deploy the 3D target and how to use it within your application. The guide thus reflects the typical workflow required to create an application that employs the SDK.

Workflow

Although many workflows can exist depending on the actual application you are developing, at least the following steps are required:

  • create a 3D target for a real object/environment
  • test the created 3D target and tune the tracker's parameters
  • develop the application's logic

3D Target Creation, Tuning & Deployment

Whatever the purpose of your 3D tracking application, you must create a 3D target for the objects you want to detect and track. The preparation of a 3D target consists of three major steps:

Target Creation

In order to create a 3D target you will use these tools:

  • the ARMedia 3D Tracker ToolSet mobile app (iOS/Android) More
  • the SDK online platform's tools More

The very first step required to create a 3D target is to capture some images of the real object. The number of images you need to capture mainly depends on the complexity of the object and on its size in relation to the distance the user will keep from the object itself (typically you will take 30-50 images for most common cases, but in many cases you can take as few as 15 images and still get very good results).

When capturing images of an object you should try to have some overlapping zones among captured frames; the overlap can be either on the object itself or on the surrounding environment. Note that very shiny, reflective objects are not suitable for 3D tracking. Objects that can change shape over time cannot be tracked either, because the 3D target is static in nature.

To capture all the required images you use the ARMedia 3D Tracker ToolSet mobile app (here), whose purpose is both to let you take pictures of objects (either manually or automatically) and to test/tune 3D targets.

The supported format for image capture is JPG, 640x480. Using the Capture command, the app will ask you to enter the name of the folder used to save captured frames, and also to choose whether you want to shoot photos manually or use the automatic mode, which takes a picture approximately every 2 seconds (you can customize this interval in the app's settings).

Note that if you use an existing folder name, any frame already in that folder will be overwritten. On iOS the chosen folder will be created in the iTunes shared documents folder of the ARMedia 3D Tracker ToolSet app, while on Android it will be created in the "ARMediaSDKToolset" folder on your device.

When you start using manual mode the interface will change like this:

On the right panel you see, from top to bottom, the current image counter, the shoot button and the resolution of captured frames. To finish, just touch the X button on the left.

When running in automatic mode, the interface will be slightly different as shown below:

The only difference is that when you touch the shoot button (which is now red) the app will start shooting based on a timer, allowing you to move between shots. Moreover, once the app is taking photos, the shoot button will change to show the pause symbol:

Just touch the pause button to temporarily stop capturing frames (this is useful, for instance, if you need to move to another vantage point before taking more pictures).

When you have all the required images you have to upload them to the SDK online platform in the "New 3D Target" section. At the beginning of the online procedure you have to choose the camera calibration data corresponding to the actual device used to capture the object's photos. If your device is not already listed among the available calibration entries, you have to create new calibration data: creating new calibration data is very similar to the process of taking photos of an object, but the object used for calibration purposes is a printed marker set (download it from the link available on the very same webpage - Download). When you have all the images required for calibration (usually take at least 50 images from all angles), you have to upload them in the provided space.

Once you have either uploaded new images for calibration or chosen previously created calibration data, you can proceed with the next step, i.e. the upload of the images captured for the target object (WARNING: it is important to use the correct calibration data corresponding to the same camera used to capture the new frames, otherwise tracking won't work as expected).

Next you must provide a 3D model that will serve as a reference and that will be used to set the scale and orientation of the 3D target. This is useful not only because it will help you place other virtual objects accurately around the real object for augmentation purposes, but also because you can choose to use the provided 3D model to remove the surrounding environment from the pointcloud that will be created. Removing the environment is very useful if you want to track an object that can be placed in different scenarios or that can be handled by the user; on the other hand, you usually won't remove the environment if you are tracking a building or a landscape, even though there can be cases where you will want to do so too. Note that, to get good results, you should not choose to remove the environment if the provided 3D model comes from a CAD model, i.e. if the model has not been generated by an automatic reconstruction process like the one shown below.

So, regarding the 3D model to use as a reference you have two choices:

  • you can provide a CAD model
  • you can provide a 3D model that is the result of a 'scanning' procedure

In the first case, it is very likely that you do not have accurate correspondences between the synthetic geometry of your CAD model and the real object, because real objects (unless they are crafted by machines/processes with a very high degree of precision and they do not deteriorate during their lifetime) tend not to be geometrically precise. Nonetheless, there are cases where CAD models are very suitable as 3D reference models, for instance if you want to track a building (for which a CAD model is usually available) with regular shapes, or in those cases where you can identify some parts of an object that are made of simple shapes easily reproducible using CAD software (indeed it is not mandatory to have the 3D model of the whole object you want to track, but only of a portion of it or even of surrounding objects, because the purpose of the 3D model is mainly to set the scale and orientation of the target pointcloud).

In the second case, you can use many different approaches to reconstruct a 3D model of the object you want to track; nowadays there are plenty of methods available, from expensive laser scanners to cheap and even free software-based solutions. One of the most effective ways to reconstruct a mesh of a real object is provided, for instance, by Autodesk's 123D Catch application: all that is required is a set of photos of the object you want to reconstruct.

By far, using Autodesk 123D Catch is the preferred method when you are reconstructing an object for tracking purposes as well: considering that you already have a set of photos captured using the ARMedia 3D Tracker ToolSet app, you can upload them onto the 123D Catch website too and get back a very accurate 3D mesh of your object, along with a texture representing the real look of the object. (Note that even though you can use the same photos captured by the ToolSet app, if you need a really accurately reconstructed mesh it is advisable to use a high-resolution camera to take high-quality photos meant only for the 3D reconstruction process, not for tracking.)

123D Catch allows you to download the 3D model as an OBJ file with related MTL and JPG files (NOTE: if you want to display textures in the 3D viewer available on the SDK online platform, you should open the MTL file and leave just the texture file name instead of the full or relative path).

Whatever method you choose to create the 3D model, you should convert it to the OBJ format before uploading it to the SDK online platform. When you upload the 3D model you can also upload any accompanying texture (and material library).

If you plan to create a target by removing the surrounding environment, you should clean up the reconstructed mesh (actually you should do so whenever the reconstruction has glitches, even if you want to retain the surroundings).

Note that any texture loaded along with the model is useful only as a visual hint for you when you create 3D-to-2D point correspondences, and it is effective only when it comes from a scan of the real object; it would not be very useful in the case of a CAD model, where you usually apply textures that do not correspond to the real object. In any case, what really matters are the coordinates of the reference object, not the texture on it.

Once you have your 3D model available, upload it in the provided space and proceed with the next step: 2D to 3D point correspondences.

In order to create a pointcloud that has the same scale and orientation as the provided 3D model, you have to set some correspondences between 2D points (from some of the photos you have provided) and 3D points of the uploaded reference model. For good results it is advisable to create correspondences from at least 2 or 3 photos, and for each photo you have to choose at least 4 points (if they lie on a planar surface) or at least 6 points if they are not on a planar surface (fewer points suffice in the planar case because the pose can be recovered from a plane-induced homography, whereas the general case needs more correspondences).

At the beginning of the process you will be asked to confirm the orientation of the 3D model; indeed, some 3D software may use a different reference coordinate system and the model could appear rotated in the 3D viewport. In this case just click the button to change the orientation and confirm to proceed.

The interface for this step will show you a 3D viewport on the left and a 2D viewport on the right: in the latter you will see thumbnails of the uploaded photos; when you click one of the thumbnails, the photo will occupy the entire viewport.

The first thing to do is to have a look at your 3D model: just look around it and try to find some points that you think are reliably accurate (for instance, in the case of a CAD model every 3D point is supposed to be placed in its precise position in space, but in the case of a reconstructed 3D model you will notice that some parts of the 3D mesh are deformed, especially near the boundaries, so you should avoid selecting points on those parts). Also take some time to inspect the photos you have provided: as a general rule you should discard photos that are blurred, then try to identify some very distinctive points; those points will make your life easier when you have to choose a 3D correspondence on the 3D model.

Then you can start double-clicking the 3D model to select some points on it. The 3D point you picked last is highlighted and set as the current 3D point; if you then focus your attention on the 2D photo displayed in the right view, you can double-click the point in the image that corresponds to the selected 3D point. You can use the zoom facility to place 2D points more precisely. To proceed you have to select either:

4+ correspondences in the case of planar points, or

6+ correspondences in the case of non-planar points

When you have completed this process for 2 or 3 photos, you can proceed with the final step, where you will be asked to provide a name for the 3D target and where, finally, you will have the opportunity to enqueue all the data on the server. When the server completes your request it will send you an email, and the created 3D target (pointcloud, photos and configuration file) will be available for download in the "My 3D Targets" section of the SDK online platform.

NOTE: processing a 3D target usually requires a few minutes. While your data is in the processing queue you cannot submit other data; if you need to do so, you have to remove the submitted data first.

Target Tuning

Once your 3D target is ready, download the zip archive. In order to test the tracker with the new target, just copy the content of the archive into the iTunes shared documents folder of the ARMedia 3D Tracker ToolSet app on iOS, or into the "ARMediaSDKToolsetTests" folder of your device on Android. Each 3D target is made of:

  • 1 pointcloud file (.ply)
  • N images (.jpg - the photos you uploaded)
  • 1 configuration file (.xml)

By modifying the content of the configuration file you will tune the tracker for the actual 3D target (see the corresponding section for details). When you have all the above files ready for testing, just launch the ARMedia 3D Tracker ToolSet app and choose the "Tracking" command; a reminder will show up and, if you proceed, the tracker will start.
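Purely as an illustration of what such a file looks like (the element and parameter names below are hypothetical placeholders, not the SDK's actual ones, which are documented in the configuration file section), a tracking configuration is a plain XML document along these lines:

<?xml version="1.0" encoding="UTF-8"?>
<!-- hypothetical sketch: the real element and parameter names are those listed in the configuration file section -->
<tracker_configuration>
	<pointcloud file="my_target.ply"/>
	<parameter name="example_threshold" value="0.5"/>
</tracker_configuration>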

At the beginning the tracker is in its initialization phase, i.e. it is looking for the object you intend to track. If you have a good set of photos, the initialization phase should complete quickly and tracking should start.

A good set of photos means that you have photos representative of the points of view that the application's users are supposed to use, and that each photo has enough 'good' features to detect and track. The app (as well as the SDK) will show you the average number of feature descriptors extracted from the provided photos: the higher this number, the more easily the tracker can initialize.

Once the tracker has initialized you will see features displayed on the real object. The app also allows you to display other information on the real object:

  • the coordinate system axes
  • a custom 3D model (.OBJ, 3DS, FBX…)
  • the reconstructed pointcloud

You can enable or disable each of the above by touching the corresponding buttons on the left of the view.

NOTE: on iOS you can specify the custom 3D model to use by copying it into the iTunes shared documents folder of the app and specifying its file name in the app's settings (from the Settings app) - on Android you must copy the custom 3D model into the "ARMediaSDKToolsetTests" folder of your device and then specify its name in the app's settings.

The downloaded 3D target comes with a default tracking configuration file, but in order to tune the tracker's parameters you can use the app and change some of them using sliders. To do so, touch the settings button (lower right corner) and try different settings (the meaning of each one is described in the configuration file section - here).

When you find a good configuration you can save it by touching the corresponding button.

On iOS the configuration will be saved in the iTunes shared documents folder under the name "custom_config.xml"; on Android it will be saved in the "ARMediaSDKToolsetTests" folder on your device.

At this point you can use the tuned 3D target within your application (as described in the corresponding section).

Target Deployment

You can use any 3D target within your application "as is", i.e. you can copy all related files inside the application bundle or provide them externally.

Another possible approach allows you to avoid deploying the photos used to create the target as well: the SDK can create a cache (details available in the configuration section - here) that can be deployed instead of the photo set.

Application development

For a complete reference of the methods used in this section please refer to the API reference section; here we show how to use the tracker APIs to set up a simple application on all supported platforms. We will show how to set up the development environment, but you can also refer to the available example projects and use them as templates to start developing your own applications.

The basic workflow, provided that you have a 3D target already available, is very simple; in a typical application you always follow these steps:

During application/view setup:

  • setup and start a capturing device
  • create an instance of the 3D tracker
  • configure the 3D tracker by providing the path to the configuration file (available with the 3D target meant to be used) as well as the tracking/capturing resolution
  • initialize the 3D tracker
  • start the 3D tracker

During application/render loop:

  • retrieve captured frame from the camera
  • pass the captured frame to the 3D tracker
  • retrieve the pose of the tracked object (if available)
  • display captured frame and augmentation

How you perform the above steps depends on the operating system you are developing for; below you can find the description of the process for both iOS and Android (developing for Unity calls for a different approach and is described later).

One of the strengths of the ARMedia 3D Tracker SDK is that it is independent of the way your app captures frames and of the way it renders both frames and the virtual content. Indeed, referring to the above steps, you can do virtually whatever you need for step 1 of the application/view setup and for steps 1 and 4 of the application loop.

In the accompanying examples you will see how to capture frames using OpenCV APIs and how to render using the ARMedia Rendering module, which wraps some of the OpenSceneGraph APIs; here we focus only on the tracker's APIs.

iOS

For the development environment setup, please refer to the SDK example XCode projects.

Supported architectures are armv7 and armv7s; the iOS SDK must be version 7 and the deployment target must be 6 or later.

Be sure that "Other C++ Flags" also includes "-mfloat-abi=softfp -mfpu=neon", and that C Language Dialect, C++ Language Dialect and C++ Standard Library are set to compiler-default.

Typically you would have an instance of the 3D tracker inside the ViewController devoted to managing the Augmented Reality experience. In this case, first of all include the SDK header:

#import <ARMedia3DTracker/ARMedia3DTracker.h>

It is also useful to import the ARMedia3DTrackerDelegate header, as you will see below, so let's add this header too:

#import <ARMedia3DTracker/ARMedia3DTrackerDelegate.h>

Declare the instance of the tracker among other attributes of your ViewController, as well as the variable that will hold the tracked pose:

ARMedia3DTracker *armedia3DTracker;
double pose[16];

Then you would typically create the tracker instance in your ViewDidLoad method:

armedia3DTracker = [[ARMedia3DTracker alloc] init];

set the application key (obtained from the SDK online platform):

[armedia3DTracker setKey:@"<your_app_unique_key>"];

set the tracker configuration for the target you need to recognize and track:

[armedia3DTracker setupTrackerWithConfigurationFile:configFile forCapturingAtWidth:captureWidth andHeight:captureHeight];

where:

configFile is the absolute path of the configuration file (XML) related to the 3D target you want to use, while captureWidth and captureHeight are the resolution in pixels of the incoming frames provided by the capture device.
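For instance, assuming the 3D target's files have been copied into the application bundle (the file name target_config.xml is just a placeholder for your actual configuration file), the path could be obtained like this:

// placeholder resource name: use your actual target's configuration file
NSString *configFile = [[NSBundle mainBundle] pathForResource:@"target_config" ofType:@"xml"];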

After this you are ready to initialize the tracker. This step could take some time because the tracker will load the pointcloud, read the configuration file and examine the provided photos, depending on the content of the configuration file (note that if you do not use the photo set but rely on the cache mechanism, this time is reduced considerably); for this reason it is advisable to execute the initialization in a background thread. You can easily do this by first setting a delegate for the tracker:

[armedia3DTracker setTrackerDelegate:self];

and then execute the initialization of the tracker:

[armedia3DTracker initTracker];

by doing so, the ViewController will receive a message (as declared in the ARMedia3DTrackerDelegate protocol, to which it must conform) when initialization is over (see below for details).
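For the sake of completeness, conformance is declared in the usual Objective-C way (MyViewController is just a placeholder for your own class name):

@interface MyViewController : UIViewController <ARMedia3DTrackerDelegate>
// ... properties and methods of your view controller ...
@end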

Finally in order to get a message when the initialization is over and the tracker is ready to track, just provide an implementation of the ARMedia3DTrackerDelegate protocol's method -(void)trackerInitCompleted:(BOOL):

-(void)trackerInitCompleted:(BOOL)status
{
	if(status)
	{
		// tracker successfully init'd and ready to track…
		[armedia3DTracker startTracker];
	}
	else
	{
		// tracker could not init, handle error...
	}
}

Note that you can safely start the device camera and display frames even if the tracker has not initialized yet because everything happens in the background.

In the application loop you can retrieve a new frame from the camera and pass it to the tracker, which will search it for the object you want to track:

[armedia3DTracker track:image];

here image is a cv::Mat object; if you are capturing using OpenCV nothing more is required, otherwise you just need to create a cv::Mat object providing the frame data you obtained using your capturing method.
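For instance, if you capture frames with AVFoundation, a cv::Mat can be wrapped around each frame's pixel buffer; this is only a sketch assuming BGRA output (kCVPixelFormatType_32BGRA), not the SDK's prescribed capture method:

// inside the AVCaptureVideoDataOutput delegate callback, assuming BGRA frames
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

size_t width = CVPixelBufferGetWidth(pixelBuffer);
size_t height = CVPixelBufferGetHeight(pixelBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
void *baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer);

// wrap the buffer in a cv::Mat (no copy) and hand it to the tracker
cv::Mat image((int)height, (int)width, CV_8UC4, baseAddress, bytesPerRow);
[armedia3DTracker track:image];

CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);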

You can check if the tracker is successfully tracking the object like this:

if([armedia3DTracker isTracking])
{
	// tracking, get pose...
	[armedia3DTracker getPose:pose];

	// show virtual content and update its pose...
}
else
{
	// not tracking, hide virtual content...
}

The pose is retrieved in the standard OpenGL format (x positive to the right, y positive upward and z positive backward); please take this into account when you integrate the tracker with your rendering engine.
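For example, assuming the 16 doubles form a column-major modelview matrix as OpenGL expects (an assumption you should verify against your rendering setup), feeding the pose to an OpenGL ES 1.x fixed-function pipeline could look like this:

// convert the double[16] pose to floats for OpenGL ES (column-major order assumed)
GLfloat modelview[16];
for(int i = 0; i < 16; i++)
	modelview[i] = (GLfloat)pose[i];

glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(modelview);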

For more details please refer to the SDK examples.

Android

In the following section we describe how to start using the tracker with the Eclipse IDE. If you use a different tool for Android development, please adapt the description accordingly.

First of all you must add armedia3dtracker.jar to the project; to do this, copy it into the "libs" folder of your project.

Be sure to also add the OpenCV libraries.

Then choose the application package identifier, because it will be required in order to obtain a valid SDK key, otherwise the tracker won't work. You can set this in the Android manifest XML file.

The supported architecture is armv7; the minimum Android version supported is 4.0.3.

Typically you would have an instance of the 3D tracker inside your custom Activity class devoted to managing the Augmented Reality experience. In this case, first of all import the SDK class:

import com.inglobetechnologies.armedia.tracker.sdk.ARMedia3DTracker;

It is also useful to import the ARMedia3DTrackerInitListener interface, as you will see below, so let's add this import too:

import com.inglobetechnologies.armedia.tracker.sdk.ARMedia3DTrackerInitListener;

Declare the instance of the tracker among other attributes of your Activity, as well as the variable that will hold the tracked pose:

private ARMedia3DTracker armedia3DTracker;
private double[] pose = new double[16];

Then you would typically create the tracker instance in your onCreate method:

armedia3DTracker = new ARMedia3DTracker(this);

set the application key (obtained from the SDK online platform):

armedia3DTracker.setKey("<your_app_unique_key>");

Then, since the tracker configuration needs the exact camera resolution (width and height) to be correctly initialized, you first need to start the camera and then use its callback to set the tracker configuration for the target you need to recognize and track:

armedia3DTracker.setupTrackerWithConfigurationFile(configFile, captureWidth, captureHeight);

where:

configFile is the absolute path of the configuration file (XML) related to the 3D target you want to use, while captureWidth and captureHeight are the resolution in pixels of the incoming frames provided by the capture device.
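For instance, when capturing with OpenCV's CameraBridgeViewBase (as in the SDK examples), the onCameraViewStarted callback of the CvCameraViewListener2 interface is a natural place to configure the tracker, since it receives the actual capture resolution; the path below is just an illustrative placeholder for your target's configuration file:

// assumes the Activity implements CameraBridgeViewBase.CvCameraViewListener2
// and imports android.os.Environment
@Override
public void onCameraViewStarted(int width, int height)
{
	// example path only: point this at your actual 3D target's configuration file
	String configFile = Environment.getExternalStorageDirectory()
			+ "/ARMediaSDKToolsetTests/tracker_config.xml";
	armedia3DTracker.setupTrackerWithConfigurationFile(configFile, width, height);
}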

After this you are ready to initialize the tracker. This step could take some time because the tracker will load the pointcloud, read the configuration file and examine the provided photos, depending on the content of the configuration file (note that if you do not use the photo set but rely on the cache mechanism, this time is reduced considerably); for this reason it is advisable to execute the initialization in a background thread. You can easily do this by first setting a listener for the tracker:

armedia3DTracker.setInitListener(this);

and then execute the initialization of the tracker:

armedia3DTracker.initTracker();

by doing so, the Activity (this) will receive a message (as declared in the ARMedia3DTrackerInitListener interface, which it must implement) when initialization is over (see below for details).
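For the sake of completeness, the interface is implemented in the usual Java way (MyActivity is just a placeholder for your own class name):

public class MyActivity extends Activity implements ARMedia3DTrackerInitListener
{
	// ... attributes and methods of your Activity ...
}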

Finally in order to get a message when the initialization is over and the tracker is ready to track, just provide an implementation of the ARMedia3DTrackerInitListener method public void onInitFinished(boolean status):

public void onInitFinished(boolean status)
{
	if(status)
	{
		// tracker successfully init'd and ready to track…
		armedia3DTracker.startTracker();
	}
	else
	{
		// tracker could not init, handle error...
	}
}

Note that you can safely start the device camera and display frames even if the tracker has not initialized yet because everything happens in the background.

In the application loop you can retrieve a new frame from the camera and pass it to the tracker, which will search it for the object you want to track:

armedia3DTracker.track(mRgba);

here mRgba is an OpenCV Mat object; if you are capturing using OpenCV nothing more is required, otherwise you just need to create an OpenCV Mat object providing the frame data you obtained using your capturing method.
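For instance, if you capture preview frames yourself in the default NV21 format, an RGBA Mat could be built like this (a sketch under that assumption, not the SDK's required input format):

// assumed imports: org.opencv.core.Mat, org.opencv.core.CvType, org.opencv.imgproc.Imgproc
// data holds an NV21 preview frame of size width x height (e.g. from onPreviewFrame)
Mat yuv = new Mat(height + height / 2, width, CvType.CV_8UC1);
yuv.put(0, 0, data);

Mat mRgba = new Mat();
Imgproc.cvtColor(yuv, mRgba, Imgproc.COLOR_YUV2RGBA_NV21);

armedia3DTracker.track(mRgba);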

You can check if the tracker is successfully tracking the object like this:

if(armedia3DTracker.isTracking())
{
	// tracking, get pose...
	armedia3DTracker.getPose(pose);

	// show virtual content and update its pose...
}
else
{
	// not tracking, hide virtual content...
}

The pose is retrieved in the standard OpenGL format (x positive to the right, y positive upward and z positive backward); please take this into account when you integrate the tracker with your rendering engine.

For more details please refer to the SDK examples (here).

Unity Plugin

The ARMedia 3D Tracker is available for the Unity 3D platform (deployment supported only for Android and iOS) in the form of a package. To add real-time 3D tracking capabilities to your apps, just import the ARMedia3DTracker.package file into your Unity 3D project. Although the package provides lots of resources and examples, the only things that are strictly required are the ARMedia3DTracker.cs script and the libraries that form the core of the tracker plugin.

The very first step you usually do is to create an empty GameObject and add the ARMedia3DTracker.cs script to it. If you look at the Inspector view, you will see that this component requires some properties to be set, namely:

- the app key: you obtain the app key from the ARMedia SDK developers' portal; before asking for a key you must choose the bundle identifier for your app and use it to obtain your unique key;

- the configuration file: here you have to write the path to the tracker configuration file (XML) you want to use to initialise the tracker (usually you copy the 3D targets used by your app into the StreamingAssets folder, so the path you are supposed to provide is considered relative to that folder - for details you can see the code of the ARMedia3DTracker.cs script);

- the camera resolution: two options are available, LOW and HIGH;

- the track camera mode option: if you do not set this option, the 3D pose provided by the tracker will be applied to a specified GameObject (see below); otherwise it will be applied to the camera specified by the corresponding property;

- the trackable object: here you specify a GameObject that will be transformed by the 3D pose provided by the tracker (if the track camera mode is not set) and that will be shown/hidden depending on the tracker status (i.e. whether it is tracking or not);

- the main camera: use this property to refer to the camera that will render the augmented content; when the tracker initialization is over, the projection matrix of the specified camera will be set, and when the track camera mode is chosen the camera pose will be updated using the pose returned by the tracker.

Refer to the Start() and Update() methods for details about the way the plugin works and if you need deeper control over the way the tracker is integrated into your app. Also have a look at the accompanying examples to see how to set up simple scenes.

Please note that in order to successfully build the project that Unity 3D creates, you must manually add some dependencies as shown below:

- for iOS add the AssetsLibrary.framework and opencv2.framework frameworks to the "Linked Frameworks and Libraries" section

- for Android, add the "armeabi-v7a" folder you find in the Android section of the plugin to the project's "libs" folder.

Note that if you modify your Unity 3D project, you should then choose to 'Append' to the previously created project, otherwise you will lose the dependencies set above.