Inside FLARManager: Loading Collada Models

“Proto is so cute. I want one!”

Well, here’s how you do it.

This tutorial will run you through the basics of getting an augmented reality application with a Collada model up and running. This tutorial is almost identical to the basic augmented reality tutorial. You might want to try that one first, or even the 2D tutorial. Or, you might just want to dive right in.


FLARManager Tutorial: Augmented Reality with Collada models

Download the source for this tutorial here:
and place it in the root of the FLARManager /src/ folder.
Print out any of the pattern .pngs in /resources/flarToolkit/patterns, found in the FLARManager distro, to use with the tutorial.

NOTE: This tutorial demonstrates the Papervision route; however, I suggest you migrate to Away3D after getting through this example. Papervision is a dead project, but Away3D is still under active development. The source for the Collada example in Away3D can be found here.
This tutorial will demonstrate how to draw an animated model wherever the tracker sees a marker. (Big mahalo to Tom Tallian for the Scout model!) We’ll start with just one model for now; things get more complex when you want to support multiple models tied to multiple markers.

In the application class’ constructor, we wait for the class to be added to the stage, so that we have a reference to the stage:
this.addEventListener(Event.ADDED_TO_STAGE, this.onAdded);
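The handler removes its own listener and then kicks off the rest of the setup. A minimal sketch (the init method name here is an assumption, not part of the FLARManager API):

private function onAdded (evt:Event) :void {
    this.removeEventListener(Event.ADDED_TO_STAGE, this.onAdded);
    this.init();    // proceed with FLARManager setup, now that this.stage is available.
}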

Once the application is added to the stage, we can begin setting things up. First, we create a FLARManager instance and pass it the path to an external xml configuration file. For now, we’ll use flarConfig.xml (located in the /resources/flar/ folder in the FLARManager distro). We also pass an instance of FLARToolkitManager, to use the FLARToolkit tracking library, and a reference to the stage.
var flarManager:FLARManager = new FLARManager("flarConfig.xml", new FLARToolkitManager(), this.stage);

Once you’re comfortable with the basics of FLARManager, you can edit this config file, or create your own, as you see fit. More information on configuration options lives here.

We want to see the video capture, so let’s add it to the stage. FLARManager creates a default video source all ready to go, though the source can also be modified via the configuration file.
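Assuming the default video source, adding it to the display list looks something like the following sketch. Note that flarSource is typed as an interface, so it has to be cast to a Sprite before it can be passed to addChild:

// add the video capture to the display list.
this.addChild(Sprite(this.flarManager.flarSource));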

FLARManager uses an event model to notify any interested parties of newly-detected markers, changes in already-detected markers, and marker removal. We add FLARMarkerEvent handlers to respond to these changes.

this.flarManager.addEventListener(FLARMarkerEvent.MARKER_ADDED, this.onMarkerAdded);
this.flarManager.addEventListener(FLARMarkerEvent.MARKER_UPDATED, this.onMarkerUpdated);
this.flarManager.addEventListener(FLARMarkerEvent.MARKER_REMOVED, this.onMarkerRemoved);

We’ll write our event handlers in just a bit. First, we wait for FLARManager to initialize before setting up the Papervision3D environment.

this.flarManager.addEventListener(Event.INIT, this.onFlarManagerInited);


Setting up Papervision3D

Once FLARManager has finished initializing, we can set up the Papervision3D environment:

private function onFlarManagerInited (evt:Event) :void {
    this.flarManager.removeEventListener(Event.INIT, this.onFlarManagerInited);

    this.scene3D = new Scene3D();
    this.viewport3D = new Viewport3D(this.stage.stageWidth, this.stage.stageHeight);
    this.camera3D = new FLARCamera_PV3D(this.flarManager, new Rectangle(0, 0, this.stage.stageWidth, this.stage.stageHeight));
    this.renderEngine = new LazyRenderEngine(this.scene3D, this.camera3D, this.viewport3D);

    this.pointLight3D = new PointLight3D();
    this.pointLight3D.x = 1000;
    this.pointLight3D.y = 1000;
    this.pointLight3D.z = -1000;

    this.addEventListener(Event.ENTER_FRAME, this.onEnterFrame);
}

Papervision has to re-render the scene every frame, so we add an ENTER_FRAME event handler in which we’ll do that. (We’ll get to that part in a sec.)


Tracker camera parameters

This section is extra credit, but I thought you might want to know about that FLARCamera_PV3D line…

Many tracking libraries compensate for distortion caused by the camera lens by referring to an external camera parameters file. With flare*tracker and flare*NFT, this file is /resources/flare/cam.ini; for FLARToolkit it is /resources/flarToolkit/FLARCameraParams.dat. FLARManager has a camera package that contains camera classes that can parse and apply the data within these files to incorporate this compensation into the displays generated by different 3D frameworks.

FLARCamera_PV3D requires the information from the camera parameters file, but FLARManager cannot provide this until it has loaded and parsed this file. Therefore, we wait to initialize the Papervision3D environment until after FLARManager has initialized. Once there, we pass a reference to FLARManager into FLARCamera_PV3D, from where the loaded camera parameters are extracted and applied.


Create the Model

Let’s set up a single DAE instance that we’ll map to the detected marker. The DAE class, in the ASCollada library, provides a simple framework for loading and displaying Collada models in Flash applications.

// load the model.
// (this model has to be scaled and rotated to fit the marker; every model is different.)
var model:DAE = new DAE(true, "model", true);
model.rotationX = 90;
model.rotationZ = 90;
model.scale = 0.5;

// create a container for the model, that will accept matrix transformations.
this.modelContainer = new DisplayObject3D();
this.modelContainer.addChild(model);
this.modelContainer.visible = false;
this.scene3D.addChild(this.modelContainer);

We place the DAE inside of a DisplayObject3D so that we can position the DAE as needed to match the scene. This particular model needs to be rotated and scaled to align with the marker. If we applied the transformation matrix directly to the DAE instance, the rotation and scale would be overwritten, and the model would not display correctly.


Responding to FLARMarkerEvents

Now that FLARManager and Papervision3D are both set up, we can make the two work together by drawing Papervision3D objects when handling FLARMarkerEvents coming from FLARManager.

private function onMarkerAdded (evt:FLARMarkerEvent) :void {
    this.modelContainer.visible = true;
    this.activeMarker = evt.marker;
}

private function onMarkerRemoved (evt:FLARMarkerEvent) :void {
    this.modelContainer.visible = false;
    this.activeMarker = null;
}

For the purposes of this tutorial, we’re simply toggling the visibility of the model as a marker is added and removed from the camera’s view. We’re also keeping track of the active FLARMarker, from which we’ll extract and apply the transformation matrix that makes the model appear to be tethered to the marker.
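We also registered a MARKER_UPDATED handler earlier. In this single-marker example it has little work to do, since we pull the latest matrix from this.activeMarker every frame anyway; a minimal sketch, just to keep the reference current:

private function onMarkerUpdated (evt:FLARMarkerEvent) :void {
    // the marker instance updates itself; just keep the model visible
    // and hold on to the latest reference.
    this.modelContainer.visible = true;
    this.activeMarker = evt.marker;
}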

For more on FLARMarkerEvents, see the 2D tutorial.


Updating the Model

As mentioned earlier, Papervision has to re-render the scene every frame. We’ll take advantage of this opportunity to apply the latest transformation matrix to the model, to make it look like it’s Augmenting our Reality.

private function onEnterFrame (evt:Event) :void {
    // apply the latest transformation matrix from the active marker, if there is one.
    if (this.activeMarker) {
        this.modelContainer.transform = PVGeomUtils.convertMatrixToPVMatrix(this.activeMarker.transformMatrix, this.flarManager.flarSource.mirrored);
    }

    // update the Papervision3D view.
    this.renderEngine.render();
}

Note that the FLARMarker instance we’re tracking as this.activeMarker will be updating itself continuously; FLARManager handles this for us. We could listen for changes as MARKER_UPDATED events, but since we have to render the Papervision scene every frame, we can just grab the updated information from this.activeMarker while we’re at it.

You might also notice that long, ugly method call, PVGeomUtils.convertMatrixToPVMatrix. (Unfortunately at times like this, I’m a firm believer in self-commenting code.) flare*tracker, FLARToolkit, and other tracking libraries use different coordinate systems than Papervision (and Flash 3D, and Away3D, and Sandy, and Alternativa), so we have to convert the tracking library’s transformation matrix into numbers that make sense to Papervision before applying the matrix to the model. Coincidentally, PVGeomUtils has just the method for us!

That should take care of it — you should now see a red-shirted fella perched gently on the marker in your hand!

To handle multiple models and multiple markers, we have to be smarter about managing all the DisplayObject3D instances and active FLARMarker instances. One way to do this (though with simple cubes, not Collada models) is laid out in this example:

Note that this gets fairly sticky fairly quickly, as models take much longer to load than simple cubes, and also consume more processing power to animate. A good multi-model application will use low-poly-count models, load each model on-demand, and be sure not to load models more than once per session.
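One way to sketch that bookkeeping is a Dictionary that maps each active FLARMarker to its own container, so markers can come and go independently. (This is an illustrative sketch, not the example's actual code; the loadModelFor method is hypothetical, standing in for however you load or reuse a model for a given pattern.)

// requires: import flash.utils.Dictionary;
// map each detected FLARMarker to its own DisplayObject3D container.
private var containersByMarker:Dictionary = new Dictionary(true);

private function onMarkerAdded (evt:FLARMarkerEvent) :void {
    // loadModelFor is hypothetical: return a (cached) container for this pattern.
    var container:DisplayObject3D = this.loadModelFor(evt.marker.patternId);
    this.containersByMarker[evt.marker] = container;
    this.scene3D.addChild(container);
}

private function onMarkerRemoved (evt:FLARMarkerEvent) :void {
    var container:DisplayObject3D = this.containersByMarker[evt.marker];
    this.scene3D.removeChild(container);
    delete this.containersByMarker[evt.marker];
}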

Also, FLARManager supports arbitrary aspect ratios. No need to stick with 4:3! Here’s a 16:9 example, if you want to get all cinematic: