Photogrammetry is the science, technology and art of obtaining reliable information about the Earth, its environment, and other physical objects and processes from noncontact imaging and other sensor systems, through recording, measuring, analysing and representation. Here, photogrammetry is used to create 3D models from a series of photographs taken at various angles.

Photogrammetry Using Photographic Input

Autodesk 123D Catch can be used for basic geometry extraction. The resulting .obj file will be much too large for the 1 MB budget required for deployment in augmented reality. Polygon reduction can be achieved using the Quadric Edge Collapse Decimation technique outlined in the Point Cloud to Augmented Reality Conversion section below.
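A rough back-of-envelope check helps choose a face count that fits the 1 MB budget. This is an illustrative sketch, not a course tool; the bytes-per-line figures are assumptions about typical ASCII .obj output:

```python
# Rough size estimate for an ASCII .obj triangle mesh.
# Per-line byte counts are assumptions (coordinates at ~6 decimal places).
BYTES_PER_VERTEX_LINE = 40   # "v x y z"
BYTES_PER_FACE_LINE = 30     # "f a/at b/bt c/ct"

def estimated_obj_bytes(target_faces):
    # For a closed triangle mesh, Euler's formula gives roughly half as
    # many vertices as faces (V ~= F/2 + 2).
    vertices = target_faces // 2 + 2
    return (vertices * BYTES_PER_VERTEX_LINE
            + target_faces * BYTES_PER_FACE_LINE)

print(estimated_obj_bytes(7000))  # 7000 faces is the decimation target below
```

By this estimate, a 7000-face mesh lands comfortably under 1 MB, which is why that target appears in the decimation workflow.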


14th & AR, New York City, Art in Odd Places, 14th Street between Union Square and 9th Avenue, New York City, October 9th – 12th, 2014.

Point Cloud to Augmented Reality Conversion

A point cloud is a set of points that defines the shape of a scene or object, plus per-vertex RGB data. A point cloud has no edges or polygon faces. Point clouds are created with 3D scanners such as the Xbox 360 Kinect or Carmine RD sensor at the consumer level, and the Leica ScanStation P20 professionally. A common file format is .ply.


Metro-NeXt Station, Zürich meets New York, Vanderbilt Hall, Grand Central Terminal, New York, NY
May 17th – 22nd, 2014.


Virtual Zürich, Zürich meets New York, Vanderbilt Hall, Grand Central Terminal, New York, NY
May 17th – 22nd, 2014.

Research Task: Point Cloud to Poly Mesh


  • Mr P. MeshLab Tutorials
  • Download and launch MeshLab
  • File > Import Mesh Point_Cloud.ply
  • Filters > Remeshing, Simplification and Reconstruction > Surface Reconstruction: Poisson
  • Octree Depth = 10
  • Filters > Cleaning and Repairing > Select Faces with edges longer than…
  • Edge Threshold = 2.49187 (or default)
  • Delete Selected Faces
  • Filters > Remeshing, Simplification and Reconstruction > Quadric Edge Collapse Decimation
  • Target Number of Faces = 7000
  • File > Export Mesh As = Point_Cloud.obj
  • Launch Maya
  • File Import > Point_Cloud.obj
  • Polygon Menu Set > Create UVs > Automatic Mapping
  • File > Export Selection
  • File of type = OBJexport (UVs are written into the .obj itself; material definitions go in the companion .mtl file)
  • From MeshLab
  • File > Import Mesh = .obj
  • Filters > Texture > Transfer Vertex Attributes to Texture
  • Source Mesh = .ply
  • Target Mesh = .obj
  • Texture width (px) = 1024
  • Texture height (px) = 1024
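The Quadric Edge Collapse Decimation step above ranks candidate edge collapses by a quadric error metric: the sum of squared distances from the merged vertex to the planes of the faces it replaces. A stripped-down Python illustration of that cost (not MeshLab's implementation, which also solves for the optimal merged position):

```python
import math

def triangle_plane(v0, v1, v2):
    """Unit plane (a, b, c, d) with ax + by + cz + d = 0 through a triangle."""
    u = [v1[i] - v0[i] for i in range(3)]
    w = [v2[i] - v0[i] for i in range(3)]
    n = [u[1] * w[2] - u[2] * w[1],          # cross product = face normal
         u[2] * w[0] - u[0] * w[2],
         u[0] * w[1] - u[1] * w[0]]
    length = math.sqrt(sum(c * c for c in n))
    n = [c / length for c in n]
    return (n[0], n[1], n[2], -sum(n[i] * v0[i] for i in range(3)))

def quadric_error(v, planes):
    """Sum of squared point-to-plane distances (the quadric error metric)."""
    return sum((p[0] * v[0] + p[1] * v[1] + p[2] * v[2] + p[3]) ** 2
               for p in planes)

def collapse_cost(va, vb, planes):
    """Cost of collapsing edge (va, vb) to its midpoint (simplified:
    real decimators optimize the merged vertex position)."""
    mid = [(a + b) / 2 for a, b in zip(va, vb)]
    return quadric_error(mid, planes)
```

Collapsing an edge on a flat region costs essentially zero, so flat areas are simplified first and curved detail survives down to the 7000-face target.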


RGBD means Red, Green, and Blue plus depth data. A simple gaming depth sensor, like the Xbox Kinect, can be used to create RGBD objects. Undergraduate Dan Barkus has designed and printed an RGBD rig for the Xbox Kinect. See the file in Unity Web Player.
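Each RGBD pixel can be back-projected into a colored 3D point with the standard pinhole camera model. A minimal sketch, assuming approximate 640×480 depth-camera intrinsics (the focal lengths and principal point below are illustrative placeholders, not calibrated values for Barkus's rig):

```python
# Illustrative pinhole intrinsics for a 640x480 depth camera; a real rig
# would use values from calibration, not these placeholders.
FX, FY = 525.0, 525.0      # focal lengths in pixels (assumed)
CX, CY = 319.5, 239.5      # principal point (assumed image center)

def rgbd_to_point(u, v, depth_m, rgb):
    """Back-project pixel (u, v) with depth in meters to a colored 3D point."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return (x, y, depth_m), rgb
```

Running this over every depth pixel, paired with the color image, yields exactly the kind of per-vertex-colored point cloud the .ply workflow above starts from.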


Dan Barkus and Matt Coughlin calibrating the RGBD rig, Spring semester 2014.

Research Task: RGBD Rigging


Photorealistic Human Characters

Volumental can be used to convert data collected with the Xbox Kinect to RGBD objects. View this model of Andrew Sutliff in Chrome.


Research Task: Create, rig and animate a model using photogrammetry techniques

  • Take photographs
  • Convert to geometry with UV texture files
  • Rig character
  • Animate basic walk, run, jump and idle
  • Deploy animated character in Unity 3D
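The animation step amounts to a small state machine that Unity's Animator controller would drive from movement inputs. A language-agnostic sketch in Python (the state names match the four required clips; the speed threshold is an assumption, not the course's actual controller):

```python
RUN_SPEED = 3.0   # m/s threshold between walk and run (assumed value)

def select_animation(speed, jumping):
    """Pick one of the four required clips from simple movement inputs."""
    if jumping:
        return "jump"
    if speed >= RUN_SPEED:
        return "run"
    if speed > 0:
        return "walk"
    return "idle"
```

In Unity the same transitions would be expressed as Animator parameters and transition conditions rather than code, but the logic is identical.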

Photogrammetry Resources

Place-Based Virtual and Augmented Reality

