Photogrammetry is the science, technology and art of obtaining reliable information about the Earth and its environment, and about other physical objects and processes, from noncontact imaging and other sensor systems through recording, measuring, analysing and representation. Here, photogrammetry is used to create 3D models from a series of photographs taken at various angles.
Photogrammetry Using Photographic Input
Autodesk 123D Catch can be used for basic geometry extraction. The resulting .obj file will be much too large for the 1 MB budget required for deployment in augmented reality. Polygon reduction can be achieved using the Quadric Edge Collapse Decimation technique outlined in the Point Cloud to Augmented Reality Conversion section below.
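Before running the decimation step, it helps to check how far an exported .obj is from the deployment budget. The sketch below is a minimal, hypothetical helper (not part of any tool mentioned above) that counts vertex and face records in a Wavefront .obj file and compares the file size against the 1 MB limit:

```python
import os

def obj_stats(path):
    """Count vertex and face records in a Wavefront .obj file and
    report its size against a 1 MB deployment budget.
    Hypothetical helper; budget taken as 1,000,000 bytes here."""
    vertices = faces = 0
    with open(path) as f:
        for line in f:
            if line.startswith("v "):
                vertices += 1
            elif line.startswith("f "):
                faces += 1
    size = os.path.getsize(path)
    return {"vertices": vertices, "faces": faces,
            "bytes": size, "within_budget": size <= 1_000_000}
```

If the file is over budget, lower the target face count in the decimation step and re-export.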
14th & AR, New York City, Art in Odd Places, 14th Street between Union Square and 9th Avenue, New York City, October 9th – 12th, 2014.
Point Cloud to Augmented Reality Conversion
A point cloud is a set of points that defines the shape of a scene or object, plus per-vertex RGB data; it has no edges or polygon faces. Point clouds are created with 3D scanners such as the Xbox 360 Kinect or Carmine sensor at the consumer level and the Leica ScanStation P20 professionally. File formats include .ply.
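A .ply point cloud of the kind described above can be inspected with a few lines of plain Python. This is a minimal sketch, assuming an ASCII-format file whose vertex records are `x y z red green blue` (a common layout; real files vary and should be read with a full PLY library):

```python
def read_ascii_ply(path):
    """Minimal reader for ASCII .ply point clouds whose vertex
    records are: x y z red green blue.
    Returns a list of ((x, y, z), (r, g, b)) tuples.
    Sketch only -- skips format checks a real parser would do."""
    with open(path) as f:
        assert f.readline().strip() == "ply"
        count = 0
        for line in f:
            line = line.strip()
            if line.startswith("element vertex"):
                count = int(line.split()[-1])   # number of points
            elif line == "end_header":
                break
        points = []
        for _ in range(count):
            x, y, z, r, g, b = f.readline().split()
            points.append(((float(x), float(y), float(z)),
                           (int(r), int(g), int(b))))
    return points
```

Each returned tuple pairs a 3D position with its per-vertex colour, which is exactly the data the texture-transfer step later in this workflow bakes into an image.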
Metro-NeXt Station, Zürich meets New York, Vanderbilt Hall, Grand Central Terminal, New York, NY
May 17th – 22nd, 2014.
Virtual Zürich, Zürich meets New York, Vanderbilt Hall, Grand Central Terminal, New York, NY
May 17th – 22nd, 2014.
Research Task: Point Cloud to Poly Mesh
- Mr P. MeshLab Tutorials
- Download and launch MeshLab
- File > Import Mesh Point_Cloud.ply
- Filters > Remeshing, Simplification and Reconstruction > Surface Reconstruction: Poisson
- Octree Depth = 10
- Filters > Cleaning and Repairing > Select Faces with edges longer than…
- Edge Threshold = 2.49187 (or default)
- Delete Selected Faces
- Filters > Remeshing, Simplification and Reconstruction > Quadric Edge Collapse Decimation
- Target Number of Faces = 7000
- File > Export Mesh As = Point_Cloud.obj
- Launch Maya
- File Import > Point_Cloud.obj
- Polygon Menu Set > Create UVs > Automatic Mapping
- File > Export Selection
- File of type = OBJexport (will include UV map in .mtl file)
- From MeshLab
- File > Import Mesh = .obj
- Filters > Texture > Transfer Vertex Attributes to Texture
- Source Mesh = .ply
- Target Mesh = .obj
- Texture width (px) = 1024
- Texture height (px) = 1024
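The cleanup step in the list above (selecting faces whose edges exceed a length threshold, then deleting them) removes the long, stretched triangles that Poisson reconstruction produces at the edges of the scan. As a plain-Python illustration of what MeshLab's filter does, assuming a simple triangle-mesh representation (this is a sketch, not MeshLab's implementation):

```python
import math

def faces_with_long_edges(vertices, faces, threshold):
    """Return indices of triangle faces having any edge longer than
    `threshold` -- an analogue of MeshLab's 'Select Faces with edges
    longer than...' filter. `vertices` is a list of (x, y, z) tuples;
    `faces` is a list of (i, j, k) vertex-index triples."""
    def dist(a, b):
        return math.dist(vertices[a], vertices[b])
    selected = []
    for i, (a, b, c) in enumerate(faces):
        if max(dist(a, b), dist(b, c), dist(c, a)) > threshold:
            selected.append(i)
    return selected
```

Deleting the selected faces (the "Delete Selected Faces" step) then amounts to keeping only the face indices not returned by this function.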
RGBD means Red, Green, and Blue plus depth data. A simple gaming depth sensor, like the Xbox Kinect, can be used to create RGBD objects. Undergraduate Dan Barkus has designed and printed an RGBD rig for the Xbox Kinect. See the file in Unity Web Player.
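Conceptually, RGBD data is just a colour image fused with an aligned depth map. A minimal sketch, assuming the two images are already registered to the same resolution (sensors like the Kinect deliver depth in millimetres; the function name and list-of-lists representation are illustrative, not any particular API):

```python
def make_rgbd(color, depth):
    """Fuse a colour image and an aligned depth map into per-pixel
    RGBD tuples (r, g, b, d). `color` is rows of (r, g, b) triples;
    `depth` is rows of depth values (e.g. millimetres).
    Assumes both inputs share the same width and height."""
    assert len(color) == len(depth) and len(color[0]) == len(depth[0])
    return [[(*c, d) for c, d in zip(crow, drow)]
            for crow, drow in zip(color, depth)]
```

Each (r, g, b, d) pixel can then be back-projected into a coloured 3D point, which is how RGBD sensors produce the point clouds discussed above.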
Dan Barkus and Matt Coughlin calibrating the RGBD rig, Spring semester 2014.
Research Task: RGBD Rigging
Photorealistic Human Characters
Research Task: Create, rig and animate a model using photogrammetry techniques
- Take photographs
- Convert to geometry with UV file textures
- Rig character
- Animate basic walk, run, jump and idle
- Deploy animated character in Unity 3D
- Autodesk 123D Catch
- A more powerful workflow for extracting geometry from photographs is to use ARC 3D, a web service for remote 3D reconstruction, and process the results in MeshLab. Here is a series of useful MeshLab tutorial videos.
- Scientists from the Computer Vision and Geometry Lab of ETH Zurich developed an app that turns an ordinary smartphone into a mobile 3D scanner.
- Structure Sensor is an iPad-mounted 3D scan sensor which is under development.
- Zscanner 700cX – 3D Systems
- Lynda.com Animating Characters with Mecanim in Unity 3D
- HydraDeck Humans
- Infinite Realities
- Vertex Color Baking Tutorial
- Creaform 3D
- Leica HDS Laser Scanners