Environment modelling for robot vision using Kinect
This research presents an integrated approach to reconstructing three-dimensional environments for robotic navigation. It focuses on three-dimensional surface reconstruction of input data captured with the Kinect, a depth sensor. As the number of application areas making use of point clouds grows, so does the demand for continuous surface representations that faithfully represent unorganized point sets and can be rendered for visualization. The main goal of this research is the study of various surface reconstruction algorithms and the creation of a three-dimensional model of an object and/or an entire three-dimensional environment from a set of point clouds. The process starts by scanning an environment or an object with the Kinect and storing the generated point cloud using OpenGL and Microsoft Visual Studio. A mesh is then created from the stored point cloud in MATLAB using a computational geometric approach called Delaunay triangulation. Finally, the three-dimensional model is obtained by combining the surfaces and applying a surface reconstruction method.
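The pipeline above, from an unorganized point set to a triangle mesh via Delaunay triangulation, can be sketched as follows. This is a minimal illustration in Python (the thesis itself uses MATLAB), using SciPy's `Delaunay` class on a synthetic point cloud that stands in for Kinect depth data; the grid, the depth function, and the 2.5D lifting step are all assumptions made for the example, not the thesis's actual data or code.

```python
import numpy as np
from scipy.spatial import Delaunay

# Synthetic stand-in for a Kinect depth scan: a 10x10 grid of (x, y)
# sample positions with a smooth depth value z at each position.
xs, ys = np.meshgrid(np.linspace(0.0, 1.0, 10), np.linspace(0.0, 1.0, 10))
zs = np.sin(xs * np.pi) * np.cos(ys * np.pi)  # hypothetical depth values
points = np.column_stack([xs.ravel(), ys.ravel(), zs.ravel()])

# 2.5D Delaunay triangulation: triangulate the (x, y) projection of the
# points, then reuse the resulting triangles as faces over the 3D points,
# turning the unorganized point set into a continuous surface mesh.
tri = Delaunay(points[:, :2])
mesh_vertices = points          # (n_points, 3) array of 3D vertices
mesh_faces = tri.simplices      # (n_faces, 3) array of vertex indices
```

`mesh_vertices` and `mesh_faces` together form an indexed triangle mesh that could be handed to a renderer (e.g. OpenGL) for visualization. The 2.5D projection trick works here because depth-sensor data is single-valued along the viewing direction; fully general 3D point clouds need a true surface reconstruction method instead.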