# 3D reconstruction from multiple images in Python

The goal is to build a 3D reconstruction of a scene. If we have two images of the same scene taken from slightly different viewpoints, we can recover depth information from them in an intuitive way. That intuition is captured by the standard stereo relation

    disparity = x − x′ = B·f / Z

where x and x′ are the positions of the same scene point in the left and right images, B is the baseline (the distance between the two camera centers), f is the focal length, and Z is the depth. In short, the depth of a point in the scene is inversely proportional to its disparity, the difference in position of its corresponding image points relative to their camera centers. So once we find corresponding matches between the two images, we can derive a depth value for every matched pixel.
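A minimal sketch of the disparity-to-depth relation above, using NumPy. The focal length, baseline, and disparity values are hypothetical, chosen only to illustrate the inverse proportionality:

```python
import numpy as np

# Depth from disparity: Z = f * B / (x - x'), where f is the focal
# length in pixels, B the baseline between the two camera centers,
# and (x - x') the disparity of a matched point.
f = 700.0   # focal length in pixels (assumed value)
B = 0.06    # baseline in metres (assumed value)

# Disparities of three hypothetical matched points, in pixels.
disparity = np.array([35.0, 70.0, 140.0])

# Larger disparity -> smaller depth: nearby points shift more between views.
depth = f * B / disparity
print(depth)  # [1.2 0.6 0.3]  (metres)
```

Doubling the disparity halves the recovered depth, which is exactly the inverse proportionality the equation states.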

The conveyor belt then rotates around the object in a circle while many images are captured (perhaps 10 images per second), which would let the Kinect capture the object from every angle, including the depth data; theoretically this is possible. In the pinhole camera model, a scene view is formed by projecting 3D points onto the image plane using a perspective transformation. The matrix of intrinsic parameters does not depend on the scene viewed.
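A minimal sketch of that perspective projection, assuming an intrinsic matrix K with hypothetical focal lengths (fx, fy) and principal point (cx, cy):

```python
import numpy as np

# Intrinsic matrix K: focal lengths and principal point in pixels.
# These values are assumptions for illustration only.
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])

# A 3D point in camera coordinates (metres).
X = np.array([0.1, -0.05, 2.0])

# Perspective projection: multiply by K, then divide by the depth z
# to get pixel coordinates (u, v) on the image plane.
x_h = K @ X
u, v = x_h[:2] / x_h[2]
print(u, v)  # 355.0 222.5
```

Note that K encodes only camera properties (focal length, principal point), which is why it stays the same no matter what scene is viewed.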

So, once estimated, it can be re-used as long as the focal length is fixed (in the case of a zoom lens). The joint rotation-translation matrix [R|t] is called the matrix of extrinsic parameters; it maps the coordinates of a point from the world frame into the camera frame.
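Putting the two together, the full camera matrix is P = K [R|t]: the intrinsics K can be re-used across views, while the extrinsics [R|t] change with every camera pose. A sketch with hypothetical rotation and translation values:

```python
import numpy as np

# Extrinsics: rotation R and translation t map world coordinates into
# camera coordinates. A 30-degree rotation about the z-axis and a
# 1.5 m translation are assumed values for illustration.
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([0.0, 0.0, 1.5])

# Joint rotation-translation matrix [R|t] (3x4 extrinsic matrix).
Rt = np.hstack([R, t[:, None]])

# Intrinsics (assumed, same form as above); full camera matrix P = K [R|t].
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])
P = K @ Rt

# Project a world point given in homogeneous coordinates.
Xw = np.array([0.2, 0.1, 1.0, 1.0])
u, v, w = P @ Xw
u_px, v_px = u / w, v / w
print(u_px, v_px)
```

Only Rt needs to be re-estimated when the camera moves; K stays fixed as long as the focal length does.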