Wednesday, 21 March 2018

Working of Photogrammetry and Remote Sensing


Clarification of the term: Photogrammetry

Photogrammetry, as the name suggests, is a technique for calculating 3-dimensional coordinates, using photographs as the primary medium for metrology. The underlying principle of photogrammetry is triangulation (in aerial mapping it is often called aerial triangulation). By taking at least two photographs of the same object from different positions, so-called "lines of sight" can be developed from each camera position to points on the object. These lines of sight (which are optical rays) are then mathematically intersected to produce the 3-dimensional coordinates of the points of interest.
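To make the intersection idea concrete, here is a minimal sketch (not from the original article) that intersects two lines of sight, assuming the two camera positions and the ray directions toward the same object point are already known; all names and numbers are illustrative.

```python
import numpy as np

def intersect_rays(c1, d1, c2, d2):
    """Return the 3D point closest to both lines of sight.

    c1, c2 : 3D camera centers
    d1, d2 : direction vectors of the rays (lines of sight)
    """
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    # Solve for ray parameters t1, t2 minimizing |(c1 + t1*d1) - (c2 + t2*d2)|
    A = np.stack([d1, -d2], axis=1)                       # 3x2 system
    t1, t2 = np.linalg.lstsq(A, c2 - c1, rcond=None)[0]
    p1, p2 = c1 + t1 * d1, c2 + t2 * d2
    return (p1 + p2) / 2                                  # midpoint of closest approach

# Example: two cameras 10 m apart, both sighting a point roughly 50 m away
c1 = np.array([0.0, 0.0, 0.0])
c2 = np.array([10.0, 0.0, 0.0])
target = np.array([5.0, 50.0, 2.0])
print(intersect_rays(c1, target - c1, c2, target - c2))   # ~[5, 50, 2]
```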
The Prussian architect Albrecht Meydenbauer coined the term photogrammetry in 1867.
He produced some of the earliest topographic maps and elevation drawings. Photogrammetry is best known for topographic mapping, but in recent years the technique has spread into many other fields. Industries such as engineering, architecture, underwater surveying, forensics, geology, medicine, movies, games, and many other areas require precise 3D data.

Branches of photogrammetry



There are two broad categories of photogrammetry:
  • Metric Photogrammetry: This involves making precise measurements and calculations on photographs to determine the size, shape, and position of photographic features. These calculations are needed to obtain other data, such as the relative locations (coordinates) of points, areas, and volumes.
    The photographs are taken with a calibrated metric camera, and this branch is used in engineering fields such as surveying.
  • Interpretive Photogrammetry: This branch deals with recognizing and identifying features in a photograph from characteristics such as size, shape, shadow, tone, texture, and pattern. The image is interpreted to add meaning and intelligence to the raw data.
Remote Sensing

Remote sensing technology is an essential companion to photogrammetry; it also collects data from images. The term comes from the fact that you do not need to physically visit the place you want data about.

So, then what's the difference between remote sensing and photogrammetry?

The difference lies in the final information we get. Remote sensing interprets differences in color (spectral response), which is why land use/land cover is one of its primary outputs. The technique originally grew out of exploiting the many color zones in satellite images to create 2D data, primarily for GIS.
Nowadays, remote sensing devices are far more advanced and go well beyond 2D data gathering. Modern software tools cover a much wider range of technologies, such as image mosaicking, 3D visualization, RADAR, GIS, and softcopy photogrammetry.
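As one concrete illustration of turning spectral differences into a thematic 2D layer, here is a small sketch computing NDVI (Normalized Difference Vegetation Index), a common vegetation/land-cover indicator; the band arrays below are made up, and in practice they would come from a satellite scene.

```python
import numpy as np

def ndvi(red, nir):
    """NDVI = (NIR - Red) / (NIR + Red); values near 1 suggest dense vegetation."""
    red = red.astype(float)
    nir = nir.astype(float)
    return (nir - red) / np.maximum(nir + red, 1e-6)   # guard against division by zero

# Toy 2x2 scene: pixels with bright NIR and dark red behave like vegetation
red = np.array([[0.10, 0.30], [0.25, 0.05]])
nir = np.array([[0.60, 0.35], [0.30, 0.55]])
vegetation_mask = ndvi(red, nir) > 0.4   # crude "vegetated" land-cover class
print(ndvi(red, nir))
print(vegetation_mask)
```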

Concepts to focus on:
  • Radiometric resolution.
  • Spatial resolution.
  • Temporal resolution.
  • Spectral resolution.
  • The radiometric resolution represents the capacity of the sensor to measure the brightness of objects, i.e., the strength of the reflected signal. A sensor that is more sensitive to small differences between an object's reflectance and that of its surroundings is better at detecting small or faint features.
  • The spatial resolution represents the capacity of a sensor to resolve the smallest feature in an image; in effect, the smallest distance (usually expressed in meters) at which separate patterns or objects can still be told apart. A simple ground sample distance sketch follows this list.
  • The temporal resolution depends on several factors: how long a satellite takes to return to approximately the same location, the extent of the sensor's footprint, and whether the sensor can be pointed off-nadir. More formally, it is called the 'revisit period.'
  • The spectral resolution describes the sensitivity of a sensor to particular parts of the frequency (wavelength) spectrum, most commonly for airborne and satellite sensors. The spectrum covered includes visible light as well as invisible portions of the electromagnetic spectrum, such as infrared. The different wavelengths reflected from objects (appearing as different colors) make it possible to distinguish features, provided they are reflected strongly enough.
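As promised above, here is a rough sketch of how spatial resolution relates to the sensor and flying height for a simple nadir-looking frame camera. The ground sample distance (GSD) is the ground footprint of a single pixel; all numbers are illustrative.

```python
def ground_sample_distance(pixel_size_m, focal_length_m, altitude_m):
    """GSD = pixel size * flying height / focal length (similar triangles)."""
    return pixel_size_m * altitude_m / focal_length_m

# 4.6 micrometer pixels, 35 mm lens, flying 120 m above ground
gsd = ground_sample_distance(4.6e-6, 0.035, 120.0)
print(f"GSD is about {gsd * 100:.1f} cm per pixel")   # about 1.6 cm per pixel
```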
I hope the explanation above has clarified the difference between photogrammetry and remote sensing.

Basis of photogrammetry

The basis of photogrammetry is perspective geometry. At the simplest level, you can treat the camera lens as a center of perspective, with light rays passing in straight lines through that center onto the image. If the camera is properly calibrated, i.e., if we know the focal length, then we can work out the angle of the ray corresponding to each image point. Given as few as 3 points in the scene with known 3D coordinates, the position and orientation of the perspective center can be determined (often called resection). Alternatively, the camera location can be measured directly, for example from GPS data.
Knowing the position and orientation of the image enables us to determine, for each image point, the 3D direction vector of the light ray that passes through the perspective center.
When there are two or more overlapping images, the 3D vectors from each image can be intersected to define the 3D location of points in the scene. Our brains perform this intersection whenever we view a stereo pair or a 3D movie. In computer vision, too, combining more than two images generally produces better results.
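The step of turning an image measurement into a 3D viewing ray can be sketched as below, assuming a simple pinhole camera whose position and orientation (as a rotation matrix) are already known; names and conventions are illustrative, not from the original article.

```python
import numpy as np

def viewing_ray(x_mm, y_mm, focal_mm, camera_center, rotation):
    """Return (origin, unit direction) of the light ray through an image point.

    x_mm, y_mm    : image coordinates relative to the principal point
    focal_mm      : calibrated focal length
    camera_center : 3D position of the perspective center
    rotation      : 3x3 matrix rotating camera coordinates into world coordinates
    """
    direction_cam = np.array([x_mm, y_mm, -focal_mm])      # ray in the camera frame
    direction_world = rotation @ direction_cam              # rotate into the world frame
    return camera_center, direction_world / np.linalg.norm(direction_world)

# A camera 100 m up, looking straight down (identity rotation for simplicity)
origin, d = viewing_ray(2.0, -1.5, 35.0, np.array([0.0, 0.0, 100.0]), np.eye(3))
print(origin, d)
```

Rays like this one, formed from two or more overlapping photos, can then be intersected (as in the earlier sketch) to recover the 3D point.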
That covers how photogrammetry and remote sensing work.
Let us know your comments.

Source: https://blog.nibt.education/2018/03/working-of-photogrammetry-and-remote-sensing/

