Lecture 7
Definition of Aerial Photograph 
Aerial photography is defined as the science of making photographs from aircraft for studying the earth's surface. Aerial photographs of the earth's surface are taken using a variety of platforms such as balloons, rockets, aircraft and satellites. Aerial photography was the first method of remote sensing, and even today, in the age of satellites and electronic scanners, aerial photographs remain the most widely used type of remotely sensed data. The popularity of aerial photographs is due to six characteristics, namely:
Characteristics of Good Aerial Photographs 
1.     Availability: Aerial photographs are readily available at a range of scales for much of the world.
2.     Economy: Aerial photographs are cheaper than field surveys and are often cheaper and more accurate than maps for many countries of the world.
3.     Synoptic viewpoint: Aerial photographs enable the detection of small-scale features and spatial relationships that would not be evident on the ground. 
4.     Time freezing ability: An aerial photograph is a record of the Earth's surface at one point in time and can therefore be used as a historical record.
5.     Spectral and spatial resolution: Aerial photographs are sensitive to radiation in wavelengths that are outside the spectral sensitivity range of the human eye, as they can record both ultraviolet (0.3-0.4 µm) and near-infrared (0.7-0.9 µm) radiation. They can also resolve objects beyond the spatial resolving power of the human eye.
6.     Three-dimensional perspective: A stereoscopic view of the Earth's surface can be created and measured both horizontally and vertically, a characteristic that is lacking in the majority of remotely sensed images.
Uses of Aerial Photographs 
The main uses of aerial photography are pictorial representation (i.e. mosaics), photo-interpretation and photographic survey. In almost all natural resources studies, air photographs are used as basic material and therefore play an important role.
Aerial photography is valuable for its faithful reproduction of terrain, the unbroken continuity of its tonal relationships and its meticulous minute detail. A good air photograph has to achieve a certain standard in the accuracy of its geometrical properties and tonal relationships, and must record details of the smallest size perceptible from the camera station. The aerial photograph is the result of the combined scientific and productive efforts of:
1.     Optical lens 
2.     Camera 
3.     Photographic materials
4.     Aeroplane
5.     Navigator 
6.     Camera operator
7.     Photo laboratory workers.
Stages of Aerial Photography 
The various stages in aerial photography and the production of photographic prints are listed below.

1. Survey aircraft
2. Pilot and navigator
3. Flight planning and flight map
4. Aerial camera and its suspension
5. Actual flight
6. Illuminated terrain, atmosphere, camera lens, filter
7. Light-sensitive emulsion on aerial film or glass plate
8. Exposure
9. Formation of latent image
10. Development, fixing, washing and drying of the negative
11. Photographic paper contact or diapositive prints
Aerial photography in India is controlled and coordinated by the Survey of India and flown by a flying agency. Once the scale and type of photography are indicated, the Survey of India designs the photographic specifications and places the order for photography with one of three flying agencies, viz. 1) the Indian Air Force, 2) M/s Air Survey Company (Pvt.) and 3) the National Remote Sensing Agency (NRSA), Hyderabad.
Some of the factors which influence the image quality of the photographs, together with their principal characteristics, are given below.

Ground detail: size, light distribution, shade, colour
Atmosphere: haze
Aircraft window: light scatter and loss, optical flatness
Aircraft enclosure: temperature and pressure
Camera and its mounting: vibration and steadiness
Aerial camera: calibration and rigidity of lens, shutter and magazine assembly
Filter: light scatter and loss, spectral transmission, optical flatness
Camera lens: aperture, illumination, diffusion, light loss and scatter, distortion, and aberration
Camera shutter: efficiency and mechanical shock
Focal plane: flatness
Negative emulsion (film or glass plate): speed, contrast, spectral sensitivity, diffusion and exposure
Negative base (film or glass plate): spread of the photographic image due to reflection from the negative base
Processing: contrast, speed, definition, and dimensional stability
Printing: definition, contrast and dimensional stability.
Visual Image Interpretation 
When image interpretation is performed manually (visually), a human interpreter interprets the image. The image used in such analysis is in a pictorial or photographic form, i.e. an ANALOG image. Photographic sensors produce analog images as continuous variations of reflected energy.
Remote sensing images are also represented in digital form. Digital processing and analysis are performed using a computer. A digital image is composed of small areas known as picture elements (PIXELS) arranged in a matrix form. Each pixel location is assigned a number known as the Digital Number (DN), which represents the brightness of the corresponding small area on the earth's surface.
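As a simple illustration of this idea (a hypothetical example, not tied to any particular sensor), the short Python sketch below represents a tiny digital image as a matrix of digital numbers; the array values are invented purely for demonstration.

```python
import numpy as np

# A tiny 4 x 4 "digital image": each element is a Digital Number (DN)
# representing the brightness of one pixel (values are invented).
image = np.array([
    [ 12,  15,  18,  20],
    [ 30,  45,  60,  62],
    [ 90, 120, 160, 158],
    [200, 220, 240, 255],
], dtype=np.uint8)

row, col = 2, 1                                    # pixel location (line, column)
print("DN at row 2, column 1:", image[row, col])   # -> 120
print("Image dimensions:", image.shape)            # -> (4, 4)
```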
When we view a two-dimensional image, we cannot sense the depth of the scene. We are also used to seeing objects in a horizontal (side) view, whereas imagery presents a vertical (overhead) view. Another difficulty is that we can see only visible wavelengths, so imagery recorded outside the visible range is difficult to interpret.
Visual image interpretation techniques can be used on Landsat, airborne or radar images. This has the advantage of being relatively simple and inexpensive. Each Landsat scene, which is near-orthographic and covers about 3.5 million hectares, gives a synoptic view of soil associations. The influence of climate, vegetation, topography and parent materials on soils can be observed distinctly on Landsat scenes.

Elements of Visual Interpretation 
There are eight elements of visual interpretation used to identify objects. The factors involved in identifying an object are:
1.     Tone or colour 
2.     Texture 
3.     Shape 
4.     Size 
5.     Pattern 
6.     Shadow 
7.     Association 
8.     Site 
Tone 
          Tone refers to the relative brightness or colour of objects in an image. Since different objects reflect differently, they appear as light or dark tones on imagery. For example, two fields with different crops will have different colours, depending on their reflectivity.
Texture 
          Texture refers to the arrangement and frequency of tonal changes in particular areas of an image. When the brightness values change abruptly in a small area, the texture is rough, whereas smooth-textured surfaces have very little tonal variation. Smooth textures result from uniform or even surfaces such as agricultural fields, and rough textures from irregular surfaces such as forests.
Shape 
          Shape refers to the general form of objects. Shape is a distinctive clue for the identification of objects. Natural features such as mountains are irregular in shape, whereas man-made objects such as stadiums and cricket fields have regular shapes.
Size 
          Size refers to the scale of an image. In order to identify objects quickly, their size relative to other objects in the scene must be considered. For example, large buildings in an image would suggest commercial factories and warehouses, whereas small buildings would suggest residential houses.
Pattern 
          Pattern refers to the spatial arrangement of objects. It is an orderly repetition of similar colours and texture. For example, orchards have evenly spaced trees with roads in between, whereas urban areas have regularly spaced houses. 
Shadow
          The shadows of tall objects help in interpretation. However, shadows also hinder image interpretation because objects within shadows are not visible.
Site 
          Site refers to topographic or geographic location and is important in the identification of vegetation types. For example, certain tree species would be expected to occur on well-drained upland sites, whereas other trees occur on lowland sites.
Association or location of objects 
          This refers to the relationship between the objects and their location. For example, factories can be associated with highways, whereas schools can be associated with residential areas. 
Digital Image Analysis 

          Digital image processing involves the manipulation and interpretation of digital images with the help of a computer. It includes geocoding and georeferencing with a proper coordinate and projection system. There are many advantages of digital image processing as compared to visual interpretation, such as better visualization, easier cartographic facilities, flexibility in editing of data, and area estimation. The digital image processing system is composed of two parts:
Hardware and Software 
          Hardware refers to the physical components that make up the system, and software refers to the set of programmes written in a computer programming language for a particular application. The minimum hardware and some of the software used for image processing and geospatial analysis are listed below:
Table 8.1: Hardware and software used for the study

Sr. No.   Hardware                           Software packages
1         Personal computer                  ILWIS (Integrated Land and Water Information System)
2         Plotter                            -
3         Desk jet printer                   ArcInfo, ArcView, ERDAS
4         Global positioning system (GPS)    IMAGINE, IDRISI, ENVI, GRASS, IDIMS, ELAS, GYPSY, ERIPS, SMIPS
5         Digitizer and scanner              EASI/PACE, IDRISI for data procurement, GPS software, ArcPad
Georeferencing 
          Remotely sensed images in raw format contain no reference to the location of the data. In order to integrate these data with other data in a GIS, it is necessary to correct and adapt them geometrically.
          Remote sensing data are affected by geometric distortions due to many factors such as sensor geometry, scanner and platform instabilities, earth rotation, earth curvature, etc. These can be corrected by referencing the image to existing maps.
Geocoding
           Transformation of an image which results in a new image with the pixels stored in a new line and column geometry is known as geocoding. Geocoding is used to correct the geometry of the georeferenced image so that a distortion-free image is obtained.
Digital Image Processing 
          Image processing can be categorized into three main functions:
1. Image preprocessing
2. Image enhancement
3. Image classification
Image preprocessing
          Image preprocessing refers to the preliminary operations carried out before the main analysis. It involves the removal of errors introduced during imaging, so that the image resembles the original scene. Preprocessing operations are grouped into two:
1. Radiometric error correction
2. Geometric error correction 
Radiometric error correction 
          Radiometric corrections are necessary to remove variations in scene illumination, atmospheric conditions, and sensor noise and response.
·        Variations in illumination and viewing geometry between images can be corrected by establishing the geometric relationship between the area imaged and the sensor.
·         Sensor noise may be introduced into an image due to irregularities that occur in the sensor or in data recording and transmission. Common forms of noise are banding and dropped lines. Banding can be corrected by comparison with other lines of data. Dropped lines occur when there is no response from the sensor and the data are lost during transmission. They are corrected by replacing the line with the pixel values of the line above or below, or with the average of the two, as sketched after this list.
·         Atmospheric conditions change and reduce the illumination of the scene. Scattering removes some of the energy illuminating the surface and some of the energy travelling from the surface to the sensor.
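A minimal Python/NumPy sketch of the dropped-line repair described above, assuming the image is already held as an array and the index of the faulty scan line is known (both are assumptions made for illustration only):

```python
import numpy as np

def repair_dropped_line(image, bad_row):
    """Replace a dropped (no-response) scan line with the average of the
    lines immediately above and below it."""
    fixed = image.astype(float).copy()
    fixed[bad_row] = (fixed[bad_row - 1] + fixed[bad_row + 1]) / 2.0
    return fixed.astype(image.dtype)

# Example: a 5 x 5 image in which scan line 2 was lost (recorded as zeros).
img = np.arange(25, dtype=np.uint8).reshape(5, 5)
img[2] = 0
print(repair_dropped_line(img, bad_row=2))
```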
          The correction procedure is complex, because it involves the detailed modeling of atmospheric conditions during data acquisition. 
Geometric error correction 
          Remote sensing data involve a number of geometric distortions which occur due to several reasons, such as the rotation and curvature of the earth, the motion of the scanning system and satellite, and the satellite altitude and velocity. Image rectification or geometric registration is a process by which the geometry of an image is transformed to a known coordinate system.
The image rectification process (IRP) involves
1.     Identification of ground control points and
2.      Resampling. 
The resampling procedure determines the digital values for the new pixel locations in the corrected image.
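The two rectification steps can be illustrated with the hedged NumPy sketch below: an affine transformation is fitted to a few ground control points by least squares, and the corrected image is then filled by nearest-neighbour resampling. The control points, map coordinates and image are invented for demonstration; operational rectification is normally carried out in image-processing software.

```python
import numpy as np

# Ground control points linking raw image pixels (col, row) to map
# coordinates (x, y); all values below are invented for this example.
img_pts = np.array([[10, 10], [200, 15], [20, 180], [210, 190]], dtype=float)
map_pts = np.array([[500100.0, 799900.0], [500290.0, 799880.0],
                    [500110.0, 799730.0], [500300.0, 799710.0]])

# Step 1: fit an affine transform from map space back to image space
# (the direction needed for resampling) by least squares.
design = np.hstack([map_pts, np.ones((len(map_pts), 1))])
coeffs, *_ = np.linalg.lstsq(design, img_pts, rcond=None)      # 3 x 2 matrix

# Step 2: nearest-neighbour resampling onto a small corrected grid.
raw = np.random.randint(0, 256, size=(200, 220)).astype(np.uint8)  # dummy raw image
xs = np.linspace(500100, 500300, 50)        # map x of output columns
ys = np.linspace(799900, 799710, 50)        # map y of output rows
corrected = np.zeros((len(ys), len(xs)), dtype=raw.dtype)

for i, y in enumerate(ys):
    for j, x in enumerate(xs):
        col, row = np.array([x, y, 1.0]) @ coeffs   # source location in raw image
        r, c = int(round(row)), int(round(col))
        if 0 <= r < raw.shape[0] and 0 <= c < raw.shape[1]:
            corrected[i, j] = raw[r, c]             # copy DN of the nearest raw pixel
```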
Image Enhancement 
          Image enhancement is a digital technique to improve the appearance of an image for human visual analysis and machine analysis.
          The enhancement of an image is necessary because, in remote sensing, reflected or emitted energy from different earth surface materials is recorded. Under ideal conditions, one material reflects a large amount of energy at a certain wavelength while another reflects very little energy in the same wavelength, so the objects appear as distinct bright and dark areas. In practice, however, different materials often reflect similar amounts of energy in the recorded wavelength regions, resulting in similar tones; such an image is known as a low-contrast image.
There are two contrast enhancement techniques:
 Linear contrast enhancement 
          In this technique the original values are expanded to make use of the full range of the output device. The lowest value in the input image is assigned to black (a value of 0) and the highest value to white (a value of 255). All intermediate values are linearly distributed between these two extremes.
Nonlinear contrast enhancement 
          In non-linear contrast enhancement the input and output values are not linearly related; for example, they may be transformed logarithmically.
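Both stretches can be written compactly as below; this is a generic NumPy sketch using an invented low-contrast band, not the routine of any particular image-processing package.

```python
import numpy as np

def linear_stretch(dn):
    """Linearly map the input DN range onto the full 0-255 output range."""
    dn = dn.astype(float)
    lo, hi = dn.min(), dn.max()
    return ((dn - lo) / (hi - lo) * 255).astype(np.uint8)

def log_stretch(dn):
    """Non-linear (logarithmic) stretch: expands dark values, compresses bright ones."""
    dn = dn.astype(float)
    out = np.log1p(dn - dn.min())
    return (out / out.max() * 255).astype(np.uint8)

band = np.random.randint(60, 110, size=(4, 4))   # a low-contrast example band
print(linear_stretch(band))
print(log_stretch(band))
```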
Spatial filtering 
Spatial filtering is a technique to highlight or suppress specific features in an image based on their spatial frequency. Spatial frequency is defined as the number of changes in brightness values per unit distance in any part of the image. An area having very few changes in brightness values is known as a low-frequency area, whereas in a high-frequency area the brightness values change abruptly. Filtering is done through a procedure known as convolution.
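A minimal convolution sketch, assuming a 3 x 3 averaging (low-pass) kernel and SciPy's generic convolution routine; a high-pass kernel is applied in the same way to emphasise abrupt brightness changes.

```python
import numpy as np
from scipy.ndimage import convolve

image = np.random.randint(0, 256, size=(6, 6)).astype(float)   # dummy band

# 3 x 3 low-pass (mean) kernel: smooths high-frequency detail.
low_pass = np.ones((3, 3)) / 9.0
smoothed = convolve(image, low_pass, mode="nearest")

# Simple high-pass kernel: highlights edges (abrupt brightness changes).
high_pass = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]], dtype=float)
edges = convolve(image, high_pass, mode="nearest")
```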
Band ratioing (vegetation indices)
          Band ratioing is a technique in which differences in the response of a surface due to different seasons and illumination are reduced. Due to different seasons, topographic conditions and changes in sunlight, the brightness values of the same surface change, causing problems in identifying objects.
          This technique also highlights variations in the response of different surfaces. For example, healthy vegetation reflects a large amount of energy in the NIR portion of the EMS while it absorbs energy in the RED wavelength region. Other surfaces like soil and water have almost similar reflectance in both the NIR and RED portions. Therefore, the ratio of reflectance in the NIR to reflectance in the RED gives a high value for vegetation and a value of about 1.0 for soils and water. This differentiates vegetation from other surfaces.
          It also becomes possible to identify areas of unhealthy vegetation, which will have a lower ratio value than that of healthy vegetation.
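The idea can be expressed either as a simple NIR/RED ratio or as the widely used normalised difference vegetation index (NDVI). The sketch below assumes the NIR and red bands are already available as reflectance arrays; the values are invented.

```python
import numpy as np

def band_ratio(nir, red):
    """Simple NIR / RED ratio: well above 1 for healthy vegetation,
    close to 1 for soil and water."""
    return nir / np.where(red == 0, 1e-6, red)

def ndvi(nir, red):
    """Normalised Difference Vegetation Index, in the range -1 to +1."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / np.where((nir + red) == 0, 1e-6, nir + red)

# Invented reflectances for a vegetation, a bare-soil and a water pixel.
nir = np.array([0.50, 0.30, 0.05])
red = np.array([0.08, 0.25, 0.04])
print(band_ratio(nir, red))   # roughly 6.3, 1.2, 1.3
print(ndvi(nir, red))         # high for vegetation, near zero for soil and water
```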
Principal Component Analysis (PCA) 
          Principal component analysis (PCA) is a technique which reduces the number of bands in the data and compresses as much information from the original bands as possible into fewer bands. Interpretation and analysis of these fewer bands of data is simpler and more accurate than trying to use all the bands. The compressed bands are called components; hence the name principal component analysis.
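A hedged sketch of PCA applied to a multiband image, using an eigendecomposition of the band covariance matrix; random numbers stand in for real band data.

```python
import numpy as np

# Pretend multiband image: 4 bands of 100 x 100 pixels (random stand-in data).
bands, rows, cols = 4, 100, 100
image = np.random.rand(bands, rows, cols)

# Arrange pixels as rows and bands as columns, then centre each band.
X = image.reshape(bands, -1).T
X = X - X.mean(axis=0)

# Eigendecomposition of the band-to-band covariance matrix.
cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]                 # largest variance first

# Keep the first two principal components as the "compressed" bands.
components = (X @ eigvecs[:, order[:2]]).T.reshape(2, rows, cols)
print("Proportion of variance retained:", eigvals[order[:2]].sum() / eigvals.sum())
```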
Image classification 
False colour composite (FCC)
          This is the first step of the image classification process. The spectral information stored in the separate bands can be integrated by combining them into a colour composite. The spectral information is combined by displaying each individual band in one of the three primary additive colours: blue, green and red. A specific combination of bands used to create a colour composite image is called a false colour composite. In an FCC, the red colour is assigned to the NIR band, the green colour to the red band, and the blue colour to the green band. For example, green vegetation will appear reddish, water will appear bluish and bare soil in shades of brown and gray in the imagery.
          The image generated by superposing remote sensing measurements in the blue, green and red bands is known as a True Colour Composite (TCC), whereas other combinations of colour filters and spectral band images are known as False Colour Composites (FCC). This is done to improve visual perception by assigning blue, green and red (BGR) to observations in the green, red and near-infrared spectral bands respectively. Thus, in an FCC the blue colour is assigned to the green band, the green colour to the red band and the red colour to the NIR band.
·        Vegetation appears red in an FCC. Vegetation generally reflects predominantly in the NIR region as compared to the green and red regions. Hence vegetation appears red in an FCC due to the assignment of the NIR band to the red colour.
·        Water appears bluish in FCC. The sky blue or dark blue can be differentiated depending on the depth and concentration of sediments in water. 
·        Bare soil appears in shades of brown or gray in an FCC.
·        Agriculture and forest appear pink to deep red depending on leaf greenness, owing to their strong NIR reflectance.
·        The ice, snow and clouds appear white in FCC. 
·        Human settlements and cities appear gray in an FCC.
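The band-to-colour assignment described above can be sketched as follows, assuming the green, red and NIR bands are already loaded as 2-D arrays scaled to 0-255; in the resulting RGB stack, a strong NIR response shows up as red tones.

```python
import numpy as np

def false_colour_composite(green, red, nir):
    """Standard FCC assignment: NIR -> red gun, red -> green gun, green -> blue gun."""
    return np.dstack([nir, red, green]).astype(np.uint8)

# Invented single-pixel example with the strong NIR response typical of vegetation.
green = np.array([[60]]); red = np.array([[40]]); nir = np.array([[220]])
fcc = false_colour_composite(green, red, nir)
print(fcc[0, 0])   # [220  40  60] -> displayed as a reddish (vegetation-like) pixel
```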
Methods of image classification 
          Image classification is a very important and necessary step in the processing of digital data. In this technique, similar pixels are grouped into "classes". Without classification it is difficult to identify earth features accurately.
          We are used to categorizing objects by labels describing them as forest, agricultural field, river, residential building, etc.; we are not used to calling areas by numbers, as is the case with digital images. Hence digital image classification is the process of assigning pixels to classes. Each pixel in a digital image is treated as an individual unit having values in different wavelength regions (spectral bands).
There are two methods of digital image classification
1. Supervised classification
2. Unsupervised classification
          A coloured image is first classified into groups of colours called clusters; then, after collecting ground information, these clusters are used for supervised classification.
Supervised classification 
          Supervised classification is the process of using pixels of known identity, that is, pixels which are already assigned to some informational class, to classify pixels whose identity is not known. There are six stages in the classification process:
1. Define training sites
2. Extract signatures
3. Classify the image
4. In-process classification assessment (IPCA)
5. Generalization
6. Accuracy assessment
The description of these processes is very lengthy and beyond the scope of this book, and hence is not included.
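Although the full procedure is beyond the scope of these notes, the flavour of supervised classification can be conveyed with a minimal minimum-distance-to-means sketch: mean spectral signatures are taken from training pixels and every other pixel is assigned to the nearest mean. The class names and spectral values below are invented.

```python
import numpy as np

# Training signatures: mean DN in three bands for each known (training) class.
signatures = {
    "water":      np.array([30.0,  25.0,  10.0]),
    "vegetation": np.array([40.0,  35.0, 180.0]),
    "bare soil":  np.array([90.0, 100.0, 110.0]),
}

def classify(pixel):
    """Assign a pixel vector to the class with the nearest mean signature."""
    return min(signatures, key=lambda name: np.linalg.norm(pixel - signatures[name]))

print(classify(np.array([35.0, 30.0, 170.0])))   # -> "vegetation"
```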
Unsupervised classification 
          In this process there is no prior knowledge of thematic map or land cover class names such as town, village, road, etc. This classification can be defined as the identification of natural groups within the data. In this technique, the computer is required to group pixels with similar characteristics.
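A minimal unsupervised sketch in the same spirit: pixel spectra are grouped into natural clusters with k-means (here via scikit-learn, purely as an illustration; the cluster numbers still have to be labelled by the analyst afterwards using ground information). The data are random stand-ins.

```python
import numpy as np
from sklearn.cluster import KMeans

# Pretend 3-band image flattened to (pixels x bands); values are random stand-ins.
rows, cols, bands = 50, 50, 3
pixels = np.random.rand(rows * cols, bands)

# Group the pixel spectra into 4 natural clusters; no class names are known yet.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(pixels)
cluster_map = labels.reshape(rows, cols)   # thematic map of cluster numbers
print(np.bincount(labels))                 # number of pixels in each cluster
```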
          A series of computer software packages is used for the classification of images, which needs special expertise. This book deals with the theoretical aspects of the subject, and hence the description of this practical aspect has been safely excluded.
Users of remote sensing techniques 
          The following groups and departments are engaged in the use of remote sensing techniques.
1.     All India Soil and Land Use Survey (AISLS)
2.     Central Ground Water Board (CGWB)
3.     Geological Survey of India (GSI) 
4.     National Remote Sensing Agency (NRSA)
5.      National Bureau of Soil Survey and Land Use Planning (NBSS&LUP)
6.     National Institute of Oceanography (NIO)
7.     Oil and Natural Gas Commission (ONGC)
8.     Space Application Centre (SAC)
9.     Survey of India (SOI)

