
The Segmentation Fusion Method on 10 Multi-Sensors

Abstract: The most significant problems in image fusion are the undesirable effects on the spectral signatures of the fused images, and the fact that the benefits of fusion have mostly been demonstrated for source images acquired at the same time by a single sensor; such results may or may not carry over to the fusion of other images. It therefore becomes increasingly important to investigate techniques that allow multi-sensor, multi-date image fusion, so that final conclusions can be drawn on the most suitable fusion method. In this study, a new Segmentation Fusion (SF) method for remotely sensed images is presented that takes the physical characteristics of the sensors into account and uses a feature-level processing paradigm. In particular, the performance of the proposed method is tested on 10 multi-sensor images and compared with different fusion techniques, estimating the quality and degree of information improvement quantitatively by using various spatial and spectral metrics.
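As an illustration of how such a quantitative evaluation can be carried out, the sketch below computes two commonly used spectral metrics, the per-band correlation coefficient and the RMSE, between a fused image and a reference MS image. The array names, shapes and synthetic test data are assumptions for illustration only and are not taken from the paper.

```python
import numpy as np

def correlation_coefficient(band_fused, band_ref):
    """Pearson correlation between a fused band and its reference MS band."""
    f = band_fused.astype(np.float64).ravel()
    r = band_ref.astype(np.float64).ravel()
    return np.corrcoef(f, r)[0, 1]

def rmse(band_fused, band_ref):
    """Root-mean-square error between a fused band and its reference MS band."""
    diff = band_fused.astype(np.float64) - band_ref.astype(np.float64)
    return np.sqrt(np.mean(diff ** 2))

if __name__ == "__main__":
    # Hypothetical data: a 4-band reference MS image and a noisy stand-in
    # for a fused product, each of shape (bands, rows, cols).
    rng = np.random.default_rng(0)
    ms_ref = rng.integers(0, 256, size=(4, 128, 128)).astype(np.float64)
    fused = ms_ref + rng.normal(0, 5, size=ms_ref.shape)
    for b in range(ms_ref.shape[0]):
        print(f"band {b}: CC={correlation_coefficient(fused[b], ms_ref[b]):.3f}, "
              f"RMSE={rmse(fused[b], ms_ref[b]):.2f}")
```

A correlation coefficient close to 1 and a low RMSE indicate that the fused band preserves the spectral content of its reference band.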

Keywords: Segmentation Fusion, spectral metrics, spatial metrics, Image Fusion, Multi-sensor.

I. INTRODUCTION

Most recent remote sensing systems, such as Landsat, SPOT, IKONOS, QuickBird, Formosat, GeoEye or OrbView, provide data at different spatial, temporal, radiometric and spectral resolutions, and deliver panchromatic (PAN) images at a higher spatial resolution than their multispectral (MS) mode. Imaging systems thus offer a trade-off between high spatial and high spectral resolution, and no single system offers both [1]. Fusion remains challenging for many reasons, such as varying application requirements, the complexity of the landscape, and the temporal and spectral variations within the input data set [2]. Image fusion has therefore become a powerful solution, providing an image that contains the spectral content of the original MS images with enhanced spatial resolution [3]. Image fusion techniques are commonly divided into three processing levels, namely pixel level, feature level and decision level of representation. A large number of fusion or pan-sharpening techniques have been suggested to combine MS and PAN images, with the promise of minimizing color distortion while retaining the spatial improvement of standard data fusion algorithms.
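To make the pixel-level fusion idea concrete, the sketch below implements the Brovey transform, a standard baseline pan-sharpening technique (not the Segmentation Fusion method proposed in this paper): each MS band, resampled to the PAN grid, is rescaled by the ratio of the PAN intensity to the sum of the MS bands. The function name and the synthetic input arrays are assumptions used only for illustration.

```python
import numpy as np

def brovey_fusion(ms, pan, eps=1e-6):
    """
    Pixel-level Brovey-transform pan-sharpening (a standard baseline).

    ms  : array of shape (bands, rows, cols), already resampled to the PAN grid
    pan : array of shape (rows, cols)

    Each MS band is multiplied by the ratio of the PAN value to the per-pixel
    sum of the MS bands, injecting the PAN spatial detail into every band.
    """
    ms = ms.astype(np.float64)
    pan = pan.astype(np.float64)
    intensity = ms.sum(axis=0) + eps          # per-pixel sum of MS bands
    ratio = pan / intensity                   # spatial-detail injection factor
    return ms * ratio[np.newaxis, :, :]       # broadcast the ratio over bands

if __name__ == "__main__":
    # Hypothetical 4-band MS image and co-registered PAN image.
    rng = np.random.default_rng(1)
    ms = rng.integers(0, 256, size=(4, 64, 64))
    pan = rng.integers(0, 256, size=(64, 64))
    fused = brovey_fusion(ms, pan)
    print(fused.shape)  # (4, 64, 64)
```

Ratio-based methods of this kind tend to sharpen spatial detail at the cost of some spectral distortion, which is exactly the trade-off that the metrics discussed above are meant to quantify.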
