
Sunday, 15 April 2018

NEURAL NETWORK USED FOR IMAGE COMPRESSION AND DECOMPRESSION


ABSTRACT
Image compression is the technique used to minimize memory space and reduce the bandwidth (high data rate) required for transmission without deteriorating image quality. Various methods and standards for image and video compression, such as JPEG, wavelet-based coding, M-JPEG and H.26x, have been proposed by researchers. Yet despite increases in mass-storage density, processor speed and digital communication system performance, the demand for data storage capacity and data-transmission bandwidth continues to outpace the capabilities of available technologies. In addition to the existing image and video compression technologies mentioned above, a method of image compression using soft computing has been proposed. A two-layer feedforward neural network will be considered and trained off-line using the Levenberg-Marquardt algorithm. The weights of the trained network's hidden and output layers are then used for image compression and decompression, respectively, for any test image. MATLAB will be used as the software tool for training the neural network and for image compression and decompression. Performance parameters such as compression efficiency, algorithm complexity and image quality will be analyzed for both compression and decompression.
CHAPTER ONE
INTRODUCTION
1.1 BACKGROUND OF THE STUDY
Direct transmission of video data requires a high-bit-rate (high-bandwidth) channel. When such a high-bandwidth channel is unavailable or not economical, compression techniques have to be used to reduce the bit rate while ideally maintaining the same visual quality. Similar arguments apply to storage media, where the concern is memory space. Video sequences contain a significant amount of redundancy within and between frames, and it is this redundancy that allows them to be compressed. Within each individual frame, the values of neighboring pixels are usually close to one another. This spatial redundancy can be removed from the image without degrading picture quality using “intra-frame” techniques.
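The spatial redundancy described above can be made concrete with a short Python sketch (illustrative pixel values only, not part of the MATLAB project): differencing neighboring pixels of a smooth scanline produces a signal with a much smaller range than the raw values, which is cheaper to encode.

```python
# A smooth scanline: neighboring pixels are close in value, so the
# difference (prediction-error) signal has a much smaller range than
# the raw pixel values and can be coded with fewer bits.
row = [100, 102, 101, 103, 106, 105, 107, 110]
diffs = [b - a for a, b in zip(row, row[1:])]

print(diffs)                                   # [2, -1, 2, 3, -1, 2, 3]
print(max(row) - min(row))                     # raw range: 10
print(max(diffs) - min(diffs))                 # difference range: 4
```

This is the basic idea behind intra-frame predictive coding: encode each pixel's small deviation from its neighbor rather than its full value.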
A. Principles of Image Compression
The principles of image compression are based on information theory. The amount of information that a source produces is its entropy. The amount of information one receives from a source is equivalent to the amount of uncertainty that is removed. A source produces a sequence of variables from a given symbol set; for each symbol there is a product of the symbol's probability and its logarithm, and the entropy is the negative sum of these products over all the symbols in the set. Compression algorithms are methods that reduce the number of symbols used to represent source information, thereby reducing the amount of space needed to store the source information or the amount of time necessary to transmit it over a channel of given capacity. The mapping from the source symbols into fewer target symbols is referred to as compression, and the reverse mapping as decompression.
Image compression refers to the task of reducing the amount of data required to store or transmit an image. At the system input, the image is encoded into its compressed form by the image coder. The compressed image may be subjected to further digital processing, such as error control coding, encryption or multiplexing with other data sources, before being used to modulate the analog signal that is actually transmitted through the channel or stored in a storage medium. At the system output, the image is processed step by step to undo each of the operations that were performed on it at the system input. At the final step, the image is decoded into its original uncompressed form by the image decoder. If the reconstructed image is identical to the original image, the compression is said to be lossless; otherwise, it is lossy.
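The entropy defined above can be computed directly. The following is a minimal Python sketch (the project itself uses MATLAB); it evaluates H = -Σ p·log2(p) over the observed symbol frequencies.

```python
import math
from collections import Counter

def entropy(symbols):
    """Shannon entropy in bits/symbol: H = -sum(p_i * log2(p_i))."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A skewed source carries less information per symbol than a uniform one,
# so it can be represented with fewer bits on average.
print(entropy("aaab"))  # ≈ 0.811 bits/symbol
print(entropy("abcd"))  # 2.0 bits/symbol
```

The uniform four-symbol source needs the full 2 bits per symbol; the skewed source can in principle be coded with about 0.81 bits per symbol, which is exactly the headroom a compression algorithm exploits.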
B. Performance Measurement of Image Compression
There are three basic measures of performance for an image compression algorithm.
1) Compression Efficiency:
It is measured by the compression ratio, defined as the ratio of the size (number of bits) of the original image data to the size of the compressed image data.
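As a quick illustration of the ratio just defined (the numbers below are hypothetical, not results from the project), in Python:

```python
def compression_ratio(original_bits, compressed_bits):
    """Original size over compressed size; higher means better compression."""
    return original_bits / compressed_bits

# Hypothetical example: a 256x256, 8-bit grayscale image (524288 bits)
# compressed down to 65536 bits.
original = 256 * 256 * 8
compressed = 65536
print(compression_ratio(original, compressed))  # 8.0, i.e. 8:1 compression
```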
2) Complexity:
The number of data operations required to perform the encoding and decoding processes measures the complexity of an image compression algorithm. These data operations include additions, subtractions, multiplications, divisions and shift operations.
3) Distortion Measurement (DM):
For a lossy compression algorithm, DM is used to measure how much information has been lost when a reconstructed version of a digital image is produced from the compressed data. The most common distortion measure is the mean-square error (MSE) between the original and reconstructed data. The signal-to-noise ratio (SNR) is also used to measure the performance of lossy compression algorithms.
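Both measures are straightforward to compute. The following Python sketch (toy pixel values, not project data) computes the MSE and a peak-signal-to-noise-ratio style figure in decibels:

```python
import math

def mse(original, reconstructed):
    """Mean-square error between two equal-length pixel sequences."""
    return sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / len(original)

def psnr(original, reconstructed, peak=255):
    """Peak signal-to-noise ratio in dB; higher means a closer match."""
    m = mse(original, reconstructed)
    return float("inf") if m == 0 else 10 * math.log10(peak ** 2 / m)

orig = [10, 20, 30, 40]     # original pixel values
recon = [12, 18, 30, 44]    # reconstructed after lossy compression

print(mse(orig, recon))     # 6.0
print(psnr(orig, recon))    # ≈ 40.3 dB
```

A perfect (lossless) reconstruction gives MSE = 0 and an infinite PSNR; typical lossy image codecs land in roughly the 30-50 dB range.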
C. Image Compression Techniques
Still images are simple and easy to send. However, it is difficult to obtain single images from a compressed video signal. A video compression scheme uses less data to send or store a sequence of images, but it is not possible to reduce the frame rate when using video compression. Sending single images is easier when using a modem connection or, in general, a narrow-bandwidth channel.
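The neural-network approach outlined in the abstract — a two-layer feedforward network whose hidden-layer weights compress an image block and whose output-layer weights reconstruct it — can be sketched as follows. This is a minimal NumPy illustration of the data flow only: the block size, hidden size and random weights are assumptions for the example, whereas in the project the weights come from offline Levenberg-Marquardt training in MATLAB.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions: 8x8 image blocks (64 pixels) squeezed to 16 units.
n_in, n_hidden = 64, 16

# Random weights for illustration only; the project obtains these by
# off-line Levenberg-Marquardt training of the two-layer network.
W_enc = rng.standard_normal((n_hidden, n_in)) * 0.1   # hidden-layer weights
W_dec = rng.standard_normal((n_in, n_hidden)) * 0.1   # output-layer weights

def compress(block):
    """Hidden layer: 64 pixels -> 16 coefficients (the compressed code)."""
    return np.tanh(W_enc @ block)

def decompress(code):
    """Output layer: 16 coefficients -> 64 reconstructed pixels."""
    return W_dec @ code

block = rng.random(n_in)        # one normalized 8x8 block, flattened
code = compress(block)
recon = decompress(code)
print(code.shape, recon.shape)  # (16,) (64,)
```

Because only the 16 hidden-layer activations need to be stored or transmitted per 64-pixel block, the network itself acts as the codec: the encoder keeps the hidden-layer weights, the decoder keeps the output-layer weights.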



Chapters: 1 - 5
Delivery: Email
Number of Pages: 75

Price: 3000 NGN
In Stock


 
