Image Compression Techniques: Literature Review

With the development of modern communication technology, data compression has become increasingly important for saving storage space and reducing transmission costs. Because of this, a wide range of image compression techniques has been proposed; this review surveys the main lossless and lossy methods and summarizes recent studies in the field.


Introduction
The digital image is a collection of pixel values that requires a large amount of storage space and transmission bandwidth. Most images are characterized by the fact that neighboring pixels are correlated and therefore carry duplicated information [1]. The main goal is therefore to find a less correlated representation of the image [2]. Image compression aims to minimize the size of a graphics file without sacrificing image quality, so that more images can be stored in a given amount of memory and images can be delivered or downloaded over the Internet more efficiently [3].
Compression reduces the amount of data needed to represent and store a digital image by removing redundant or excess bits. Three main types of redundancy are usually identified: coding redundancy, which occurs when more bits are used than are required and fewer code words are used than are available; spatial and temporal redundancy (interpixel redundancy), which arises from the correlation between adjacent pixels, so that some information is duplicated unnecessarily across neighboring pixels; and irrelevant information (psychovisual redundancy), which consists of data that the human visual system simply ignores. Over the years, many image compression methods have been developed, and they can be broadly divided into two primary categories: lossless compression and lossy compression [2] [4].
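As a toy illustration of coding redundancy (not taken from any of the cited works), the sketch below estimates the Shannon entropy of an 8-bit grayscale image from its histogram and compares it with the 8 bits per pixel actually stored; the gap is the redundancy that a lossless coder could, at best, remove. The function name and the synthetic test image are illustrative assumptions.

```python
import numpy as np

def coding_redundancy(image_8bit: np.ndarray) -> tuple[float, float]:
    """Estimate coding redundancy of an 8-bit grayscale image.

    Returns (entropy_bits_per_pixel, redundancy_bits_per_pixel), where the
    entropy is the theoretical lower bound for any lossless code and the
    redundancy is the gap to the 8 bits/pixel actually stored.
    """
    hist = np.bincount(image_8bit.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]                          # empty bins contribute 0 to the sum
    entropy = -np.sum(p * np.log2(p))     # Shannon entropy in bits/pixel
    return entropy, 8.0 - entropy

# Example with a synthetic, highly redundant gradient image:
img = np.tile(np.arange(64, dtype=np.uint8), (64, 4))
h, r = coding_redundancy(img)
print(f"entropy = {h:.2f} bpp, coding redundancy = {r:.2f} bpp")
```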

Lossless Compression Technique
In this technique, processing is done on every single pixel, and every bit of the original data remains after the file is uncompressed, so the reconstructed image is numerically identical to the original and all of the information is completely restored; as a result, lossless compression achieves only a modest amount of compression [5] [4] [2]. Figure (1) shows an image before and after compression with a lossless technique [6].

Lossless Compression Methods
a. Chain codes: efficiently describe the boundary of rasterized shapes and can themselves be compressed further; a chain code stands for a sequence of instructions that regulates the walk along the border pixels of the examined shape [9].
b. Run-length encoding (RLE): a lossless compression method that relies on the repetition of data rather than on its statistics [10], [11]. A (length, value) pair replaces runs of data, where "value" is the repeated value and "length" is the number of repetitions [3] (a minimal sketch of this scheme is given after this list).
c. Predictive coding: the goal is to eliminate redundancy between neighboring samples; if the predictor does a good job of removing redundancy, the error pattern can be regarded as essentially random [12]. In predictive coding, previously transmitted or otherwise available data are used to predict future values, and only the difference is coded. This is done in the image (spatial) domain, which makes it somewhat more involved [2].
d. Bit-plane coding: an obvious choice for a simple data-partitioning approach; it has the advantage of being immune to context dilution and of assessing probabilities quickly [13].
e. Adaptive dictionary algorithms: employed to achieve a suitable balance between compression efficiency and computational complexity [14].
f. Entropy encoding: a lossless compression method applied to an image after it has been quantized. It allows a more efficient representation of the image, using less memory for transmission or storage [15].
g. Area coding: a more advanced, two-dimensional form of run-length coding. It is very effective and can yield higher compression ratios (CR), but because of its non-linear nature it cannot easily be implemented in hardware [16].
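As referenced in item (b) above, the following is a minimal sketch of (length, value) run-length coding; the function names and the sample row are illustrative only and are not taken from any cited scheme.

```python
def rle_encode(values):
    """Encode a 1-D sequence as (length, value) pairs, as in item (b)."""
    runs = []
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[j + 1] == values[i]:
            j += 1
        runs.append((j - i + 1, values[i]))   # (run length, repeated value)
        i = j + 1
    return runs

def rle_decode(runs):
    """Expand (length, value) pairs back into the original sequence."""
    out = []
    for length, value in runs:
        out.extend([value] * length)
    return out

row = [255, 255, 255, 0, 0, 7, 7, 7, 7]
assert rle_decode(rle_encode(row)) == row
print(rle_encode(row))   # [(3, 255), (2, 0), (4, 7)]
```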

Lossy Compression Technique
Lossy compression reduces a file by permanently eliminating certain information, so that when the file is uncompressed only part of the original information remains [4], [5]. This technique is generally used when the data loss is mostly unnoticeable to the user, as with video and sound files; for images on the Web, JPEG compression is commonly adopted. Lossy schemes are widely used because the quality of the reconstructed image is sufficient for most applications and they provide much higher compression ratios than lossless schemes; the decompressed image is not identical to the original, but it is quite close [17]. Figure (2) shows an original image and the corresponding compressed image produced by a lossy compression technique [18].

Lossy compression Methods
b. Block Truncation Coding (BTC): the image is divided into blocks, and each block is represented by a bitmap together with reconstruction values. The reconstruction value for each segment of the bitmap is the average of the values of the associated pixels in the original block. A larger block size results in a higher compression ratio but also degrades image quality.
c. Sub-band Coding (SBC): commonly used for speech and image coding. The signal is split into frequency bands, and each sub-band is encoded with its own encoder at a bit rate suited to it [19]. At the decoder, the sub-band signals are decoded, upsampled, and routed through synthesis filters; the compressed image is then produced by adding all of the components together.
d. Vector quantization (VQ): a multidimensional extension of scalar quantization. It builds a codebook, a collection of fixed-size code vectors. The image is divided into non-overlapping blocks (image vectors); for each image vector, the closest matching vector in the codebook is located, and its index in the codebook is used as the encoding of that image vector [19]. VQ-based coding methods are widely used in multimedia applications because of their fast lookup at the decoder side.
e. Fractal compression: consists of finding discrete self-similar pieces that are repeated multiple times in the image; the image is divided into domain and range blocks that cover the whole image. This technology was used to develop new fractal encoding methods that are still in use today [20] [21].
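To make the vector quantization step in item (d) concrete, the sketch below builds a small codebook with k-means and encodes 4 × 4 image blocks as codebook indices. The block size, codebook size, and the use of SciPy's k-means are illustrative assumptions; practical coders typically design the codebook with the LBG algorithm.

```python
import numpy as np
from scipy.cluster.vq import kmeans2, vq

def vq_compress(image, block=4, codebook_size=64):
    """Toy vector quantizer: 4x4 blocks -> index of the nearest codevector."""
    h, w = image.shape
    h, w = h - h % block, w - w % block            # crop to a multiple of the block size
    blocks = (image[:h, :w]
              .reshape(h // block, block, w // block, block)
              .swapaxes(1, 2)
              .reshape(-1, block * block)
              .astype(np.float64))
    codebook, _ = kmeans2(blocks, codebook_size, minit='points')
    indices, _ = vq(blocks, codebook)              # nearest-codevector search
    return indices, codebook, (h, w)

def vq_decompress(indices, codebook, shape, block=4):
    """Replace each index by its codevector and reassemble the image."""
    h, w = shape
    blocks = codebook[indices].reshape(h // block, w // block, block, block)
    return blocks.swapaxes(1, 2).reshape(h, w)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
idx, cb, shp = vq_compress(img)
rec = vq_decompress(idx, cb, shp)
```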

Subject review
In the year 2008 [22], Somasundaram and Domnic proposed a low-bit-rate still image compression technique that generates a residual codebook and compresses the VQ indices. It is a novel grayscale image compression strategy that improves image quality while keeping the bit rate low. The scheme is based on VQ, using a residual codebook to improve image quality and compression of the VQ indices to reduce the bit rate. On standard test images, the technique provides better PSNR values and is computationally cheaper than GSMVQ and JPVQ, and it compresses data faster than those two methods because it uses a smaller codebook.
In the year 2009 [23], Sadashivappa and AnandaBabu carried out a study characterizing a larger collection of wavelet functions for use in a SPIHT-based still image compression system. The study examines the key properties of the wavelet functions and filters used in MATLAB to convert images into wavelet coefficients for sub-band coding. To assess image quality objectively, the peak signal-to-noise ratio (PSNR) and its variation with bit rate were used. The impact of various parameters on different wavelet functions is investigated, and the results serve as a useful guide for developers of wavelet-based coders.
In a paper published in 2010 [6], Kharate and Patil addressed the appropriate selection of the mother wavelet based on the nature of the image, which significantly improved image quality and compression ratio. They propose a compression technique based on the best wavelet-packet tree with an improved run-length encoding, driven by threshold entropy. Because the whole tree is not decomposed, the suggested technique reduces the time complexity of wavelet-packet decomposition. Based on threshold entropy, the algorithm identifies the sub-bands that contain meaningful information, and the improved run-length encoding is shown to outperform standard RLE. According to the results obtained on a set of natural and synthetic images, the compression ratio is good for low-frequency (smooth) images and very high for gray images; for high-frequency images such as Mandrill and Barbara, the compression ratio is still good and the image quality is preserved. These findings are compared with those obtained using the JPEG-2000 application, and the results achieved with the suggested algorithm are superior.
In the year 2010 [24], Somasundaram and Vimala proposed a novel approach called Efficient Block Truncation Coding (EBTC). The proposed method is a lossy image compression technique that exploits inter-pixel redundancy to reduce the bit rate further. It is well known that the intensity values of adjacent pixels are more or less the same. After dividing the image into small 4 × 4 pixel blocks, the blocks are classified into two categories: low-detail blocks and high-detail blocks. A block is referred to as a high-detail block when the intensity values of neighboring pixels differ substantially, and as a low-detail block when the differences are small. Compared with traditional BTC, the proposed approach provides excellent PSNR values and bit rate.
In the year 2011 [25], Mohammed and Abou-Chadi carried out a study of image compression using block truncation coding, which is regarded as a lossy image compression technique. Two algorithms were chosen: the original Block Truncation Coding (BTC) and Absolute Moment Block Truncation Coding (AMBTC). Both algorithms employ a two-level quantizer and divide the image into non-overlapping blocks. The approaches were applied to different grayscale test images of 512 × 512 pixels with 8 bits per pixel (256 gray levels). The bit rate of the reconstructed images is 1.25 bits per pixel, which corresponds to 85 percent compression. Image quality was assessed using the bit rate (BR), Peak Signal-to-Noise Ratio (PSNR), Weighted Peak Signal-to-Noise Ratio (WPSNR), and Structural Similarity Index (SSIM). According to the results, the AMBTC algorithm outperforms the BTC algorithm: at the same bit rate, compression with AMBTC gives better image quality than compression with BTC. Furthermore, AMBTC is much faster than BTC.
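A rough sketch of AMBTC as described in [25] is given below: each block is represented by a bitmap (pixel above or below the block mean) plus two reconstruction levels, the means of the pixels above and below the block mean. The 8 × 8 block size is an assumption, chosen because with two 8-bit levels per block it yields the 1.25 bpp rate quoted above; the study itself is not reproduced here.

```python
import numpy as np

def ambtc_block(block):
    """AMBTC for one block: bitmap plus two reconstruction levels."""
    mean = block.mean()
    bitmap = block >= mean
    hi = block[bitmap].mean() if bitmap.any() else mean
    lo = block[~bitmap].mean() if (~bitmap).any() else mean
    return bitmap, hi, lo

def ambtc(image, n=8):
    """Compress and immediately decompress an image with AMBTC on n x n blocks.

    With n = 8 and two 8-bit levels per block, the rate is
    (64 + 16) / 64 = 1.25 bits per pixel.
    """
    h, w = image.shape
    out = np.empty_like(image, dtype=np.float64)
    for r in range(0, h - h % n, n):
        for c in range(0, w - w % n, n):
            bitmap, hi, lo = ambtc_block(image[r:r+n, c:c+n].astype(np.float64))
            out[r:r+n, c:c+n] = np.where(bitmap, hi, lo)
    return out

img = (np.arange(64 * 64).reshape(64, 64) % 256).astype(np.float64)
rec = ambtc(img)
mse = np.mean((img - rec) ** 2)
psnr = 10 * np.log10(255**2 / mse) if mse > 0 else float('inf')
```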
In the year 2011 [26], Kumar and Singh suggested an improved lossy compression technique for grayscale images that reduces the correlation and spatial redundancy between pixels, comparing Block Truncation Coding (BTC) with Enhanced Block Truncation Coding (EBTC), which is useful for preserving both the compression ratio and the quality of an image. According to the results, the EBTC algorithm outperforms the BTC algorithm: at the same bit rate, image compression using EBTC gives better image quality than image compression using BTC. The algorithm was tested on a variety of grayscale images of various sizes, and image quality was evaluated using the Weighted Peak Signal-to-Noise Ratio, Peak Signal-to-Noise Ratio, bit rate, and Structural Similarity Index. The bit rate of the reconstructed images is 1.25 bpp, which corresponds to 85% compression.
In the year 2014 [27], Bhavana Patil and Asharani Patil worked on a computationally efficient and effective image compression algorithm based on the DCT and the wavelet transform. The work focuses on wavelet image compression using the Haar transform, aiming to reduce processing requirements by applying different compression thresholds to the wavelet coefficients, obtaining results in a matter of seconds while improving the quality of the reconstructed image. They investigate the key design challenges using a reduced model of a sub-band coder.
The Haar wavelet achieves a higher compression ratio and PSNR than the DCT, and a higher PSNR indicates better image quality. In addition to the Haar wavelet, the high-frequency sub-bands are adaptively quantized at better resolution. Thanks to separable wavelet filters and clustering with spatial constraints, these two compression approaches preserve well-structured directional edges and large homogeneous regions. The bit rate of the sub-band coded images is substantially lower than that of the original sub-band images.
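A minimal sketch of Haar-wavelet compression by coefficient thresholding, in the spirit of the approach described above, is shown below using the PyWavelets package; the decomposition level, the fraction of coefficients kept, and the hard-threshold rule are assumptions, not the authors' exact settings.

```python
import numpy as np
import pywt  # PyWavelets

def haar_compress(image, level=3, keep=0.05):
    """Keep only the largest `keep` fraction of Haar wavelet coefficients."""
    coeffs = pywt.wavedec2(image, 'haar', level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1.0 - keep)     # hard-threshold level
    arr_t = pywt.threshold(arr, thresh, mode='hard')  # zero out small coefficients
    coeffs_t = pywt.array_to_coeffs(arr_t, slices, output_format='wavedec2')
    return pywt.waverec2(coeffs_t, 'haar')

img = np.random.default_rng(1).random((128, 128))
rec = haar_compress(img)
psnr = 10 * np.log10(1.0 / np.mean((img - rec) ** 2))   # data range is [0, 1]
```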
In the year 2015 [28], Zhou, Bai, and Wang proposed an image compression approach based on the discrete cosine transform (DCT). The approach combines differential pulse code modulation (DPCM) and vector quantization in a hybrid scheme. In this system, the DCT is used to transfer the image from the spatial domain to the frequency domain. The block data are then converted into a vector in zigzag order and truncated. The vector is divided into DC and AC coefficients, and the DC coefficient is coded with DPCM after scalar quantization.
The AC coefficients are coded with multistage vector quantization (MSVQ). Entropy encoding is then applied independently to the index tables and the DC portion. The proposed algorithm outperforms the standard VQ algorithm as well as the hybrid DCT-VQ technique. The codebook design procedure, which is improved by using multiple small codebooks instead of one huge codebook, is the method's only complicated operation compared with the JPEG scheme. The experimental results demonstrate that the suggested technique achieves a higher PSNR than the JPEG standard.
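The front end of this hybrid scheme can be sketched as follows: an 8 × 8 block DCT, a zigzag scan, and DPCM applied to the DC coefficients. The MSVQ stage for the AC coefficients, the quantization, and the entropy coding of [28] are omitted, and all function names are illustrative.

```python
import numpy as np
from scipy.fft import dctn

def zigzag_indices(n=8):
    """Index pairs of an n x n block in JPEG-style zigzag order."""
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else -p[0]))

def dct_dpcm_front_end(image, n=8):
    """Block DCT + zigzag scan; DPCM is applied to the DC coefficients only."""
    h, w = image.shape
    order = zigzag_indices(n)
    vectors, dc = [], []
    for r in range(0, h - h % n, n):
        for c in range(0, w - w % n, n):
            coeffs = dctn(image[r:r+n, c:c+n].astype(np.float64), norm='ortho')
            zz = np.array([coeffs[i, j] for i, j in order])
            dc.append(zz[0])
            vectors.append(zz[1:])          # AC coefficients, to be vector-quantized
    dc = np.array(dc)
    dc_dpcm = np.diff(dc, prepend=0.0)      # DPCM: differences of successive DC values
    return dc_dpcm, np.array(vectors)

img = np.random.default_rng(2).integers(0, 256, (64, 64)).astype(np.float64)
dc_residuals, ac_vectors = dct_dpcm_front_end(img)
```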
In the year 2017, Kong, Sun, Han, and Guo [7], [29] proposed an image compression and transmission strategy based on non-negative matrix factorization (NMF). The NMF algorithm concept is studied first; the image capture, blocking, compression, and transmission mechanisms are then carried out collaboratively. Camera nodes capture images and send them to ordinary nodes, which compress them using the NMF method; the cluster head node then sends the compressed images to the station. Distinct functions, such as data processing and long-distance data transmission, are assigned to different nodes, so the energy usage of the entire system becomes homogeneous, and the image restoration is finally handled at the station. According to the simulation results, this mechanism can reduce the energy consumption of the camera nodes, which play a critical role in the network. At the same time, it balances the network's energy usage and lengthens its lifetime, and it can also efficiently remove common noise and improve the quality of the restored images.
In the year 2017 [30], Ahmed and George proposed a low-cost lossy compression scheme for color images. The RGB image data are transformed to the YUV color space, and the U and V bands are then down-sampled. Each color sub-band is decomposed separately using the biorthogonal wavelet transform. The Low-Low (LL) sub-band is then encoded with the DCT, while the remaining wavelet sub-bands are coded with scalar quantization. Quadtree coding is applied to the results of the DCT and quantization steps. Finally, adaptive shift coding is employed as a high-order entropy encoder to remove any remaining statistical redundancy and boost compression efficiency. The system was tested on a set of standard color images, and the compression results showed that it can reduce the file size while keeping fidelity above the acceptable level, with compression ratios of around 1:30 for the color Barbara image and 1:40 for the color Lena image.
In the year 2017 [31], Mander and Jindal proposed a novel image compression technique that combines the BTC and DWT algorithms with spline interpolation. It helps shrink the image so that it occupies less memory and is easier to transmit. Grayscale images are first compressed using BTC; after compression, the images are reconstructed using the Discrete Wavelet Transform (DWT) with spline interpolation. The observed PSNR values are reasonable, and the modifications have a favorable impact on the visual quality of the compressed images. The approach handles all of the image's edges, and it is used because it is less complicated than others and simple to implement. After applying these strategies, the results obtained were over 43 percent better than the techniques used by other authors for compression; these figures are derived by comparing the findings of this study with previous implementations by various researchers. The recommended method was found to be effective, exceeding the most commonly used existing procedures and providing outcomes that are up to 49 percent better.
In 2017, Abood [32] used three composite color image compression methods: the composite stationary wavelet technique (S), the composite wavelet technique (W), and the composite multi-wavelet technique (M). In each composite technique, the compression parameters are derived from the third-level high-energy sub-band of each composite transform. These methods are applied to color image compression to achieve high compression, no loss of the original image, higher performance, and good image quality. Among the 27 combinations, the three-level multi-wavelet transform (MMM) of the M technique is the best composite transform, with the highest energy and compression ratio values and the lowest bits per pixel (bpp), time in seconds, and rate-distortion values. The compression of a color image is nearly the same as the average of the compression parameter values for the three bands of the same image. This work is beneficial for applications requiring high compression, no loss of the original image, improved performance, and good image quality.
In 2019, Kumar R. et al. [33] devised an effective matrix completion technique for image compression and quality retrieval. The suggested method uses thresholding and singular value reduction to complete low-rank matrices. Singular value decomposition (SVD) is used to decompose an image and obtain a low-rank approximation of the image data that can be stored in compressed form. A singular value thresholding (SVT) approach is then used to recover the visual quality of the compressed image. The proposed method is easily applicable to various visual characteristics of images and various compression efficiencies, and the comparative analysis provides evidence of its suitability in comparison with state-of-the-art and standard techniques such as JPEG2000. Visual quality can also be improved using the SVT-based quality retrieval procedure, depending on the application. The simulation results show that the proposed method can compress images at high rates, and a complete examination of its efficiency in terms of compression and quality retrieval is presented. Experiments show that a maximum compression of 80% can be achieved while maintaining acceptable visual quality for the human visual system (HVS).
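A minimal sketch of SVD-based compression with a simple rank or singular-value threshold, in the spirit of [33], is given below; the matrix-completion and SVT quality-retrieval steps of the paper are not reproduced, and the rank and threshold values are assumptions.

```python
import numpy as np

def svd_compress(image, rank=None, tau=None):
    """Low-rank SVD approximation of a grayscale image.

    Keep either the first `rank` singular values or, with singular value
    thresholding, all singular values above `tau`.
    """
    U, s, Vt = np.linalg.svd(image.astype(np.float64), full_matrices=False)
    if rank is None:
        rank = int(np.count_nonzero(s > tau))
    approx = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
    # Storage for the factors vs. the raw image (ignoring quantization):
    m, n = image.shape
    ratio = (m * n) / (rank * (m + n + 1))
    return approx, ratio

img = np.outer(np.linspace(0, 255, 256), np.ones(256)) + \
      10 * np.random.default_rng(3).random((256, 256))
rec, cr = svd_compress(img, rank=20)
psnr = 10 * np.log10(255**2 / np.mean((img - rec) ** 2))
```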
Li and Jia published a paper in 2019 [34] proposing a model of the coding bit rate at high bit rates, expressed in terms of the mean absolute difference and the coding quantization parameters for predictive coding. The model is then applied to JPEG-LS to create a rate-control approach for near-lossless compression. To control the bit rate while coding a given image, the quantization parameters are adjusted piecewise based on the model. Experiments demonstrate that, with the proposed strategy, the final coding rate can be made close to a target rate. Because of the accurate bit-rate model, it is possible to avoid the quantization parameters varying within a large range, which is not possible with other methods. Consequently, the suggested approach can achieve rate-distortion performance that is close to ideal.
Ariatmanto and Ernawan [35], in the year 2020, proposed new scaling factors for selected Discrete Cosine Transform (DCT) coefficients in image watermarking, where these factors follow particular rules to reduce distortion. Image blocks with the lowest pixel variance are chosen as embedding locations, and the optimal scaling factors for the selected mid-frequency DCT coefficients are determined from the best image quality. The scaling factors are then used in the embedding procedure. The results indicate that the proposed method achieves higher Normalized Cross-Correlation (NC) values for watermark recovery under various attacks than existing schemes, while maintaining watermarked images at a quality of about 45 dB PSNR.
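To illustrate the role of a scaling factor, the sketch below embeds a single watermark bit by shifting a few mid-frequency DCT coefficients of an 8 × 8 block by ±α. The coefficient positions, the value of α, and the sign-based extraction rule are illustrative assumptions and are not the embedding rule of [35].

```python
import numpy as np
from scipy.fft import dctn, idctn

# Mid-frequency positions of an 8x8 DCT block used here for embedding
# (an illustrative choice, not the positions selected in [35]).
MID_FREQ = [(1, 4), (2, 3), (3, 2), (4, 1)]

def embed_bit(block, bit, alpha=8.0):
    """Embed one watermark bit by shifting mid-band DCT coefficients by +/- alpha."""
    c = dctn(block.astype(np.float64), norm='ortho')
    for i, j in MID_FREQ:
        c[i, j] += alpha if bit else -alpha
    return idctn(c, norm='ortho')

def extract_bit(block):
    """Blind extraction: sign of the summed mid-band coefficients."""
    c = dctn(block.astype(np.float64), norm='ortho')
    return int(sum(c[i, j] for i, j in MID_FREQ) > 0)

block = np.full((8, 8), 128.0)
marked = embed_bit(block, 1)
assert extract_bit(marked) == 1
```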
In the year 2020 [36], Aljaz Jeromel and Borut Zalik proposed a new lossy approach for compressing cartoon images. First, the image is divided into regions of approximately the same color, and the chain codes of all regions are determined. The sequence of chain-code symbols is then transformed using the Burrows-Wheeler Transform, RLE, and Move-To-Front transformations. Finally, an arithmetic encoder is employed to compress the output binary stream even further. The suggested technique is asymmetric, meaning that not all of the compression steps are reversed during decompression. According to the experimental results, the method yields significantly better compression ratios than JPEG2000, WebP, JPEG, PNG, SPIHT, and two algorithms specialized for cartoon-image compression: the quad-tree algorithm and the RS-LZ algorithm.
In the year 2020 [37], Peto et al. developed the compressed adaptive integration scheme (C-AIS), a method for computing stiffness and mass matrices in fictitious domain approaches, which involve the integration of discontinuous functions. The new approach adds a compression step to the standard adaptive integration scheme (AIS) based on quadtree decomposition, and it offers several benefits. First, the compression of the sub-cells invariably saves significant time in the numerical integration computations. Second, the compression step is simple to integrate into existing applications because it runs directly after the quadtree decomposition procedure. Third, C-AIS produces the same level of precision as the conventional AIS in the case of polynomial integrands. Fourth, C-AIS can easily be combined with other approaches aimed at reducing the number of integration points, such as the Boolean-FCM. The efficiency of the C-AIS method is demonstrated in the context of the finite cell method (FCM) on Cartesian meshes applied to linear elastostatics and modal analysis problems, but it is also suitable for quadrature in other fictitious domain methods such as CutFEM and cgFEM.
J. Wang and colleagues in 2020 [38] proposed a new approach termed CDMD (Compressing Dense Medial Descriptors), an end-to-end method for compressing color and grayscale images using dense medial descriptors; it adapts the existing DMD method, originally introduced for image segmentation and simplification, to the problem of image compression. To achieve this, an enhanced layer-selection approach, a lossless MAT-encoding scheme, and an all-layer lossless compression scheme were presented. The study makes two major contributions. First, effective layer-selection heuristics, a modified skeleton pixel-chain encoding, and a post-processing compression step improve the encoding power of dense skeletons. Second, a benchmark covering a wide range of natural and synthetic color and grayscale images is proposed to calculate ideal parameters for dense skeletons and to assess their encoding capability. Because it achieves higher compression ratios at quality similar to the well-known JPEG technique, CDMD suggests that skeletons can be an attractive choice for lossy image encoding.
In the year 2020 [39], Al-khassaweneh and AlShorman suggested a new lossy method for image compression. The proposed algorithm has two stages: a Frei-Chen bases stage and an RLE stage. The main purpose of the method is to increase the compression factor while lowering the distortion of the decompressed image; in the second stage, RLE is employed to improve the compression factor even further while keeping the distortion low. The test results showed that the proposed approach is efficient in terms of compression factor and MSE, and the Frei-Chen stage achieves a high compression factor while maintaining strong correlation values.
By combining the Frei-Chen bases with the well-known RLE, the compression factor is increased, and in terms of performance the proposed method outperforms other image compression algorithms.
Lone proposed in 2020 [40] a compression technique based on spatial-orientation block trees. To encode an image, it mainly uses two small lists and two state tables. The main goal is to create a memory-efficient and fast method that achieves modest lossless to perceptually lossless compression performance.
In the year 2021 [18], Ragmi Mustafa, Basri Ahmedi, and Kujtim Mustafa conducted a study on lossy image compression using neural networks. They examined the BEP-SOFM algorithm, which uses the Backward Error Propagation algorithm to quickly obtain initial weight values for the Self-Organizing Feature Maps algorithm. The compressed image is created either by dividing the image into equal-sized parts or by using quadtree segmentation. The testing revealed that quadtree segmentation produces better error results for the BEP-SOFM method than dividing the image into blocks of the same size. The image size is a significant factor in the compression process: compared with the simple splitting approach, quadtree segmentation did not improve, or only marginally improved, the quality of small images, whereas the quality of larger images is improved. This is because, after breaking the training image into smaller blocks and replacing the pixel values with their average, the components of the input vector have the same value, which means that the color value produced by the decompression process will also be the same; for a larger image, however, such blocks lack detail.
The results are presented in terms of the mean square error (MSE) and the peak signal-to-noise ratio (PSNR).
Zhang et al. [41] in 2021 developed a wavelet-based compressed sensing (CS) method to compress remotely sensed astronomical images, introducing a new wavelet-based CS framework. An improved measurement matrix with a dual scaling-rate assignment provides the optimal scaling-rate allocation, and at low measurement rates it retains the most relevant frequency-domain information. The process starts with a two-dimensional discrete wavelet transform (DWT), which provides the frequency information of the image. The wavelet coefficients are then reorganized into a new order according to the parent-child relationship between the sub-bands. The authors propose an optimized measurement matrix with a double assignment of the scaling rate and construct scanning modes for the high-frequency sub-bands based on trend information. With a single scaling matrix, higher scaling rates can be assigned simultaneously to sparse vectors carrying more information and to higher-energy coefficients within a sparse vector, and this two-assignment technique improves image sampling. In the decoding phase, orthogonal matching pursuit (OMP) and the inverse discrete wavelet transform (IDWT) are employed to reconstruct the image. The technique achieves high-quality reconstruction at a low measurement rate and provides a high-performance compression methodology for remote-sensing astronomical images.
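A toy one-dimensional illustration of the compressed-sensing idea underlying this framework is sketched below: random measurements of a sparse coefficient vector are recovered with orthogonal matching pursuit. It uses scikit-learn's OMP implementation and synthetic data, and it does not reproduce the wavelet reorganization or the dual scaling-rate assignment of [41].

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(4)
n, m, k = 256, 80, 8                        # signal length, measurements, sparsity

x = np.zeros(n)                             # k-sparse "wavelet coefficient" vector
x[rng.choice(n, k, replace=False)] = rng.normal(0, 10, k)

Phi = rng.normal(size=(m, n)) / np.sqrt(m)  # random measurement matrix
y = Phi @ x                                 # m << n compressed measurements

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(Phi, y)                             # greedy sparse recovery
x_hat = omp.coef_

print("max reconstruction error:", np.max(np.abs(x - x_hat)))
```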
In 2021, Ko, H.-H. [42] proposed an improved binary MQ arithmetic coder that uses a look-up table (LUT) for (A x Qe) to improve coding performance, with multi-level quantization using 2-level, 4-level, and 8-level look-up tables. Rather than employing a uniformly quantized value of (A x Qe), experiments were carried out with the two tuning parameters varied at each of the 2, 4, and 8 levels, giving a non-uniform quantization of (A x Qe). Positive results were obtained when the scheme was applied to the JBIG2 and JPEG2000 coding standards, and the best LUT was identified through a series of experiments. The higher the quantization level, the better the compression performance for JBIG2; the best-chosen parameter value at the 4-level and 8-level settings is 1.0, while for JPEG2000 the best compression performance at most quantization levels was attained at a value of 1.05.
A study in 2021 by Svynchuk et al. [20] presents a new image compression approach that relies on a finite number of parameters and employs a class of non-monotonic singular functions with fractal properties. These properties enable high compression ratios for digital data and fast decoding. Because a class of continuous functions that depend on a finite number of parameters and exhibit fractal qualities is explored, an algorithm for image encoding and decoding is investigated. Unlike conventional functions, fractal functions aid in the effective encoding of data and in solving complicated problems in a variety of fields. The mathematical model used in fractal image compression is a system of iterated functions. Because it involves a huge number of transformations and mathematical calculations, the encoding process takes a long time, but this results in a high level of image compression. To decode an image, one only needs the fractal codes from which the raster image can be reconstructed; unpacking the image is therefore easier, because most of the work was already done during encoding. The results obtained provide a sufficiently reliable mathematical basis for the compression of varied graphic information and for the improvement of existing approaches.
To accomplish lossless image encryption and compression at the same time, Zhang M. in 2021 [43] proposed a joint lossless image compression and encryption strategy based on the context-based adaptive lossless image codec (CALIC) and a hyper-chaotic system. Taking advantage of CALIC's characteristics, four encryption locations are designed to realize joint image compression and encryption: encryption of the predicted pixel values based on gradient-adjusted prediction (GAP), encryption of the final prediction error, encryption of the two lines of pixel values required by the prediction mode, and encryption of the entropy-coded file. Furthermore, to improve security, a new four-dimensional hyper-chaotic system and plaintext-related encryption based on table lookup are implemented. According to the test results, the proposed approach offers a high level of security and good lossless compression performance.

Studies description
In this section, a summary of the image compression techniques that were explained in the subject review is presented in Table 1.

Conclusion
In recent years, image compression has become a dazzling and vibrant field, and many researchers have presented different types and techniques of image compression. Some of this research was discussed in this review; all of it is useful in the related area, which is constantly evolving and produces new research with better results every day, with the main goal of reducing transmission and storage costs. This paper also gave a studies description to summarize the technique, method, and contribution of each work in this survey, in order to make a valuable contribution for other scientists working on this topic.
Singular Value Decomposition (SVD), a rapid compression technique proposed by Pabi et al., compresses images by using a smaller rank to approximate the original matrix. At low compression ratios, SVD provides good PSNR values, but using SVD with distinct singular values at an acceptable PSNR increases the encoding time. A new fast compression strategy called SVD-BPSO was therefore developed, combining SVD with butterfly particle swarm optimization to reduce the encoding time. Using the BPSO idea in singular value decomposition decreases the encoding time and increases the transmission speed, and the simulation results indicate that the strategy delivers a high PSNR while requiring the least encoding time.