In order to evaluate the steps of the algorithm experimentally, we applied standard and custom images. The six standard images are Airplane, Boat, Girl, Goldhill, Lena and Toys; they are 512×512 grayscale images [36, 37], as shown in Fig. 8. To measure the results of the algorithm and compare them with the previously mentioned methods, several important criteria were used: MSE, PSNR, SSIM and ER. The embedding rate (ER) measures the data-hiding efficiency.
\(\text{ER} = \frac{\text{number of secret bits}}{\text{image size}}\) (bpp) (6)
Here bpp stands for bits per pixel. A higher embedding rate indicates that the method can embed a larger amount of secret information in the cover image. The peak signal-to-noise ratio (PSNR) measures the similarity between the original image and the marked image, using Eq. 7.
A higher PSNR value indicates that the marked image is very similar to the original image; therefore, the human eye cannot effectively distinguish the stego image from the original image.
\(\text{PSNR} = 10\times \log_{10}\frac{255^{2}}{\text{MSE}}\) (dB) (7)
The MSE is the mean squared error between the original image and the stego image, as given in Eq. 8, where H×W is the image size.
\(\text{MSE} = \frac{1}{H\times W}\sum_{i=1}^{H}\sum_{j=1}^{W}\left(p(i,j)-p'(i,j)\right)^{2}\) (8)
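The criteria of Eqs. 6 to 8 can be sketched in Python as follows; this is a minimal illustration for 8-bit grayscale images, assuming NumPy arrays as input (function names are ours, not from the paper):

```python
import numpy as np

def mse(p, p_prime):
    """Mean squared error between cover p and stego p' (Eq. 8)."""
    p = p.astype(np.float64)
    p_prime = p_prime.astype(np.float64)
    return np.mean((p - p_prime) ** 2)

def psnr(p, p_prime):
    """Peak signal-to-noise ratio in dB for 8-bit images (Eq. 7)."""
    err = mse(p, p_prime)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(255.0 ** 2 / err)

def embedding_rate(num_secret_bits, image_shape):
    """Embedding rate in bits per pixel (Eq. 6)."""
    h, w = image_shape
    return num_secret_bits / (h * w)
```

For example, with the average capacity reported later in Table 1, `embedding_rate(203138, (512, 512))` gives approximately 0.775 bpp, matching the reported average ER.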
In addition, the Structural Similarity index (SSIM) is calculated to measure the visual similarity between the cover image and the stego image as shown in Eq. 9.
\(\text{SSIM}(P,P') = \left[l(P,P')\right]^{\alpha}\times\left[c(P,P')\right]^{\beta}\times\left[s(P,P')\right]^{\gamma},\quad \alpha=\beta=\gamma=1\) (9)
Larger SSIM values indicate that the human visual system cannot effectively detect the secret data in the stego image. In this equation, p and p' denote the cover image and the stego image, respectively. The three components l, c and s denote luminance, contrast and structure, respectively. The luminance term is calculated by:
\(l(p,p') = \frac{2\bar{p}\,\bar{p}'+c}{\bar{p}^{2}+(\bar{p}')^{2}+c}\) (10)
Where \(\bar{p}\) and \(\bar{p}'\) denote the average values of the cover pixels and the stego pixels, respectively. The constant c ensures that the denominator is greater than 0. The contrast term is calculated by:
\(c(p,p') = \frac{2\sigma_{p}\sigma_{p'}+c}{\sigma_{p}^{2}+\sigma_{p'}^{2}+c}\) (11)
In Eq. 11 and Eq. 12, \(\sigma_{p}\) and \(\sigma_{p'}\) are the standard deviations of the cover pixels and the stego pixels, respectively. The structure term is calculated as follows:
\(s(p,p') = \frac{\sigma_{pp'}+c}{\sigma_{p}\sigma_{p'}+c}\) (12)
Where the covariance \(\sigma_{pp'}\) can be calculated by:
\(\sigma_{pp'} = \frac{1}{H\times W-1}\sum_{i=1}^{H}\sum_{j=1}^{W}\left(p(i,j)-\bar{p}\right)\left(p'(i,j)-\bar{p}'\right)\) (13)
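Eqs. 9 to 13 can be sketched as a single global SSIM computation. This is only an illustration: the value of the constant c and the whole-image (rather than windowed) evaluation are our assumptions, since the text does not specify them:

```python
import numpy as np

def ssim(p, p_prime, c=1e-4):
    """Global SSIM between cover p and stego p' following Eqs. 9-13,
    with alpha = beta = gamma = 1. The single small constant c is an
    assumption; the paper does not give its value."""
    x = p.astype(np.float64).ravel()
    y = p_prime.astype(np.float64).ravel()
    mu_x, mu_y = x.mean(), y.mean()
    sd_x, sd_y = x.std(ddof=1), y.std(ddof=1)
    # Sample covariance with the 1/(H*W - 1) normalisation of Eq. 13.
    cov = ((x - mu_x) * (y - mu_y)).sum() / (x.size - 1)
    lum = (2 * mu_x * mu_y + c) / (mu_x ** 2 + mu_y ** 2 + c)  # Eq. 10
    con = (2 * sd_x * sd_y + c) / (sd_x ** 2 + sd_y ** 2 + c)  # Eq. 11
    stru = (cov + c) / (sd_x * sd_y + c)                       # Eq. 12
    return lum * con * stru                                    # Eq. 9
```

For identical images all three terms equal 1, so the SSIM is exactly 1; any distortion pushes the value below 1.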
4.1. Performance of the proposed method
The following figures show the results and performance of the proposed method.
For our proposed method, tests were performed on all six standard images shown in Fig. 8, and the results are summarized in Table 1.
Table 1 The performance of our proposed method at a glance
Performance of the proposed scheme with 4×4 blocks

| Test images | EC (bit) | PSNR (dB) | MSE | SSIM | ER (bpp) |
|---|---|---|---|---|---|
| Airplane | 206,270 | 50.99 | 0.5176 | 0.9977 | 0.786 |
| Boat | 202,692 | 51.00 | 0.5163 | 0.9979 | 0.773 |
| Girl | 203,040 | 51.05 | 0.5105 | 0.9982 | 0.774 |
| Goldhill | 200,884 | 51.01 | 0.5149 | 0.9985 | 0.766 |
| Lena | 202,990 | 50.99 | 0.5166 | 0.9978 | 0.774 |
| Toys | 202,954 | 50.99 | 0.5177 | 0.9977 | 0.774 |
| Average | 203,138 | 51.00 | 0.5156 | 0.9979 | 0.774 |
According to the numerical results in Table 1 for the six standard images, the average embedding capacity is 203,138 bits and the average peak signal-to-noise ratio is 51.00 dB. The average mean squared error and the average structural similarity index are 0.5156 and 0.9979, respectively. The average embedding rate is 0.774 bpp. Compared to the previous methods, in particular the competing method in [30], both the embedding capacity and the peak signal-to-noise ratio have increased significantly.
It is also worth reiterating that the proposed method uses a pair of maximum and zero points of the image prediction-error histogram to perform the histogram-shifting operation and then embed the information. In the histogram-shifting step, prediction errors greater than or equal to one are shifted one unit to the right in the histogram, while the remaining prediction errors are left unchanged, as explained in Eq. 1. This shift is used to detect and extract the "1" bits of the embedded binary string from the embeddable pixels of the image. This capability reduces image distortion, unlike the method of Chang et al. [30]. As a result, for all test images and based on the results reported in Table 1, we achieved a high PSNR despite the high embedding rate of about 0.774 bpp, which shows that the efficiency of the proposed method has increased significantly. The comparative results of our proposed method and three other methods are analyzed in Section 4.2.
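The shift-then-embed step described above can be illustrated with a generic histogram-shifting sketch. This is not the authors' complete scheme (the predictor, block processing and overflow handling are omitted); it only shows the core idea, consistent with the description, on a one-dimensional array of prediction errors:

```python
import numpy as np

def hs_embed(errors, bits):
    """Simplified histogram-shifting embedding on prediction errors.
    Errors >= 1 are shifted one unit to the right to empty bin 1;
    each error equal to 0 then carries one secret bit (a 0-bit leaves
    it at 0, a 1-bit raises it to 1). Overflow handling is omitted."""
    out = errors.copy()
    out[errors >= 1] += 1              # make room at bin 1
    bit_iter = iter(bits)
    for idx in np.flatnonzero(errors == 0):
        try:
            out[idx] = next(bit_iter)  # embed 0 or 1
        except StopIteration:
            break
    return out

def hs_extract(marked):
    """Recover the embedded bits and restore the original errors."""
    bits = [int(v) for v in marked if v in (0, 1)]
    restored = marked.copy()
    restored[marked >= 1] -= 1         # undo the shift; 1s return to 0
    return bits, restored
```

A value of 1 in the marked errors can only come from an embedded "1" bit, which is exactly the detection property described above; shifting back by one unit makes the process fully reversible.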
4.2. Comparison of the proposed scheme with other methods
In the following, the results of our method are reviewed and compared with the other methods. The numerical results are shown in Tables 2 to 4 and illustrated in Figs. 9 to 12. For our proposed scheme, the embedding capacity and peak signal-to-noise ratio, together with the embedding rate, are evaluated.
Table 2 Comparisons among the proposed method and three related methods in terms of EC (bit) values
Performance of the proposed method in terms of EC values

| Methods | Airplane | Boat | Girl | Goldhill | Lena | Toys |
|---|---|---|---|---|---|---|
| Proposed scheme | 206,270 | 202,692 | 203,040 | 200,884 | 202,990 | 202,954 |
| Chang et al. [30] | 163,944 | 136,551 | 138,349 | 113,243 | 145,604 | 145,040 |
| Hu et al. [26] | 41,329 | 24,938 | 26,255 | 19,905 | 28,785 | 25,979 |
| Ni et al. [19] | 9,002 | 5,614 | 3,739 | 2,683 | 2,908 | 9,344 |
As the numerical results in Table 2 show, our proposed method is compared with the method of Chang et al. [30]. For the Airplane image, the embedding capacity increases by 25.8% (42,326 bits) over the competing method. For the Boat image, it increases by 48.4% (66,141 bits); for the Girl image, by 46.7% (64,691 bits); for the Goldhill image, by 77.3% (87,641 bits); for the Lena image, by 39.4% (57,386 bits); and finally, for the Toys image, by 39.9% (57,914 bits). Evaluating these results, we conclude that our proposed method outperforms the method of Chang et al. [30], and the gains over the methods of Hu et al. [26] and Ni et al. [19] are even larger.
Table 3 Comparisons among the proposed method and three related methods in terms of PSNR (dB) values
Performance of the proposed method in terms of PSNR values

| Methods | Airplane | Boat | Girl | Goldhill | Lena | Toys |
|---|---|---|---|---|---|---|
| Proposed scheme | 50.99 | 51.00 | 51.05 | 51.01 | 50.99 | 50.99 |
| Chang et al. [30] | 31.763 | 31.517 | 31.005 | 31.014 | 31.401 | 31.072 |
| Hu et al. [26] | 51.417 | 51.414 | 51.121 | 51.377 | 51.449 | 51.389 |
| Ni et al. [19] | 53.231 | 53.121 | 55.212 | 51.059 | 53.843 | 49.087 |
The numerical results in Table 3 show the performance of our proposed method compared with the other references in terms of peak signal-to-noise ratio. Relative to the competing method [30], the PSNR increase is 60.5% for the Airplane image, 61.8% for the Boat image, 64.6% for the Girl image, 64.4% for the Goldhill image, 62.3% for the Lena image and 64.1% for the Toys image. Thus, compared to the competing method [30], we achieve a substantial improvement in the PSNR criterion; an increase in PSNR means improved visual quality of the image.
The peak signal-to-noise ratio of our proposed method has also been compared with the methods of Hu et al. [26] and Ni et al. [19]; although these methods reach comparable or slightly higher PSNR values, they do so at far lower embedding capacities, as Table 2 shows.
Table 4 Comparisons among the proposed method and three related methods in terms of ER (bpp) values
Performance of the proposed method in terms of ER values

| Methods | Airplane | Boat | Girl | Goldhill | Lena | Toys |
|---|---|---|---|---|---|---|
| Proposed scheme | 0.786 | 0.773 | 0.774 | 0.766 | 0.774 | 0.774 |
| Chang et al. [30] | 0.625 | 0.5209 | 0.5277 | 0.4319 | 0.5554 | 0.5532 |
| Hu et al. [26] | 0.157 | 0.095 | 0.100 | 0.075 | 0.109 | 0.099 |
| Ni et al. [19] | 0.034 | 0.021 | 0.014 | 0.010 | 0.011 | 0.035 |
Table 4 shows the comparison between our proposed method and the three related methods [19, 26, 30] in terms of the embedding rate (ER).
Compared to the competing method [30], the increase in this criterion is 25.7% and 48.3% for the Airplane and Boat images, 46.6% and 77.3% for the Girl and Goldhill images, and finally 39.3% and 39.9% for the Lena and Toys images, respectively.
These results show that, in terms of the information embedding rate, our proposed method achieves significant improvements over the other mentioned methods.
Figures 9, 10 and 11 compare our proposed method with the other three methods, shown as bar charts over all standard test images. The effectiveness of our method relative to the previous methods, especially the main competing method, is thus evaluated visually in terms of the three criteria EC, PSNR and ER, respectively.
In general, in the data-embedding process, the more information is embedded in the image, the lower the image quality, because the number of changed pixels increases and image distortion grows. The PSNR criterion is one measure of this image quality. In Fig. 12, our proposed method is compared with the three other methods in terms of PSNR versus ER. PSNR is inversely related to ER: in general, PSNR decreases as ER increases. In recent years, researchers have tried to maintain a suitable PSNR in data-hiding algorithms even as ER increases. Figure 12 shows the advantage of the proposed method: as the embedding rate (ER) grows, the PSNR decreases only gradually.
For example, on the curve for the Airplane image in Fig. 12 (shown in blue), the PSNR is 54.9 dB at ER = 0.1967, 54.25 dB at ER = 0.2623 and 53.18 dB at ER = 0.3934, which shows that the PSNR decreases only gradually as ER increases. In the method of Chang et al. [30], by contrast, Fig. 12 shows that for the Airplane image the PSNR drops steeply as the embedding rate increases.
In the methods of Hu et al. [26] and Ni et al. [19], because of their low and limited embedding capacities, the range of variation is small compared to our proposed method; as a result, their PSNR values remain within a narrow range.