The JPEG compression algorithm splits the image into 8x8 blocks
My thinking is that if we split an image into 8x8 blocks, feed each block to libjpeg, and then merge the JPEG blocks into a lossless format (say PNG), we should theoretically obtain the same JPEG as if we had converted the original image directly to JPEG. However, this isn't the case in practice.
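In command-line terms, the idea is roughly the following. This is only a sketch: input.png, the tile and output names and the quality value are placeholders, and the image dimensions are assumed to be exact multiples of 8.

# Direct conversion of the whole image, for reference.
convert input.png -quality 50 direct.jpg

# Cut the image into 8x8 tiles (written in row-major order) and
# JPEG-compress each tile on its own.
rm -f tile_*.jpg
convert input.png -crop 8x8 +repage -quality 50 tile_%06d.jpg

# Reassemble the JPEG tiles losslessly into a PNG grid of cols x rows tiles.
cols=$(( $(identify -format %w input.png) / 8 ))
rows=$(( $(identify -format %h input.png) / 8 ))
montage -mode concatenate -tile ${cols}x${rows} tile_*.jpg merged.png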
With a sampling factor of 4:4:4 (ie no downsampling), there is no difference.
With downsampling, you need larger blocks, eg 16x16. Even then, there is a difference, but only up to about 1% RMSE even with low "-quality" numbers. I guess the difference is because the downsampling is done before splitting into blocks.
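The RMSE figures come from "compare", which prints the absolute error followed by the value normalised to the 0..1 range in parentheses, so "about 1%" shows up as roughly 0.01. With the placeholder names from the sketch above:

compare -metric RMSE merged.png direct.jpg null: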
Aside: I would put "+repage" after "-layers merge", though it doesn't matter here.
A revised script (for my old version of IM, which doesn't have "-metric SSIM") is along these lines:
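What follows is a bash sketch rather than the exact script: input.png, the tile names, the quality setting and the 16x16 block size are placeholders, chroma subsampling is controlled with an explicit "-sampling-factor", each tile's offset is carried in its filename so "-layers merge" can rebuild the canvas, and "-metric RMSE" stands in for SSIM. The image dimensions are assumed to be exact multiples of 16.

q=20                 # a low -quality exaggerates any block-boundary effects
sf=2x2               # chroma subsampling; change to 4:4:4 to switch it off

# Direct conversion of the whole image.
convert input.png -sampling-factor $sf -quality $q direct.jpg

# Cut the image into 16x16 tiles, remembering each tile's offset in its
# filename (the %[fx:page.x] escapes are evaluated before +repage strips
# the page geometry), and JPEG-compress each tile separately.
rm -f tile_*.jpg
convert input.png -crop 16x16 \
  -set filename:off "%[fx:page.x]+%[fx:page.y]" \
  +repage -sampling-factor $sf -quality $q +adjoin "tile_%[filename:off].jpg"

# Put each tile back at its recorded offset and merge the lot losslessly.
for f in tile_*.jpg; do
  off=${f#tile_}; off=${off%.jpg}
  convert "$f" -repage +$off miff:-
done > tiles.miff
convert tiles.miff -background none -layers merge +repage merged.png

# RMSE in place of SSIM; as described above, 4:4:4 should give zero
# difference, and 2x2 roughly 1% (0.01 normalised).
compare -metric RMSE merged.png direct.jpg null: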
1. DCT time to process a block is proportional to at least N*log(N), where N is the number of pixels per block. So best speed comes from small blocks (a rough total-cost estimate follows this list).
2. DCT can be done in parallel, so smaller blocks are better for multithreading.
3. The lossy compression occurs at the DCT stage, and is constant within a block. Image areas with no high-frequency detail (eg blue sky) compress more easily than areas with HF detail (eg grass), so we don't want a block to straddle these mixed types.
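To put point 1 in numbers (a back-of-the-envelope estimate, not a benchmark; P, N and the constant c are just notation for the total pixel count, the pixels per block and the implementation's constant factor): an image of P pixels cut into blocks of N pixels needs P/N block transforms, so the total DCT work is about

\[
\frac{P}{N} \times c\,N\log N \;=\; c\,P\log N ,
\]

which grows with log N. Shrinking the block therefore shrinks the total work, not just the per-block cost, which is the sense in which best speed comes from small blocks.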