Q1: When using the ResizeImage() function, will there be any quality loss?
Q2: When converting an image from PNG (or GIF) format to JPG format, can that cause quality loss?
IM Version: ImageMagick 6.7.8-3 2012-09-11 Q16
MagickCore API
Q2:
Any conversion to JPG, even with -quality 100, will cause some loss of quality. JPG is a lossy compression format. If you want lossless, you could use JPEG 2000, which has a lossless mode.
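As a rough illustration of the difference (a minimal MagickWand sketch, not fmw42's code; the filenames are hypothetical):

```c
#include <wand/MagickWand.h>    /* IM6 header path */

int main(void)
{
    MagickWandGenesis();
    MagickWand *wand = NewMagickWand();
    if (MagickReadImage(wand, "input.png") == MagickFalse)
        return 1;
    /* Even at maximum quality, the JPEG encoder's DCT quantization
       rounds coefficient values, so some pixel data is lost. */
    MagickSetImageCompressionQuality(wand, 100);
    MagickWriteImage(wand, "output.jpg");   /* lossy */
    MagickWriteImage(wand, "output.png");   /* lossless re-encode of the same pixels */
    wand = DestroyMagickWand(wand);
    MagickWandTerminus();
    return 0;
}
```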
fmw42 wrote:Any conversion to JPG, even with -quality 100, will cause some loss of quality. JPG is a lossy compression format. If you want lossless, you could use JPEG 2000, which has a lossless mode.
Thanks. But I want to make sure whether it will cause some loss of quality to convert to PNG or GIF.
Also, is there some loss of quality when resizing a picture (such as shrinking a picture from 800*600 to 400*300)?
jasonlee wrote:Thanks. But I want to make sure whether it will cause some loss of quality to convert to PNG or GIF.
What exactly do you mean by "quality"? If you remove bits from pixels, you lose information. This may or may not be visible in the result. If you will be further processing the result, it's best to keep all the data you can. When saving to PNG, IM sometimes needs to be told to save it as PNG24 to keep all the detail.
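For example (a small MagickWand sketch with hypothetical filenames; the PNG24: coder prefix is the same one the command line uses), forcing the PNG24 coder keeps full truecolor data instead of letting the encoder quietly reduce to a palette:

```c
#include <wand/MagickWand.h>    /* IM6 header path */

int main(void)
{
    MagickWandGenesis();
    MagickWand *wand = NewMagickWand();
    if (MagickReadImage(wand, "input.gif") == MagickFalse)
        return 1;
    /* The "PNG24:" prefix forces 8-bits-per-channel truecolor PNG output. */
    MagickWriteImage(wand, "PNG24:output.png");
    wand = DestroyMagickWand(wand);
    MagickWandTerminus();
    return 0;
}
```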
jasonlee wrote:Also, is there some loss of quality when resizing a picture (such as shrinking a picture from 800*600 to 400*300)?
Again, you will obviously lose data. In this case, 75% of the data: 800*600 is 480,000 pixels, while 400*300 is only 120,000.
There will always be some loss of quality when resizing, but it is not a matter of simply losing pixel data; it is a matter of merging the pixel data. When enlarging you actually generate more data, but the quality will not be quite perfect either, at least not without making an ugly blocky image!
Halving the dimensions will cause ALL the information to be merged into a quarter of the space. All the information is there, but you lose the finer details (or the higher spatial frequencies, in expert terms) as pixels become merged. When images are resized by non-integer amounts, you also get a loss of quality from having to map an array of values onto a completely different array of values, and that can generate other 'artefacts' that may or may not be wanted.
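As a rough sketch of that merging (plain C, assuming an 8-bit grayscale buffer; this is not how IM actually implements it), halving an image with a 2x2 box average combines every four input pixels into one output pixel:

```c
#include <stddef.h>

/* Halve an 8-bit grayscale image of even dimensions w x h by averaging
   each 2x2 block into one output pixel. Detail that varied inside a
   block is merged away; only the block's average survives. */
void halve_box(const unsigned char *src, unsigned char *dst,
               size_t w, size_t h)
{
    for (size_t y = 0; y < h / 2; y++)
        for (size_t x = 0; x < w / 2; x++) {
            unsigned sum = src[(2*y)  *w + 2*x] + src[(2*y)  *w + 2*x+1]
                         + src[(2*y+1)*w + 2*x] + src[(2*y+1)*w + 2*x+1];
            dst[y*(w/2) + x] = (unsigned char)((sum + 2) / 4); /* rounded mean */
        }
}
```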
To do this, resize (or distort) will try to fit a function, surface, or waveform to the input image array, so as to best determine the output values that match that same function, surface, or waveform. This is what filters are -- attempts to match an 'interpolatory' surface to the input image in order to generate an output image using a different array of values.
Now when coding, it is not looked at like that, but as a 'pixel contribution' or 'neighbourhood weighted average' or even a 'convolution' of nearby pixel values (samples) to try to generate the best output pixel values. This 'sampling function' is called a 'reconstruction filter', or just a 'filter'.
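For instance, here is a one-dimensional sketch of that 'neighbourhood weighted average' (plain C, not IM's code): each output sample is a weighted sum of nearby input samples, with weights drawn from a triangle (linear) reconstruction filter:

```c
#include <math.h>
#include <stddef.h>

/* Triangle (linear) filter: weight 1 at distance 0, falling to 0 at distance 1. */
static double triangle(double d)
{
    d = fabs(d);
    return d < 1.0 ? 1.0 - d : 0.0;
}

/* Resample a 1-D row of n_in samples to n_out samples. Each output value
   is the weighted average of the input samples under the filter support. */
void resample_row(const double *in, size_t n_in, double *out, size_t n_out)
{
    double scale = (double)n_in / (double)n_out;
    double support = scale > 1.0 ? scale : 1.0;   /* widen the filter when shrinking */
    for (size_t i = 0; i < n_out; i++) {
        double center = (i + 0.5) * scale;        /* map output position into input space */
        double sum = 0.0, wsum = 0.0;
        long lo = (long)floor(center - support);
        long hi = (long)ceil(center + support);
        for (long j = lo; j <= hi; j++) {
            if (j < 0 || (size_t)j >= n_in)
                continue;
            double w = triangle(((j + 0.5) - center) / support);
            sum  += w * in[j];
            wsum += w;
        }
        out[i] = wsum > 0.0 ? sum / wsum : 0.0;
    }
}
```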
There is a loss of quality; how sharp the result is, and how many other 'artefacts' are introduced by the process, is what makes resizing so difficult. However, I would not say you lose data, just merge data. How much quality is lost? Well, that is very, very difficult to determine without some specific definitions.
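To tie this back to Q1, here is a minimal MagickCore sketch of shrinking 800*600 to 400*300 with ResizeImage() (using the IM6-era signature, which takes a filter and a blur argument; the filenames are hypothetical):

```c
#include <stdio.h>
#include <string.h>
#include <magick/MagickCore.h>   /* IM6 header path */

int main(void)
{
    MagickCoreGenesis("resize_demo", MagickFalse);
    ExceptionInfo *exception = AcquireExceptionInfo();

    ImageInfo *info = CloneImageInfo((ImageInfo *) NULL);
    (void) strcpy(info->filename, "input.png");   /* hypothetical 800x600 input */
    Image *image = ReadImage(info, exception);
    if (image == (Image *) NULL)
        return 1;

    /* LanczosFilter is a common high-quality choice when shrinking;
       blur 1.0 means no extra blurring of the filter. */
    Image *resized = ResizeImage(image, 400, 300, LanczosFilter, 1.0, exception);
    if (resized == (Image *) NULL)
        return 1;

    (void) strcpy(resized->filename, "output.png");
    WriteImage(info, resized);

    resized = DestroyImage(resized);
    image = DestroyImage(image);
    info = DestroyImageInfo(info);
    exception = DestroyExceptionInfo(exception);
    MagickCoreTerminus();
    return 0;
}
```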