Regression in processing of -limit size options
Posted: 2012-03-02T15:28:02-07:00
We recently upgraded from 6.7.2-2 to 6.7.4-2 and experienced a painful regression. We use -limit to control memory usage like so:
convert -monitor -limit memory 200mb -limit map 200mb foo.jpg -flatten -strip \( +clone -resize 1500x1000 -quality 75 -write foo-large.jpg +delete \) -thumbnail 100x75 -unsharp 0x0.8 -quality 65 foo-thumb.jpg

On 6.7.2 this ran in ~3s. On 6.7.4 it took ~40s.
We used -monitor and noticed it was the resize that was going slow, which led us down the rabbit hole of suspecting OpenMP. We tried recompiling our same version without OpenMP and hit the same bug. The latest version of ImageMagick showed it too.
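In hindsight, recompiling probably wasn't necessary just to rule out OpenMP; if I have it right, threading can be capped at runtime on an OpenMP build:

# Cap this one invocation at a single thread
convert -limit thread 1 -monitor foo.jpg -resize 1500x1000 foo-large.jpg

# Or via the environment
MAGICK_THREAD_LIMIT=1 convert -monitor foo.jpg -resize 1500x1000 foo-large.jpg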
We started out by changing our 200mb to 500mb, then 1500mb. It still went SLOW.

Next we tried to figure out how much memory was actually being used by the "broken" resize in the new version, so we copy/pasted the default map limit from:

$ convert -list resource
File       Area      Memory        Map       Disk   Thread        Time
-----------------------------------------------------------------------
 768   3.7585GB   1.7502GiB  3.5004GiB  unlimited        2   unlimited
That made me think something was weird with the processing of the -limit map option, since if we removed just "-limit map 200mb" it would get fast again.
So, we copy/pasted the 3.5004GiB and started reducing the number, expecting it to slow down at some point. Once we got the number down to below 1GB it was still going fast, whereas -limit 1000mb (the same amount, but with a lowercase suffix) was slow. So I tried this:
convert -monitor -limit memory 200MB -limit map 200MB foo.jpg -flatten -strip \( +clone -resize 1500x1000 -quality 75 -write foo-large.jpg +delete \) -thumbnail 100x75 -unsharp 0x0.8 -quality 65 foo-thumb.jpg

It was fast.
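Here's a quicker way to see how a given suffix is parsed, without timing a full resize. It assumes -limit takes effect before -list resource when both appear on the same command line (options are processed left to right):

convert -limit map 200MB -list resource   # check the Map column
convert -limit map 200mb -list resource   # compare what the lowercase suffix yields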
5 hours of work with 2 guys to figure this out. Ouch.
So anyway, it looks like there is a regression (or an undocumented change) in how -limit parses its size arguments.
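If anyone wants to poke at this without editing scripts, the docs say the same limits can also come from the environment, e.g. MAGICK_MEMORY_LIMIT and MAGICK_MAP_LIMIT:

MAGICK_MEMORY_LIMIT=200MB MAGICK_MAP_LIMIT=200MB \
  convert -monitor foo.jpg -resize 1500x1000 foo-large.jpg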
Enjoy,
Alan