Multiple instances of convert actually slower than one

Running two instances of convert on my dual-core machine (2 GHz Core 2 Duo, 2 GB RAM) is actually slower than running one instance and processing the two images one after the other: 6 minutes serially versus 8 minutes in parallel. I am using Q8. The images being resized and cropped are large (2580x1915, 847 KB as PNG).
Is there something I am doing wrong? I am not passing any -limit or other hint parameters, just the resize and crop operations.
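For reference, the commands look roughly like this (the filenames and geometry below are placeholders, not my exact values):

convert input.png -resize 1290x957 -crop 1024x768+0+0 output.png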
The only thing I can think of is that the processor is maxed out with just one image, leaving no headroom for the second, but that is surprising if a single instance of IM is chewing up a 2 GHz dual-core processor.
It cannot be memory, as there is plenty of it, right?
Are there any best practices for running multiple instances of convert on one machine that I am not following? I did a bunch of searches but turned up nothing I could use.
Any help is very much appreciated.
Re: Multiple instances of convert actually slower than one
Most likely the instances run slower because the pixel cache was pushed to disk and there was I/O contention. Add -debug cache to your command lines and see whether the pixel cache is on disk or in memory.
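For example (filenames here are placeholders):

convert -debug cache input.png -resize 1290x957 -crop 1024x768+0+0 output.png

The debug events are written to stderr; look for whether the cache is allocated in memory or backed by a disk file. If it is going to disk, you can try raising the limits with something like -limit memory 1GB -limit map 2GB so each instance keeps its pixel cache in RAM (these values are illustrative; they must fit within your 2 GB total across both instances).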