However, I started using ImageMagick with one purpose in mind: creating composites of hundreds or thousands of JPGs at once. In theory this should be pretty simple, but in practice I'm having a lot of trouble. I've searched the forums and found a few examples of different methods, but most of them deal with a dozen or two JPGs, not thousands.
The problem I've run into is that I run out of memory before the process can complete. I have tried both
Code:
composite -blend 50 *.jpg result.jpg
and
Code:
convert *.jpg -average average.jpg
In my mind, this operation shouldn't require much memory. If you treat an image as a 3-dimensional matrix (width × height × 3 color channels, at 1 byte per channel), then a 5-megapixel image works out to 5,000,000 × 3 = 15 MBytes, so loading one raw image should take about 15 megs. However, the images don't all need to be loaded and stored at once. Instead, load the next image, add its individual pixel values into the running totals, unload it, load the next, and repeat. Eventually you end up with the sum of every image's pixel values in a single 3-D matrix. Finally, divide each matrix value by the total number of images and voila! You have a composite image.
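Here's a rough sketch of that accumulate-and-divide idea in Python with Pillow and NumPy (my own illustration, not anything ImageMagick actually does internally; the filenames are made up, and it assumes all the JPGs have the same dimensions):
Code:
# Streaming average: one image in memory at a time, plus one running sum.
import glob
import numpy as np
from PIL import Image

files = sorted(glob.glob("*.jpg"))
total = None  # running per-pixel, per-channel sum

for path in files:
    # Load one image and widen to 32 bits so the sum can't overflow.
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.uint32)
    if total is None:
        total = np.zeros_like(img)  # uint32 accumulator, same shape as an image
    total += img  # add this image's pixel values, then it can be freed

# Divide by the image count and convert back to 8 bits per channel.
average = (total / len(files)).round().astype(np.uint8)
Image.fromarray(average).save("composite.jpg")
(I used 4 bytes per channel here for simplicity, which is slightly more than the 7-bytes-per-pixel figure worked out below, but still tiny: a 60 MByte accumulator plus one 15 MByte image at a time.)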
<Warning: math follows>Using this method, the memory required to sum 1000 images would be 5,000,000 × 7 = 35 MBytes. We multiply by 7 instead of 3 because the worst-case sum per channel is 255 × 1000 = 255,000, and 2^18 = 262,144 is enough to hold that, so 18 bits per channel suffice. 18 bits times 3 channels (R, G, and B) gives 54 bits per pixel, and 7 bytes is the smallest whole number of bytes that provides 54 bits.</Warning>
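The same arithmetic as a quick sanity check (plain Python, rounding up to whole bytes):
Code:
# Sanity check of the bit math above.
assert 255 * 1000 <= 2**18              # 18 bits/channel hold 1000 summed 8-bit values
bits_per_pixel = 18 * 3                 # R, G, B -> 54 bits per pixel
bytes_per_pixel = (bits_per_pixel + 7) // 8   # round up -> 7 bytes
print(5_000_000 * bytes_per_pixel)      # 35,000,000 bytes, i.e. ~35 MBytes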
So, to me it seems that this type of operation should be very lightweight memory-wise, which means that either I'm not using the best ImageMagick method for the job, or there is room for some significant optimization in the code.
I'm hoping that someone here with more experience can tell me where (or if) I'm going wrong.
Thank you in advance.